Quantum key distribution and authentication: Separating facts from myths

Key exchange protocols and authentication mechanisms solve distinct problems and must be integrated in a secure communication system.

Quantum key distribution (QKD) is a technology that leverages the laws of quantum physics to securely share secret information between distant communicating parties. With QKD, quantum-mechanical properties ensure that if anyone tries to tamper with the secret-sharing process, the communicating parties will know. Keys established through QKD can then be used in traditional symmetric encryption or with other cryptographic technologies to secure communications.

“Record now, decrypt later” (RNDL) is a cybersecurity risk arising from advances in quantum computing. The term refers to the situation in which attackers record encrypted data today, even though they cannot decrypt it immediately. They store this data with the expectation that future quantum computers will be powerful enough to break the cryptographic algorithms currently securing it. Sensitive information such as financial records, healthcare data, or state secrets could be at risk, even years after it was transmitted.

Mitigating RNDL requires adopting quantum-resistant cryptographic methods, such as post-quantum cryptography (PQC) and/or quantum key distribution (QKD), to ensure confidentiality against future quantum advancements. AWS has invested in the migration to post-quantum cryptography to protect the confidentiality, integrity, and authenticity of customer data.

Quantum communication is important enough that in 2022, three of its pioneers won the Nobel Prize in physics. However, misconceptions about QKD’s role persist. One of them is that QKD lacks practical value because it “doesn’t solve the authentication problem”. This view can obscure the broad benefits that QKD brings to secure communications when integrated properly into existing systems.

QKD should be viewed as a complement to — rather than a replacement for — existing cybersecurity frameworks. Functionally, QKD solves the same problem solved by other key establishment protocols, including the well-known Diffie-Hellman (DH) method and the module-lattice-based key encapsulation mechanism (ML-KEM), the standard recently ratified by NIST as FIPS 203 — but it does so in a fundamentally different way. Like those methods, QKD depends on strong authentication to defend against threats such as man-in-the-middle attacks, where an attacker poses as one of the communicating parties.
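To make “complement, not replacement” concrete, the sketch below derives a session key by combining a QKD-derived secret with an ML-KEM shared secret through a key derivation function, so that confidentiality survives even if one of the two sources is later broken. This is a minimal illustration under stated assumptions, not an AWS or standardized construction: the two secrets are mocked with random bytes, and the HKDF `info` label is made up.

```python
# A minimal sketch of hybrid key derivation, assuming a QKD-derived secret
# and an ML-KEM shared secret are already available (mocked here with
# random bytes). Requires the third-party "cryptography" package.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

qkd_secret = os.urandom(32)    # placeholder: key material from a QKD link
mlkem_secret = os.urandom(32)  # placeholder: ML-KEM (FIPS 203) shared secret

# Bind both secrets into one session key: an attacker must break *both*
# key establishment methods to recover it.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"example-hybrid-qkd-mlkem",  # hypothetical context label
).derive(qkd_secret + mlkem_secret)
```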

In short, key exchange protocols and authentication mechanisms are distinct security primitives that solve different problems and must be integrated in a secure communication system.

The challenge, then, is not to give QKD an authentication mechanism but to understand how it can be integrated with other established mechanisms to strengthen the overall security infrastructure. As quantum technologies continue to evolve, it’s important to shift the conversation from skepticism about authentication to consideration of how QKD can be thoughtfully and practically implemented to address today’s and tomorrow’s cybersecurity needs — such as the need to mitigate the “record now, decrypt later” (RNDL) attack (see sidebar).

Understanding the role of authentication in QKD

When discussing authentication in the context of QKD, we focus on the classical digital channel that the parties use to exchange information about their activities on the quantum channel. This isn’t about user authentication methods, such as logging in with passwords or biometrics, but rather about authenticating the communicating entities and the data exchanged. Entity authentication ensures that the parties are who they claim to be; data authentication guarantees that the information received is the same as what was sent by the claimed source. QKD protocols include a classical-communication component that uses both authentication methods to assure the overall security of the interaction.

Entity authentication

Entity authentication is the process by which one party (the "prover") asserts its identity, and another party (the "verifier") validates that assertion. This typically involves a registration step, in which the verifier obtains reliable identification information about the prover, as a prelude to any further authentication activity. The purpose of this step is to establish a “root of trust” or “trust anchor”, ensuring that the verifier has a trusted baseline for future authentications.

Several entity authentication methods are in common use, each based on a different type of trust anchor:

  • Public-key-infrastructure (PKI) authentication: In this method, a prover’s certificate is issued by a trusted certificate authority (CA). The verifier relies on this CA, or the root CA in a certificate chain, to establish trust. The certificate acts as the trust anchor that links the prover’s identity to its public key.
  • PGP-/GPG-based (web of trust) authentication: Here, trust is decentralized. A prover’s public key is trusted if it has been vouched for by one or more trusted third parties, such as a mutual acquaintance or a public-key directory. These third parties serve as the trust anchors.
  • Pre-shared-key-based (PSK) authentication: In this case, both the prover and the verifier share a secret key that was exchanged via an offline or other secure out-of-band method. The trust anchor is the method of securely sharing this key a priori, such as a secure courier or another trusted channel.

These trust anchors form the technical backbone of all authentication systems. However, all entity authentication methods rest on a fundamental assumption: the prover is either the only party that holds the critical secret data (e.g., the prover’s private key in PKI or PGP) or the only other party that shares the secret with the verifier (PSK). If this assumption is broken — e.g., the prover’s private key is stolen or compromised, or the PSK is leaked — the entire authentication process can fail.
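To make the PSK case concrete, here is a minimal challenge-response sketch using only Python’s standard library. The pre-shared key, message framing, and label are illustrative assumptions, not a production protocol.

```python
# A minimal sketch of PSK-based entity authentication via challenge-response,
# using only Python's standard library. The key, framing, and label are
# illustrative, not a production protocol.
import hashlib
import hmac
import os

psk = os.urandom(32)  # shared out of band in advance; this is the trust anchor

# Verifier: send a fresh, unpredictable challenge to the prover.
challenge = os.urandom(16)

# Prover: demonstrate knowledge of the PSK without revealing it.
response = hmac.new(psk, b"entity-auth|" + challenge, hashlib.sha256).digest()

# Verifier: recompute the expected response and compare in constant time.
expected = hmac.new(psk, b"entity-auth|" + challenge, hashlib.sha256).digest()
assert hmac.compare_digest(expected, response)  # prover must hold the PSK
```

Because the challenge is fresh for every run, a recorded response cannot be replayed later; only a party holding the PSK can answer correctly.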

Data authentication

Data authentication, also known as message authentication, ensures both the integrity and authenticity of the transmitted data. This means the data received by the verifier is exactly what the sender sent, and it came from a trusted source. As with entity authentication, the foundation of data authentication is the secure management of secret information shared by the communicating parties.

The most common approach to data authentication is symmetric cryptography, where both parties share a secret key. A keyed message authentication code (MAC), such as HMAC or GMAC, is used to compute a unique tag for the transmitted data. This tag allows the receiver to verify that the data hasn’t been altered during transit. The security of this method depends on the collision resistance of the chosen MAC algorithm — that is, the computational infeasibility of finding two different messages that yield the same tag — and the confidentiality of the shared key. The authentication tag ensures data integrity, while the secret key guarantees the authenticity of the data origin.
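As a concrete illustration, here is a minimal HMAC-SHA256 tag-and-verify sketch using Python’s standard library; the key and message are placeholders.

```python
# A minimal sketch of symmetric data authentication with HMAC-SHA256, using
# Python's standard library; the key and message are placeholders.
import hashlib
import hmac
import os

key = os.urandom(32)                # secret key shared by sender and receiver
message = b"transfer 100 to alice"  # data to be authenticated

tag = hmac.new(key, message, hashlib.sha256).digest()  # computed by the sender

# Receiver recomputes the tag over the data as received; any modification in
# transit changes the tag. compare_digest avoids timing side channels.
received = message
valid = hmac.compare_digest(
    tag, hmac.new(key, received, hashlib.sha256).digest()
)
print("authentic" if valid else "tampered")
```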

An alternative method uses asymmetric cryptography with digital signatures. In this approach, the sender generates a signature using a private key and the data itself. The receiver, or anyone else, can verify the signature’s authenticity using the sender’s public key. This method provides data integrity through the signature algorithm, and it assures data origin authenticity as long as only the sender holds the private key. In this case, the public key serves as a verifiable link to the sender, ensuring that the signature is valid.
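For comparison, here is the same idea with digital signatures, sketched with Ed25519 from the third-party `cryptography` package; the scheme and payload are chosen purely for illustration, and any standard signature algorithm would serve.

```python
# A minimal sketch of data authentication with digital signatures, using
# Ed25519 from the third-party "cryptography" package as one example scheme.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # held only by the sender
public_key = private_key.public_key()       # distributed to any verifier

message = b"firmware image v1.2.3"          # placeholder payload
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)   # raises if data or signature altered
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```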

In both the symmetric and the asymmetric approaches, successful data authentication depends on effective entity authentication. Without knowing and trusting the identity of the sender, the verification of the data’s authenticity is compromised. Therefore, the strength of data authentication is closely tied to the integrity of the underlying entity authentication process.

Authentication in QKD

The first quantum cryptography protocol, known as BB84, was developed by Bennett and Brassard in 1984. It remains foundational to many modern QKD technologies, although notable advancements have been made since then.

QKD protocols are unique because they rely on the fundamental principles of quantum physics, which allow for “information-theoretic security.” This is distinct from the security provided by computational complexity. In the quantum model, any attempt to eavesdrop on the key exchange is detectable, providing a layer of security that classical cryptography cannot offer.
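A toy simulation conveys the idea. In BB84, an intercept-and-resend eavesdropper disturbs roughly a quarter of the sifted bits, which the parties can detect by comparing a sample of their keys over the authenticated classical channel. The sketch below is an idealized illustration (no channel noise, loss, or post-processing such as privacy amplification), not a faithful model of a real system.

```python
# A toy BB84 sifting-and-error-estimation simulation. An intercept-and-resend
# eavesdropper pushes the sifted-key error rate to about 25%, versus ~0%
# without eavesdropping. Idealized: no channel noise, loss, or post-processing.
import random

N = 20_000
alice_bits = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]
bob_bases = [random.choice("+x") for _ in range(N)]

EAVESDROP = True
kept = errors = 0
for i in range(N):
    bit, basis = alice_bits[i], alice_bases[i]
    if EAVESDROP:  # Eve measures in a random basis and resends her result
        eve_basis = random.choice("+x")
        if eve_basis != basis:
            bit = random.randint(0, 1)  # wrong basis -> random outcome
        basis = eve_basis
    # Bob measures the arriving state in his own random basis.
    bob_bit = bit if bob_bases[i] == basis else random.randint(0, 1)
    # Sifting: keep only positions where Alice's and Bob's bases agree.
    if bob_bases[i] == alice_bases[i]:
        kept += 1
        errors += bob_bit != alice_bits[i]

print(f"sifted bits: {kept}, error rate: {errors / kept:.1%}")
```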

QKD relies on an authenticated classical communication channel to ensure the integrity of the data exchanged between parties, but it does not depend on the confidentiality of that classical channel. (This is why RNDL is not an effective attack against QKD.) Authentication guarantees only that the entities establishing keys are legitimate, protecting against man-in-the-middle attacks.

Currently, several commercial QKD products are available, many of which implement the original BB84 protocol and its variants. These solutions offer secure key distribution in real-world applications, and they all pair with strong authentication processes to ensure the communication remains secure from start to finish. By integrating both technologies, organizations can build communication infrastructures capable of withstanding both classical and quantum threats.

Authentication in QKD bootstrap: A manageable issue

During the initial bootstrap phase of a QKD system, the authentic classical channel is established using traditional authentication methods based on PKI or PSK. As discussed earlier, all of these methods ultimately rely on the establishment of a trust anchor.

While confidentiality may need to be maintained for an extended period (sometimes decades), authentication is a real-time process. It verifies identity claims and checks data integrity in the moment. Compromising an authentication mechanism at some future point will not affect past verifications. Once an authentication process is successfully completed, the opportunity for an adversary to tamper with it has passed. That is, even if, in the future, a specific authentication mechanism used in QKD is broken by a new technology, QKD keys generated prior to that point are still safe to use, because no adversary can go back in time to compromise past QKD key generation.

This means that the reliance on traditional, non-QKD authentication methods presents an attack opportunity only during the bootstrap phase, which typically lasts just a few minutes. Given that this phase is so short compared with the overall life cycle of a QKD deployment, the potential risks posed by using traditional authentication mechanisms are relatively minor.

Authentication after QKD bootstrap: Not a new issue

Once the bootstrap phase is complete, the QKD devices will have securely established shared keys. These keys can then be used for PSK-based authentication in future communications. In essence, QKD systems can maintain the authenticated classical communication channel by utilizing a small portion of the very keys they generate, ensuring continued secure communication beyond the initial setup phase.
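A minimal sketch of this bootstrapping loop follows, with an illustrative block size and split that are not taken from any specific product.

```python
# A minimal sketch of sustaining PSK-based authentication from QKD output:
# reserve a slice of each freshly generated key block as the authentication
# key for the next round. Block size and split are illustrative.
import os

AUTH_KEY_LEN = 32  # bytes reserved per block for classical-channel auth

def split_key_block(qkd_block: bytes) -> tuple[bytes, bytes]:
    """Return (next_auth_psk, data_key_material) from one QKD key block."""
    return qkd_block[:AUTH_KEY_LEN], qkd_block[AUTH_KEY_LEN:]

qkd_block = os.urandom(256)  # placeholder for a freshly generated QKD block
next_auth_psk, data_keys = split_key_block(qkd_block)
# next_auth_psk authenticates the next round's classical channel;
# data_keys feed the application's symmetric encryption.
```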

It is important to note that if one of the QKD devices is compromised locally for whatever reason, the entire system’s security could be at risk. However, this is not a vulnerability unique to QKD: any cryptographic system faces similar challenges when the integrity of an endpoint is compromised. In this respect, QKD is no more susceptible to endpoint compromise than any other cryptographic system.

Overcoming key challenges to QKD’s role in cybersecurity

So far, we have focused on dispelling the myths about QKD’s authentication needs. Next, we discuss several other challenges to using QKD in practice.

Bridging the gap between QKD theory and implementation

While QKD protocols are theoretically secure, there remains a significant gap between theory and real-world implementations. Unlike traditional cryptographic methods, which rely on well-understood algorithms that can be thoroughly reviewed and certified, QKD systems depend on specialized hardware. This introduces complexity, as the process of reviewing and certifying QKD hardware is not yet mature.

In conventional cryptography, risks like side-channel attacks — which use runtime clues such as memory access patterns or data retrieval times to deduce secrets — are well understood and mitigated through certification processes. QKD systems are following a similar path. The European Telecommunications Standards Institute (ETSI) has made a significant move by introducing the Common Criteria Protection Profile for QKD, the first international effort to create a standardized certification framework for these systems. ISO/IEC has also published standards on security requirements and test and evaluation methods for QKD. These represent crucial steps in building the same level of trust that traditional cryptography enjoys.

Once the certification process is fully established, confidence in QKD’s hardware implementations will continue to grow, enabling the cybersecurity community to embrace QKD as a reliable, cutting-edge solution for secure communication. Until then, the focus remains on advancing the review and certification processes to ensure that these systems meet the highest security standards.

QKD deployment considerations

One of the key challenges in the practical deployment of QKD is securely transporting the keys generated by QKD devices to their intended users. While QKD is accepted as a robust mechanism for establishing keys between the QKD devices themselves, it does not cover the secure delivery of keys from a QKD device to the end user (or key consumer).

A schematic representation of two endpoints — site A and site B — that want to communicate safely. The top line represents the user traffic being protected, and the bottom lines are the channels required to establish secure communication. An important practical consideration is how to transmit a key between a QKD device and an end user within an endpoint.

This issue arises whether the QKD system is deployed within a large intranet or a small local-area network. In both cases, the keys must be transported over a non-QKD system. The standard deployment requirement is that the key delivery from the QKD system to the key consumer occurs “within the same secure site”, and the definition of a “secure site” is up to the system operator.

The best practice is to make the boundary of the secure site as small as is practical. One extreme option is to remove the need for transporting keys over classical networks entirely, by putting the QKD device and the key user’s computing hardware in the same physical unit. This eliminates the need for traditional network protocols for key transport and realizes the full security benefits of QKD without external dependency. In cases where the extreme option is infeasible or impractical, the secure site should cover only the local QKD system and the intended key consumers.

Conclusion

QKD-generated keys will remain secure even when quantum computers emerge, and communications using these keys are not vulnerable to RNDL attacks. For QKD to reach its full potential, however, the community must collaborate closely with the broader cybersecurity ecosystem, particularly in areas like cryptography and governance, risk, and compliance (GRC). By integrating the insights and frameworks established in these fields, QKD can overcome its current challenges in trust and implementation.

This collective effort is essential to ensure that QKD becomes a reliable and integral part of secure communication systems. As these collaborations deepen, QKD will be well-positioned to enhance existing security frameworks, paving the way for its adoption across industries and applications.
