Latency from post-quantum cryptography shrinks as data increases

Using time to last byte — rather than time to first byte — to assess the effects of data-heavy TLS 1.3 on real-world connections yields more encouraging results.

The risk that a quantum computer might break cryptographic standards widely used today has ignited numerous efforts to standardize quantum-resistant algorithms and introduce them into transport encryption protocols like TLS 1.3. The choice of post-quantum algorithm will naturally affect TLS 1.3’s performance. So far, studies of those effects have focused on the “handshake time” required for two parties to establish a quantum-resistant encrypted connection, known as the time to first byte.

Although these studies have been important in quantifying increases in handshake time, they do not provide a full picture of the effect of post-quantum cryptography on real-world TLS 1.3 connections, which often carry sizable amounts of data. At the 2024 Workshop on Measurements, Attacks, and Defenses for the Web (MADweb), we presented a paper advocating time to last byte (TTLB) as a metric for assessing the total impact of data-heavy, quantum-resistant algorithms such as ML-KEM and ML-DSA on real-world TLS 1.3 connections. Our paper shows that the new algorithms will have a much lower net effect on connections that transfer sizable amounts of data than they do on the TLS 1.3 handshake itself.

Post-quantum cryptography

TLS 1.3, the latest version of the Transport Layer Security protocol, is used to negotiate and establish secure channels that encrypt and authenticate data passing between a client and a server. TLS 1.3 is used in numerous Web applications, including e-banking and streaming media.


Asymmetric cryptographic algorithms, such as those used in TLS 1.3, depend for their security on the difficulty of the discrete-logarithm or integer factorization problems, which a cryptanalytically relevant quantum computer could solve efficiently. The US National Institute of Standards and Technology (NIST) has been working on standardizing quantum-resistant algorithms and has selected the Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM) for key exchange and the Module-Lattice-Based Digital Signature Algorithm (ML-DSA) for signatures, or cryptographic authentication.

As these algorithms have kilobyte-size public keys, ciphertexts, and signatures — versus the 50- to 400-byte sizes of the existing algorithms — they would inflate the amount of data exchanged in a TLS handshake. A number of works have compared handshake time using traditional TLS 1.3 key exchange and authentication to that using post-quantum (PQ) key exchange and authentication.
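
To get a rough sense of the inflation, consider a back-of-envelope tally of the asymmetric-crypto material alone. This is a sketch using the published parameter sizes from FIPS 203 (ML-KEM-768) and FIPS 204 (ML-DSA-65), not the paper's exact accounting; real handshakes add certificate chains, extensions, and record framing, and a hybrid key exchange carries the classical shares as well:

```python
# Back-of-envelope tally of the asymmetric-crypto bytes in a TLS 1.3
# handshake. Parameter sizes are from FIPS 203 (ML-KEM-768) and
# FIPS 204 (ML-DSA-65); certificate chains, extensions, and record
# framing come on top of these.

classical = {
    "client + server X25519 key shares": 2 * 32,
    "ECDSA P-256 signature (DER-encoded)": 72,
}

post_quantum = {
    "ML-KEM-768 encapsulation key": 1184,
    "ML-KEM-768 ciphertext": 1088,
    "ML-DSA-65 public key": 1952,
    "ML-DSA-65 signature": 3309,
}

for label, parts in (("classical", classical), ("post-quantum", post_quantum)):
    print(f"{label}: {sum(parts.values())} bytes")
```

Even this conservative tally (136 bytes versus 7,533 bytes) is a roughly 55-fold increase in cryptographic payload.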

These comparisons were useful for quantifying the overhead that each new algorithm introduces to the time to first byte, or completion of the handshake protocol. But they ignored the data transfer time over the secure connection, which, together with the handshake time, constitutes the total delay before the application can process the data. That total time, from the start of the connection to the end of the data transfer, is the time to last byte (TTLB). How much TTLB slowdown is acceptable depends heavily on the application.
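
To make the two metrics concrete, here is a minimal sketch (ours, not the paper's measurement harness) that times a single HTTPS fetch with Python's standard ssl and socket modules. It approximates time to first byte by the arrival of the first response byte rather than by strict handshake completion, and the host is a placeholder:

```python
import socket
import ssl
import time

def ttfb_ttlb(host: str, port: int = 443, path: str = "/") -> tuple[float, float]:
    """Time one HTTPS GET: returns (TTFB, TTLB) in seconds.

    TTFB here is the arrival of the first response byte (a close proxy
    for handshake completion plus one round trip); TTLB is measured when
    the server finishes sending and closes the connection.
    """
    ctx = ssl.create_default_context()
    start = time.perf_counter()
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            request = (f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                       "Connection: close\r\n\r\n")
            tls.sendall(request.encode())
            first = None
            while chunk := tls.recv(65536):
                if first is None:
                    first = time.perf_counter() - start
            last = time.perf_counter() - start
    return first, last

ttfb, ttlb = ttfb_ttlb("example.com")  # placeholder host
print(f"TTFB: {ttfb * 1000:.1f} ms, TTLB: {ttlb * 1000:.1f} ms")
```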

Experiments

We designed our experiments to simulate various network conditions and measured the TTLB of classical and post-quantum algorithms in TLS 1.3 connections where the client makes a small request and the server responds with hundreds of kilobytes (KB) of data. We used Linux namespaces in an Ubuntu 22.04 virtual-machine instance. The namespaces were interconnected using virtual Ethernet interfaces. To emulate the “network” between the namespaces, we used the Linux kernel’s netem utility, which can introduce variable network delays, bandwidth fluctuations, and packet loss between the client and server.
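
For readers who want to reproduce the topology, the setup boils down to a handful of iproute2 and tc commands; here is a condensed sketch driven from Python (namespace names, addresses, and the netem parameters are illustrative, and the commands require root privileges):

```python
import subprocess

def sh(cmd: str) -> None:
    """Run a shell command, raising on failure (root required)."""
    subprocess.run(cmd, shell=True, check=True)

# Two namespaces joined by a virtual Ethernet (veth) pair.
sh("ip netns add client")
sh("ip netns add server")
sh("ip link add veth-c type veth peer name veth-s")
sh("ip link set veth-c netns client")
sh("ip link set veth-s netns server")
sh("ip netns exec client ip addr add 10.0.0.1/24 dev veth-c")
sh("ip netns exec server ip addr add 10.0.0.2/24 dev veth-s")
sh("ip netns exec client ip link set dev veth-c up")
sh("ip netns exec server ip link set dev veth-s up")

# Shape the emulated network with netem: e.g., 200ms RTT (100ms each
# direction), 1Mbps bandwidth, 1% packet loss.
for ns, dev in (("client", "veth-c"), ("server", "veth-s")):
    sh(f"ip netns exec {ns} tc qdisc add dev {dev} root "
       f"netem delay 100ms loss 1% rate 1mbit")
```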

The experimental setup, with client and server Linux namespaces and netem-emulated network conditions.

Our experiments had several configurable parameters that allowed us to compare the effects of the PQ algorithms on TTLB under stable, unstable, fast, and slow network conditions (a sketch of the resulting parameter grid follows the list):

  • TLS key exchange mechanism (classical ECDH or ECDH+ML-KEM post-quantum hybrid)
  • TLS certificate chain size, corresponding to classical RSA or ML-DSA certificates
  • TCP initial congestion window (initcwnd)
  • Network delay between client and server, or round-trip time (RTT)
  • Bandwidth between client and server
  • Loss probability per packet
  • Amount of data transferred from the server to the client
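
Taken together, these parameters define a grid of experiment configurations. As a rough illustration (the values below are ours and need not match the paper's exact sweep):

```python
from itertools import product

# Illustrative parameter values; the paper's exact sweep may differ.
key_exchange = ["ecdh", "ecdh+ml-kem"]   # classical vs. PQ hybrid
cert_chain_kb = [8, 16]                  # RSA-like vs. ML-DSA-like chain
initcwnd = [20]
rtt_ms = [35, 200]
bandwidth = ["1mbit", "1gbit"]
loss_pct = [0, 1, 3, 10]
data_kib = [0, 50, 100, 200]

grid = list(product(key_exchange, cert_chain_kb, initcwnd,
                    rtt_ms, bandwidth, loss_pct, data_kib))
print(f"{len(grid)} configurations")  # 2*2*1*2*2*4*4 = 512
```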

Results

The results of our testing are thoroughly analyzed in the paper. They essentially show that a few extra KB in the TLS 1.3 handshake due to the post-quantum public keys, ciphertexts, and signatures will not be noticeable in connections transferring hundreds of KB or more. Connections that transfer less than 10-20 KB of data will probably be more affected by the new data-heavy handshakes.

Figure 1: Percentage increase in TLS 1.3 handshake time between traditional and post-quantum TLS 1.3 connections. Bandwidth = 1Mbps; loss probability = 0%, 1%, 3%, and 10%; RTT = 35ms and 200ms; TCP initcwnd = 20.

Figure 1 shows the percentage increase in the duration of the TLS 1.3 handshake for the 50th, 75th, and 90th percentiles of the aggregate datasets collected at 1Mbps bandwidth; 0%, 1%, 3%, and 10% loss probability; and 35-millisecond and 200-millisecond RTTs. We can see that a handshake carrying an ML-DSA-size (16KB) certificate chain takes almost twice as long as one carrying an 8KB chain. Keeping the volume of ML-DSA authentication data low would therefore significantly speed up post-quantum handshakes over low-bandwidth connections.
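
A quick back-of-envelope calculation (ours, not a measurement from the paper) suggests why chain size matters so much here: at 1Mbps, transmission delay alone roughly doubles when the chain doubles.

```python
BW_BPS = 1_000_000  # 1Mbps emulated link

for chain_kb in (8, 16):
    wire_ms = chain_kb * 1024 * 8 / BW_BPS * 1000
    print(f"{chain_kb}KB certificate chain: ~{wire_ms:.0f}ms just to transmit")
# -> ~66ms vs. ~131ms, before counting round trips or retransmissions
```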

Figure 2: Percentage increase in TTLB between existing and post-quantum TLS 1.3 connections at 0% loss probability. Bandwidth = 1Gbps; RTT = 35ms; TCP initcwnd = 20.

Figure 2 shows the percentage increase in TTLB for post-quantum relative to existing connections, for all percentiles and different data sizes, at 0% loss and 1Gbps bandwidth. Although the slowdown is already low (∼3%) at 0 kibibytes (KiB; 1 KiB = 1,024 bytes) of server data, which is equivalent to the handshake alone, it drops further (to ∼1%) as the amount of data from the server increases. At the 90th percentile, the slowdown is slightly lower.

Figure 3: Percentage increase in TTLB between existing and post-quantum TLS 1.3 connections at 0% loss probability. Bandwidth = 1Mbps; RTT = 200ms; TCP initcwnd = 20.

Figure 3 shows the percentage increase in the TTLB between existing and post-quantum TLS 1.3 connections carrying 0-200KiB of data from the server for each percentile at 1Mbps bandwidth, 200ms RTT, and 0% loss probability. We can see that increases for the three percentiles are almost identical. They start high (∼33%) at 0KiB from the server, but as the data size from the server increases, they drop to ∼6% because the handshake data size is amortized over the connection.
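
The amortization is easy to model crudely: if the post-quantum handshake adds a roughly fixed number of extra bytes, its share of a bandwidth-limited transfer shrinks as the application data grows. A toy model (the 15KB inflation figure and the two-round-trip setup cost are our assumptions, not measurements from the paper) reproduces the shape of figure 3:

```python
BW_BPS = 1_000_000          # 1Mbps
RTT_S = 0.200               # 200ms round-trip time
EXTRA_BYTES = 15 * 1024     # assumed PQ handshake inflation

def ttlb_s(data_kib: int, extra_bytes: int = 0) -> float:
    """Toy TTLB: two round trips (TCP + TLS setup) plus serialization."""
    return 2 * RTT_S + (data_kib * 1024 + extra_bytes) * 8 / BW_BPS

for data_kib in (0, 50, 100, 200):
    base = ttlb_s(data_kib)
    pq = ttlb_s(data_kib, EXTRA_BYTES)
    print(f"{data_kib:>3} KiB: +{(pq - base) / base * 100:.0f}%")
# -> +31%, +15%, +10%, +6%: close to the measured ~33% down to ~6%
```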

Figure 4: Percentage increase in TTLB between existing and post-quantum TLS 1.3 connections. Loss = 10%; bandwidth = 1Mbps; RTT = 200ms; TCP initcwnd = 20.

Figure 4 shows the percentage increase in TTLB between existing and post-quantum TLS 1.3 connections carrying 0-200 KiB of data from the server for each percentile at 1Mbps bandwidth, 200ms RTT, and 10% loss probability. It shows that at 10% loss, the TTLB increase settles between 20% and 30% for all percentiles. The same experiments at 35ms RTT produced similar results. Although a 20-30% increase may seem high, we note that re-running the experiments could sometimes yield smaller or larger percentage increases because of the general network instability of the scenario. Also, bear in mind that the TTLBs for the existing algorithm at 200KiB from the server, 200ms RTT, and 10% loss were 4,644ms, 7,093ms, and 10,178ms at the three percentiles, whereas their post-quantum-connection equivalents were 6,010ms, 8,883ms, and 12,378ms. At 0% loss, all three were 2,364ms. So although the TTLBs for the post-quantum connections increased by 20-30% relative to the conventional connections, the conventional connections were already impaired (by 97-331%) by network loss. An extra 20-30% is not likely to make much difference to an already highly degraded connection time.
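
Those impairment figures can be checked directly from the quoted TTLBs (a quick sanity check; rounding may differ by a point from the numbers above):

```python
baseline_ms = 2364                 # existing algorithm, 0% loss
lossy_ms = [4644, 7093, 10178]     # existing algorithm, 10% loss

for t in lossy_ms:
    pct = (t - baseline_ms) / baseline_ms * 100
    print(f"{t}ms vs. {baseline_ms}ms: +{pct:.0f}%")
# -> +96%, +200%, +331%: the ~97-331% loss-induced impairment cited above
```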

Figure 5: Percentage increase in TTLB between existing and post-quantum TLS 1.3 connections for 0% loss probability under “volatile network” conditions. Bandwidth = 1Gbps; RTT = 35ms; TCP initcwnd = 20.

Figure 5 shows the percentage increase in TTLB between existing and post-quantum TLS 1.3 connections for 0% loss probability and 0-200KiB data sizes transferred from the server. To model a highly volatile RTT, we used a Pareto-normal distribution with a mean of 35ms and 35/4ms jitter. We can see that the increase in post-quantum connection TTLB starts high at 0KiB server data and drops to 4-5%. As with previous experiments, the percentages were more volatile the higher the loss probabilities, but overall, the results show that even under “volatile network conditions” the TTLB drops to acceptable levels as the amount of transferred data increases.

Figure 6: TTLB cumulative distribution function for post-quantum TLS 1.3 connections. 200KiB from the server; RTT = 35ms; TCP initcwnd = 20.

To confirm the volatility under unstable network conditions, we plotted the TTLB cumulative distribution function (CDF) for post-quantum TLS 1.3 connections transferring 200KiB from the server (figure 6). We observe that under all types of volatile conditions (1Gbps and 5% loss, 1Mbps and 10% loss, Pareto-normal-distributed network delay), the TTLB begins to vary widely early in the measurement sample, which demonstrates that the total connection times are highly volatile. We made the same observation for TLS 1.3 handshake times under unstable network conditions.

Conclusion

This work demonstrated that the practical effect of data-heavy, post-quantum algorithms on TLS 1.3 connections is lower than their effect on the handshake itself. Low-loss, low- or high-bandwidth connections will see little impact from post-quantum handshakes when transferring sizable amounts of data. We also showed that although the effects of PQ handshakes could vary under unstable conditions with higher loss rates or high-variability delays, they stay within certain limits and drop as the total amount of transferred data increases. Additionally, we saw that unstable connections inherently provide poor completion times; a small latency increase due to post-quantum handshakes would not render them less usable than before. That said, trimming the amount of handshake data remains worthwhile, especially when little application data is sent relative to the size of the handshake messages.

For more details, please see our paper.
