The history of Amazon’s forecasting algorithm

The story of a decade-plus-long journey toward a unified forecasting model.

When a customer visits Amazon, there is an almost inherent expectation that the item they are searching for will be in stock. And that expectation is understandable — Amazon sells more than 400 million products in over 185 countries.

However, the sheer volume of products makes it cost-prohibitive to maintain surplus inventory levels for every product.

Historical patterns can be leveraged to set inventory levels for products with predictable consumption patterns, such as household staples like laundry detergent or trash bags. However, most products exhibit variability in demand due to factors beyond Amazon’s control.

Take the example of a book like Michelle Obama’s Becoming, or the recent proliferation of sweatsuits, which emerged as both a comfortable and a fashion-forward clothing option during 2020. It’s difficult to account for the steep spike in sales caused by a publicity tour featuring Oprah Winfrey and nearly impossible to foresee the effect COVID-19 would have on, among other things, stay-at-home clothing trends.

Today, Amazon’s forecasting team has drawn on advances in fields like deep learning, image recognition, and natural-language processing to develop a forecasting model that makes accurate decisions across diverse product categories. Arriving at this unified forecasting model hasn’t been the result of one “eureka” moment. Rather, it has been a decade-plus-long journey.

“When we started the forecasting team at Amazon, we had ten people and no scientists,” says Ping Xu, forecasting science director within Amazon’s Supply Chain Optimization Technologies (SCOT) organization. “Today, we have close to 200 people on our team. The focus on scientific and technological innovation has been key in allowing us to draw an accurate estimate of the immense variability in future demand and make sure that customers are able to fulfill their shopping needs on Amazon.”

In the beginning: A patchwork of models

Kari Torkkola, senior principal research scientist, has played a key role in driving the evolution of Amazon’s forecasting systems in his 12 years at the company.

“When I joined Amazon, the company relied on traditional time series models for forecasting,” says Torkkola.

Clockwise from top left: Ping Xu, forecasting science director; Kari Torkkola, senior principal research scientist; Dhruv Madeka, principal applied scientist; and Ruofeng Wen, senior applied scientist

Time series forecasting is a statistical technique that uses historical values and associated patterns to predict future activity. In 2008, Amazon’s forecasting system used standard textbook time series forecasting methods to make predictions.
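To make the idea concrete, here is a minimal sketch of one such textbook method, a seasonal-naive forecast that simply repeats the most recent season. The weekly-demand series and 52-week season below are invented for illustration and are not Amazon data or code.

```python
import numpy as np

def seasonal_naive_forecast(history, season_length, horizon):
    """Repeat the most recent full season to predict the next `horizon` steps."""
    history = np.asarray(history, dtype=float)
    last_season = history[-season_length:]
    reps = int(np.ceil(horizon / season_length))
    return np.tile(last_season, reps)[:horizon]

# Two years of invented weekly demand with a yearly cycle, forecast 8 weeks ahead.
weeks = np.arange(104)
demand = 50 + 20 * np.sin(2 * np.pi * weeks / 52) + np.random.default_rng(0).normal(0, 3, 104)
print(seasonal_naive_forecast(demand, season_length=52, horizon=8))
```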

The system produced accurate forecasts in scenarios where the time series was predictable and stationary. However, it was unable to produce accurate forecasts for new products with no prior history or for products with highly seasonal sales patterns. Amazon’s forecasting teams had to develop new methods to account for each of these scenarios.

So they set about developing an add-on component to model seasonal patterns in products such as winter jackets. Another specialized component solved for the effects of price elasticity, where products see spikes in demand due to price drops, while yet another component called the Distribution Engine modeled past errors to produce estimates of forecast distributions on top of point forecasts.

“There were multiple components, all of which needed our attention,” says Torkkola. “The system was incredibly hard to maintain. It gradually became clear that we needed to work towards developing a unified forecasting model.”

Enter the random forest

If the sheer number of components made the forecasting system laborious to maintain, matters were complicated even further by the need to route special forecasting cases, or even entire product groups, to specialized models that encoded expert knowledge.

Then Torkkola had a deceptively simple insight as he began working toward a unified forecasting model. “There are products across multiple categories that behave the same way,” he said.

For example, there is a clear delineation between new products and products with an established history. The forecast for a new video game or laptop can be generated, in part, from how similar products behaved when they launched in the past.

Torkkola extracted a set of features from information such as demand, sales, product category, and page views. He used these features to train a random forest model. Random forests are commonly used machine learning algorithms comprising a number of decision trees; the outputs of the individual trees are aggregated to provide a more stable and accurate prediction.

“By pooling everything together in one model, we gained statistical strength across multiple categories,” Torkkola says.
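As a rough illustration of that pooling idea, and not of Amazon’s actual system, the sketch below trains a single scikit-learn random forest on synthetic rows drawn from many hypothetical products; the feature names and the synthetic target are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 5000

# Hypothetical pooled features for many products across categories:
# recent sales, page views, a category id, and product age in weeks.
X = np.column_stack([
    rng.poisson(30, n),        # trailing-week sales
    rng.poisson(500, n),       # page views
    rng.integers(0, 20, n),    # product-category id
    rng.integers(0, 300, n),   # weeks since launch
])
# Synthetic next-week demand, loosely driven by the features above.
y = 0.8 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 5, n)

# One model pooled over every category, in the spirit of the approach described.
model = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))
```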

At the time, Amazon’s base forecasting system produced point forecasts of future demand, each a single number summarizing expected demand. However, making informed inventory decisions requires full forecast distributions, or at least a set of quantiles of those distributions. The Distribution Engine, another add-on to the base system, was producing poorly calibrated distributions.

Torkkola wrote an initial implementation of the random-forest approach that output quantiles of forecast distributions. It was later rewritten as the Sparse Quantile Random Forest (SQRF). That implementation allowed a single forecasting system to make forecasts for different product lines, each of which might have its own features, so any individual feature was populated only sparsely across the catalog, hence “sparse”. SQRF could also scale to millions of products, representing a step change in Amazon’s ability to produce forecasts at scale.
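One simple way to get quantiles out of a forest, shown below purely as an illustration, is to read off the spread of the individual trees’ predictions. SQRF itself is a far more sophisticated, sparse-aware system; this sketch uses made-up data and the standard scikit-learn API.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                  # stand-in features
y = 10 * X[:, 0] + rng.normal(0, 3, 2000)       # stand-in demand

forest = RandomForestRegressor(n_estimators=300, min_samples_leaf=20, random_state=0)
forest.fit(X, y)

# Each tree makes its own prediction; the spread across trees gives a crude
# forecast distribution from which quantiles can be read off.
per_tree = np.stack([tree.predict(X[:3]) for tree in forest.estimators_])  # (n_trees, 3)
p10, p50, p90 = np.quantile(per_tree, [0.10, 0.50, 0.90], axis=0)
print(p10, p50, p90)
```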

However, the system suffered from a serious drawback. It still required the team to manually engineer features for the model — in other words, the system needed humans to define the input variables that would provide the best possible output.

That was all set to change in 2013, when the field of deep learning went into overdrive.

Deep learning produces the unified model

“In 2013, there was a lot of excitement in the machine learning community around deep learning,” Torkkola says. “There were significant advances in the field of image recognition. In addition, tensor frameworks such as Theano, developed at the University of Montreal, were allowing developers to build deep-learning models on the fly. Currently popular frameworks such as TensorFlow were not yet available.”

Neural networks were a tantalizing prospect for Amazon’s forecasting team. In theory, neural networks could do away with the need to manually engineer features. The network could ingest raw data and learn the most relevant implicit features needed to produce a forecast without human input.

With the help of interns hired over the summers of 2014 and 2015, Torkkola experimented with both feed-forward and recurrent neural networks (RNNs). In feed-forward networks, the connections between nodes do not form a cycle; in RNNs, they do. The team began by developing an RNN to produce a point forecast. The following summer, another intern developed a model to produce a distribution forecast. However, these early iterations did not outperform SQRF, the existing production system.

Amazon’s forecasting team went back to the drawing board and had another insight, one that would prove crucial in the journey towards developing a unified forecasting model.

“We trained the network on minimizing quantile loss over multiple forecast horizons,” Torkkola says. Quantile loss is among the most important metrics used in forecasting systems. It is appropriate when under- and overprediction errors have different costs, such as in inventory buying.

“When you train a system on the same metric that you are interested in evaluating, the system performs better,” Torkkola says. The new feed-forward network delivered a significant improvement in forecasting relative to SQRF.
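The metric itself is easy to state in code. Below is a minimal PyTorch sketch of quantile (pinball) loss averaged over several horizons and quantile levels; the tensor shapes and toy numbers are assumptions for illustration, not the production loss.

```python
import torch

def quantile_loss(y_true, y_pred, quantiles):
    """Pinball loss averaged over quantile levels and forecast horizons.
    y_true: (batch, horizons); y_pred: (batch, horizons, n_quantiles)."""
    q = torch.tensor(quantiles).view(1, 1, -1)
    diff = y_true.unsqueeze(-1) - y_pred          # positive where demand was under-predicted
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))

# Toy check: the 0.9-quantile forecast is penalized more heavily for under-prediction.
y_true = torch.tensor([[100.0, 120.0]])                        # 1 product, 2 horizons
y_pred = torch.tensor([[[ 95.0, 130.0],
                        [110.0, 150.0]]])                      # quantiles 0.5 and 0.9 per horizon
print(quantile_loss(y_true, y_pred, (0.5, 0.9)))
```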

This was the breakthrough the team had been working towards: it could finally start retiring the plethora of old models and rely on a unified forecasting model that produced accurate forecasts across multiple scenarios, products, and categories. The result was a 15-fold improvement in forecast accuracy and a dramatic simplification of the entire system.

At last, no feature engineering!

While the feed-forward network had delivered an impressive improvement in performance, the system still relied on the same hand-engineered features SQRF had used. “There was no way to tell how far those features were from optimal,” says Ruofeng Wen, a senior applied scientist who joined the project in 2016 as a forecasting scientist. “Some were redundant, and some were useless.”

The team set out to develop a model that would remove the need to manually engineer domain-specific features and could therefore be applied to any general forecasting problem. The breakthrough approach, known as MQ-RNN/CNN, was published in a 2018 paper titled “A Multi-Horizon Quantile Recurrent Forecaster”. It built on recent advances in recurrent neural networks (RNNs) and convolutional neural networks (CNNs).

CNNs are frequently used in image recognition because of their ability to scan an image, determine the saliency of its various parts, and weigh the relative importance of those parts. RNNs are usually applied in a different domain, parsing semantics and sentiment from text. Crucially, both RNNs and CNNs are able to extract the most relevant features without manual engineering. “After all, forecasting is based on past sequential patterns,” Wen said, “and RNNs/CNNs are pretty good at capturing them.”
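As a toy illustration of that family of models, and not the architecture published in the paper, the sketch below uses a small GRU encoder over the demand history and an MLP decoder that emits several quantiles for each future horizon; all layer sizes and inputs are invented.

```python
import torch
import torch.nn as nn

class TinyMultiHorizonQuantileRNN(nn.Module):
    """Toy forecaster in the spirit of MQ-RNN: a GRU encodes the demand history,
    and a small MLP emits several quantiles for each future horizon."""

    def __init__(self, n_features, horizons, n_quantiles, hidden=32):
        super().__init__()
        self.horizons, self.n_quantiles = horizons, n_quantiles
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, horizons * n_quantiles),
        )

    def forward(self, history):                   # history: (batch, time, n_features)
        _, h = self.encoder(history)              # h: (1, batch, hidden)
        out = self.decoder(h.squeeze(0))          # (batch, horizons * n_quantiles)
        return out.view(-1, self.horizons, self.n_quantiles)

model = TinyMultiHorizonQuantileRNN(n_features=3, horizons=4, n_quantiles=2)
fake_history = torch.randn(8, 52, 3)              # 8 products, 52 weeks, 3 features
print(model(fake_history).shape)                  # torch.Size([8, 4, 2])
```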

Leveraging the new general approach allowed Amazon to forecast demand for any fast-moving product with a single model structure. It outperformed a dozen legacy systems designed for different product lines, since the model was able to learn business-specific demand patterns on its own. However, for a system to make accurate predictions about the future, it has to have a detailed understanding of the errors it has made in the past. The architecture of the Multi-Horizon Quantile Recurrent Forecaster had few mechanisms for ingesting knowledge about past errors.

Amazon’s forecasting team worked through this limitation by turning to the latest advances in natural-language processing (NLP).

Leaning on natural language processing

Dhruv Madeka, a principal applied scientist who had conducted innovative work in developing election forecasting systems at Bloomberg, was among the scientists who had joined Amazon’s forecasting team in 2017.

“Sentences are a sequence of words,” Madeka says. “The attention mechanisms in many NLP models look at a sequence of words and determine which other parts of the sentence are important for a given context and task. By incorporating these context-aware mechanisms, we now had a way to make our forecasting system pay attention to its history and gain an understanding of the errors it had made in the past.”

Amazon’s forecasting team homed in on the transformer architectures that were shaking up the world of NLP. Their new approach, which used decoder-encoder attention mechanisms for context alignment, was outlined in the paper “MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention”, published in December 2020. The decoder-encoder attention mechanisms meant that the system could study its own history to improve forecasting accuracy and decrease volatility.
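The core mechanism can be sketched in a few lines. Below is a generic scaled dot-product attention function in PyTorch, shown only to illustrate how decoder positions can weight past encoder states; it is not the MQTransformer’s feedback-aware attention, and the tensors are random placeholders.

```python
import math
import torch

def scaled_dot_product_attention(queries, keys, values):
    """Each decoder position (query) forms a weighted average over encoder
    positions (keys/values), letting the forecaster look back over its history.
    All tensors are (batch, length, dim)."""
    scores = queries @ keys.transpose(-2, -1) / math.sqrt(queries.shape[-1])
    weights = torch.softmax(scores, dim=-1)       # how much each past step matters
    return weights @ values, weights

# Toy example: 4 forecast positions attending over 52 weeks of encoded history.
encoder_states = torch.randn(1, 52, 16)           # placeholder encoder outputs
decoder_states = torch.randn(1, 4, 16)            # placeholder decoder queries
context, weights = scaled_dot_product_attention(decoder_states, encoder_states, encoder_states)
print(context.shape, weights.shape)               # (1, 4, 16) (1, 4, 52)
```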

With MQTransformer, Amazon now has a unified forecasting model that makes even more accurate predictions across the company’s vast catalogue of products.

Today, the team is developing deep-reinforcement-learning models to ensure that improvements in forecast accuracy translate directly into cost savings, resulting in lower costs for customers. To design a system that optimizes directly for savings, as opposed to inventory levels, the forecasting team is drawing on cutting-edge research in reinforcement learning and related fields.

“Amazon is an exceptional place for a scientist because of the focus on innovation grounded in making a real impact,” says Xu. “Thinking big is more than having a bold vision. It involves planting seeds, growing them continuously by failing fast, and doubling down on scaling once the evidence of success becomes apparent.”
