Line art depicting silicon developed by Annapurna Labs since its 2015 acquisition by Amazon, including the Graviton, Inferentia, and Trainium chips and the AWS Nitro System.
Amazon's acquisition of Annapurna Labs in 2015 has led to, among other advancements, the development of five generations of the AWS Nitro System, three generations of Arm-based Graviton processors, and the AWS Trainium and AWS Inferentia chips, which are optimized for machine learning training and inference. These chips and systems were discussed at the AWS Silicon Innovation Day event on August 3, which included a talk by Nafea Bshara, AWS vice president and distinguished engineer, on silicon innovation emerging from Annapurna Labs.

How silicon innovation became the ‘secret sauce’ behind AWS’s success

Nafea Bshara, AWS vice president and distinguished engineer, discusses Annapurna Labs' path to silicon success; the Annapurna co-founder was a featured speaker at the AWS Silicon Innovation Day virtual event.

Nafea Bshara, Amazon Web Services vice president and distinguished engineer, and co-founder of Annapurna Labs, an Israel-based chipmaker that Amazon acquired in 2015, maintains a low profile, as does his friend and Annapurna co-founder, Hrvoye (Billy) Bilic.

Nafea Bshara, AWS vice president and distinguished engineer.

Each executive's LinkedIn profile is sparse; in fact, Bilic's is out of date.

“We hardly do any interviews; our philosophy is to let our products do the talking,” explains Bshara.

Those products, and silicon innovations, have done a lot of talking since 2015: the acquisition has led to, among other advancements, five generations of the AWS Nitro System; three generations of custom-designed, Arm-based Graviton processors that support data-intensive workloads; and the AWS Trainium and AWS Inferentia chips optimized for machine learning training and inference.

Some observers have described the silicon that emerges from Annapurna Labs in the U.S. and Israel as AWS’s “secret sauce”.

Bshara's silicon journey began at the Technion in Israel, where he earned bachelor's and master's degrees in computer engineering, and where he first met Hrvoye. The two then went on to work for Israel-based Galileo, a company that made chips for networking switches and controllers for networking routers. Galileo was acquired by U.S. semiconductor manufacturer Marvell in 2000, and Bshara and Bilic worked at Marvell for a decade before deciding to venture out on their own.

“We had developed at least 50 different chips together,” Bshara explained, “so we had a track record and a first-hand understanding of customer needs, and the market dynamics. We could see that some market segments were being underserved, and with the support from our spouses, Lana and Liat, and our funding friends Avigdor [Willenz] and Manuel [Alba], we started Annapurna Labs.”

That was mid-2011, and three and a half years later Amazon acquired the company. The two friends have continued their journey at Amazon, where their team's work has spoken for itself.

Last year, industry analyst David Vellante praised AWS’s “revolution in system architecture.”

“Much in the same way that AWS defined the cloud operating model last decade, we believe it is once again leading in future systems. The secret sauce underpinning these innovations is specialized designs… We believe these moves position AWS to accommodate a diversity of workloads that span cloud, data center as well as the near and far edge.”

Annapurna's work was highlighted during the AWS Silicon Innovation Day virtual event on August 3, where Bshara was a featured speaker. The broadcast also included a keynote from David Brown, vice president, Amazon EC2; a talk on the history of AWS silicon innovation from James Hamilton, Amazon senior vice president and distinguished engineer, who holds more than 200 patents in 22 countries in server and datacenter infrastructure, database, and cloud computing; and a fireside chat on the Nitro System with Anthony Liguori, AWS vice president and distinguished engineer, and Jeff Barr, AWS vice president and chief evangelist.

In advance of the silicon-innovation event, Amazon Science connected with Bshara to discuss the history of Annapurna, how the company and the industry have evolved in the past decade, and what the future portends.

  1. Q. 

    You co-founded Annapurna Labs just over 11 years ago. Why Annapurna?

    A. 

     I co-founded the company with my longtime partner, Billy, and with an amazing set of engineers and leaders who believed in the mission. We started Annapurna Labs because we looked at the way the chip industry was investing in infrastructure and data centers; it was minuscule at that time because everybody was going after the gold rush of mobile phones, smartphones, and tablets.

    We believed the industry was over-indexing on investment in mobile and underinvesting in the data center; the data-center market was underserved. On top of that, there was increasing disappointment with the ineffective and unproductive way chips were being developed, especially when compared with software development. The productivity of software developers had improved significantly over the previous 25 years, while the productivity of chip developers hadn't improved much since the '90s. In assessing the opportunity, we saw a data-center market that was being underserved, and an opportunity to redefine chip development with greater productivity and a better business model. Those factors contributed to us starting Annapurna Labs.

  2. Q. 

    How has the chip industry evolved in the past 11 years?

    A. 

    The chip industry realized, a bit late, that productivity and time to market needed to be addressed. While Annapurna has been a pioneer in advancing productivity and time to market, many others are following in our footsteps and transitioning to a building-blocks-centric development mindset, similar to how the software industry moved toward object-oriented and service-oriented software design.

    Chip companies have now transitioned to what we refer to as an intellectual property-oriented, or IP-oriented, correct-by-design approach. Secondly, the chip industry has adopted the cloud. Cloud adoption has led to an explosion of compute power for building chips. Using the cloud, we are able to use compute in a ‘bursty’ way and in parallel. We and our chip-industry colleagues couldn’t deliver the silicon we do today without the cloud. This has led to the creation of a healthy market where chip companies have realized they don’t need to build everything in house, in much the same way software companies have realized they can buy libraries from open source or other library providers. The industry has matured to the point where now there is a healthy business model around buying building blocks, or IPs, from providers like Arm, Synopsys, Alphawave, or Cadence.

  3. Q. 

    Annapurna Labs was named after one of the tallest peaks in the Himalayas that’s regarded as one of the most dangerous mountains to climb. What's been the tallest peak you've had to climb?

    A. 

    I’m up in the cloud, I don’t need to climb anything [laughing]. Yes, Billy and I picked the name Annapurna Labs for a couple of reasons. First, Billy and I originally planned to climb Annapurna before we started the company. But then we got excited about the idea, acquired funding, and suddenly time was of the essence, so we put our climbing plans on hold and started the company. We called it Annapurna because at that time – and it’s true even today – there is a high barrier to entry in starting a chip company. The challenge is steep, and the risk is high, so it’s just like climbing Annapurna. We also believed that we wanted to reach a point above the clouds where you could see things very clearly, and without clutter. That’s always been a mantra for us as a company: Avoid the clutter, and look far into the future to understand what the customer really needs versus getting distracted by the day-to-day noise.

  4. Q. 

    What are the unique challenges you face in designing chips for ML training and inference versus more general CPU designs?

    A. 

    First, I want to emphasize the challenge we didn't have to worry about: with the strong foundation, methodologies, and engineering muscle we built delivering multiple generations of Nitro, we had confidence in our ability to execute on building the chips and manufacturing them at high volume and high quality. So that was a major thing we didn't need to worry about. Designing for machine learning is one of the most challenging, but also the most rewarding, tasks I've had the pleasure to participate in. There is an insatiable demand for machine learning right now, so anyone with a good product won't have any issues finding customer demand. The demand is there, but there are a couple of challenges.


    The first is that customers want ‘just works’ solutions because they have enough challenges to work on the science side. So they are looking for a frictionless migration from the incumbent, let's say GPU-based machine learning, to AWS Trainium or AWS Inferentia. Our biggest challenge is to hide all the complexity so it’s what we refer to internally as boring to migrate. We don’t want our customers, the scientists and researchers, to have to think about moving from one piece of hardware to another. This is a challenge because the incumbent GPUs, specifically NVIDIA, have done a very good job developing broadly adopted technologies. The customer shouldn’t see or experience any of the hard work we’ve done in developing our chips; what the customer should experience is that it’s transparent and frictionless to transition to Inferentia and Trainium. That’s a hefty task and one of our internal challenges as a team.
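
    As a sketch of what that "boring to migrate" experience can look like in practice, the example below uses the AWS Neuron SDK's PyTorch integration (the torch_neuronx package, as I understand the current SDK; module and function names should be checked against the Neuron documentation, and the ResNet-50 model here is just a placeholder). The model definition itself is untouched; a single trace step compiles it for Inferentia or Trainium.

    ```python
    import torch
    import torch_neuronx                 # AWS Neuron SDK integration for PyTorch
    from torchvision import models

    # Any existing PyTorch model; ResNet-50 stands in for a customer's model.
    model = models.resnet50(weights=None).eval()
    example = torch.rand(1, 3, 224, 224)

    # One compile/trace step targets the Neuron device; the inference code that
    # follows is the same as it would be on a CPU or GPU.
    neuron_model = torch_neuronx.trace(model, example)
    print(neuron_model(example).shape)   # torch.Size([1, 1000])
    ```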

    Trainium artwork from AWS website
    "The customer shouldn’t see or experience any of the hard work we’ve done in developing our chips; what the customer should experience is that it’s transparent and frictionless to transition to Inferentia and Trainium," says Bshara.

    The second challenge is more external; it's the fact that science and machine learning are moving very fast. As an organization that is building hardware, our job is to predict what customers will need three, four, five years down the road, because the development cycle for a chip can be two years, and then it gets deployed for three years. The lifecycle is around five years, and trying to predict how the needs of scientists and the machine-learning community will evolve over that time span is difficult. Unlike CPU workloads, which aren't evolving very quickly, machine learning workloads are, and it's a bit of an art to keep pace. I would give ourselves a high score, not a perfect score, in being efficient in terms of execution and cost, while still being future proof. It's the art of predicting what customers will need three years from now, while still executing on time and on budget. These things only come with experience, and I'm fortunate to be part of a great team that has the experience to strike the right balance between cost, schedule, and future-proofing the product.

  5. Q. 

    At the recent re:MARS conference Rohit Prasad, Amazon senior vice president and Alexa head scientist, said the voice assistant is interacting with customers billions of times each week. Alexa is powered by EC2 Inf1 instances, which use AWS Inferentia chips. Why is it more effective for Alexa workloads to take advantage of this kind of specialized processing versus more general-purpose GPUs?

    A. 

    Alexa is one of those Amazon technologies that we want to bring to as many people as possible. It's also a great example of the Amazon flywheel: the more people use it, the more value it delivers. One of our goals is to provide this service with the lowest latency and at the lowest cost possible, and over time to improve the machine-learning algorithms behind Alexa. When people say improving Alexa, it really means handling much more complex machine learning, and much more sophisticated models, while maintaining performance and low latency. Using Inferentia, the chip, and Inf1, the EC2 instances that host these chips, Alexa is able to run much more advanced machine learning algorithms at lower cost and with lower latency than on a standard general-purpose chip. It's not that the general-purpose chip couldn't do the job; it's that it would do so at higher cost and higher latency. With Inferentia we deliver lower latency and support much more sophisticated algorithms. The result is that customers have a better experience with Alexa, and benefit from a smarter Alexa.

  6. Q. 

    AI has been called the new electricity. But as ML models become increasingly large and complex, as you just discussed, there are also concerns that the energy consumption for AI model training and inference is damaging to the environment. At the chip level, what can be done to reduce the environmental impact of ML model training and inference?

    A. 

    What we can do at the chip level and at the EC2 level is work on three vectors, which we're doing right now. The first is to drive power down by moving quickly to more advanced silicon processes. Every time we build a chip on a more advanced process, we're using smaller transistors that require less power for the same work. Because of our focus on efficient execution, we can deliver to EC2 customers a new chip based on a more modern, power-efficient silicon process every 18 months or so.

    The second vector is building more technologies, in hardware and in algorithms, to get training and inference done faster. The faster we can handle training and inference, the less power is consumed. For example, one of the technologies we innovated in the latest Trainium chip is something called stochastic rounding, which, depending on which measure you're looking at, could accelerate neural-network training by up to 30% for some workloads. Thirty percent less time translates into 30% less power.
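
    To make the idea concrete, here is a minimal, illustrative sketch of stochastic rounding in NumPy. It rounds to integers rather than to a low-precision floating-point format, and it is not how Trainium implements the technique in hardware; it only shows why randomized rounding preserves information on average where round-to-nearest does not.

    ```python
    import numpy as np

    def stochastic_round(x, rng):
        """Round each element down or up at random, with the probability of
        rounding up equal to the fractional part, so the expected value of the
        rounded result equals the original value."""
        lower = np.floor(x)
        frac = x - lower                       # distance above the lower neighbor
        round_up = rng.random(x.shape) < frac  # round up with probability `frac`
        return lower + round_up

    rng = np.random.default_rng(0)
    x = np.full(100_000, 0.3)
    print(stochastic_round(x, rng).mean())  # ~0.3: unbiased on average
    print(np.round(0.3))                    # 0.0: round-to-nearest loses the signal
    ```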

    Another thing we're doing at the algorithmic level is offering different data types. Historically, machine learning used 32-bit floating point. Now we're offering multiple versions of 16-bit and a few versions of 8-bit. When these different data types are used, they not only accelerate machine learning training, they also significantly reduce power for the same amount of work. For example, doing matrix multiplication in 16-bit floating point takes less than one-third the power of doing it in 32-bit floating point. The ability to add things like stochastic rounding or new data types at the algorithmic level provides a step-function improvement in power consumption for the same amount of work.
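
    As a rough illustration of the precision trade-off (not a power measurement; the energy savings Bshara describes come from the hardware's narrower datapaths and reduced data movement), the NumPy sketch below compares a matrix multiplication in 32-bit and 16-bit floating point.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((512, 512)).astype(np.float32)
    b = rng.standard_normal((512, 512)).astype(np.float32)

    ref = a @ b  # 32-bit floating-point reference result

    # The same multiply with inputs cast down to 16-bit floats. On accelerators
    # with native 16-bit matrix units, narrower operands mean less data moved
    # and more multiply-accumulates per unit of energy.
    low = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

    # The cost is a small relative error, often acceptable for ML workloads.
    rel_err = np.linalg.norm(ref - low) / np.linalg.norm(ref)
    print(f"relative error with 16-bit inputs: {rel_err:.2e}")
    ```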

    The third vector, and credit here goes to EC2 and the Nitro System, is offering customers more choice. There are different chips optimized for different workloads, and the best way for customers to save energy is to follow the classic Amazon mantra of the everything store. We offer many different types of chips, including multiple generations of NVIDIA GPUs, Intel Habana accelerators, and Trainium, and we share with customers the power profile and performance of each of the instances hosting these chips, so they can choose the right chip for the right workload and optimize for the lowest possible power consumption at the lowest cost.

  7. Q. 

    I’ve focused primarily on machine learning. But let’s turn our attention to more general-purpose workloads running in the cloud, and your work on Graviton processors for Amazon EC2. 

    A. 

    Yes, in a way Graviton is the opposite of our work on machine learning, in the sense that the focus is on building server processors for general-purpose workloads running in EC2. The market for general-purpose chips has been there for thirty or forty years, and the workloads themselves haven’t evolved as rapidly as machine learning, so when we started designing, the target was clear to us.

    AWS is three generations into its Graviton chip journey, and Bshara says the company has plans for "many more generations" to come.

    Because this segment of the industry wasn't moving that fast, we felt our challenge was to move the industry faster, specifically by offering a step-function improvement in performance while reducing costs and power consumption. There are many times when you build plans, especially for chips, where the original plans are rosy, but as development progresses you have to make tradeoffs, and the actual product falls short of the original promise. With first-generation Graviton, we experienced the opposite; we were pleasantly surprised that both performance and power efficiency turned out better than our original plan. That's very rare in our industry.


    The same has been true with Graviton2. Because of this there has been a massive movement inside Amazon for general workloads to move to Graviton2, mainly to save on power, but also on costs. For the same workloads, Graviton2 will on average consume 60% less power than same-generation competitive offerings, and we’re passing on those cost-savings to customers. Outside Amazon, at least 48 of AWS’s top 50 customers have not just tested, but have production workloads running on Graviton2.

    In May, Graviton3 processors became available, so it's still Day 1; we're only three generations into this journey. We have plans for many more generations, but it's always satisfying and rewarding to hear how boring it is for customers to migrate to Graviton, and to come to work every day and hear success stories from the tens of thousands of customers using Graviton.

  8. Q. 

    You have more than 100 openings on your jobs page. What kind of talent are you seeking? And what are the characteristics of employees who succeed at Annapurna Labs? 

    A. 

    We are seeking individuals who like to work on cutting-edge technology and who approach challenges from first principles, because most of the challenges we confront haven't been dealt with before. While actual experience is important, we place greater value on proper thinking and a principles-first mindset.

    We also value individuals who enjoy working in a dynamic environment where the solution isn’t always the same hammer after the same nail. Given our principles-first approach, many of our challenges get solved at the chip level, the terminal level, and the system level, so we seek individuals who have systems understanding, and are skilled at working across disciplines. It’s difficult for an individual with a single discipline, or single domain knowledge, who isn’t willing to challenge her or himself by learning across other domains, to succeed at Annapurna. Last but not least, we look for individuals who focus on delivering, within a team environment. We recognize ideas are “cheap”, and what makes the difference is delivering on the idea all the way to production. Ideas are a commodity. Executing on those ideas is not.

  9. Q. 

    I've read that Billy and you share the belief that if you can dream it, you can do it. So what's your dream about future silicon development?

    A. 

    That’s true, and it’s the main reason Billy and I wanted to join AWS, because we had a common vision that there’s so much value we can bring to customers, and AWS leadership and Amazon in general were willing to invest in that vision for the long term. We agreed to be acquired by Amazon not only because of the funding and our common long-term vision, but also because building components for our own data centers would allow us to quickly deliver customer value. We’ve been super happy with the relationship for many reasons, but primarily because of our ability to have customer impact at global scale.

    At Amazon, we operate at such a scale and with such a diversity of customers that we are capable of doing application-specific, or domain-specific, acceleration. Machine learning is one example of that. What we've done with AQUA (Advanced Query Accelerator) for Amazon Redshift is another example, where we've delivered hardware-based acceleration for analytics. Our biggest challenge these days is deciding which projects to prioritize. There's no shortage of opportunities to deliver value. The only way we're able to take this approach is because of AWS. Developing silicon requires significant investment, and the only way to gain a good return on that investment is by having a lot of volume and cost-effective development, and we've been able to develop a large and successful customer base with AWS.

    I should also add that before joining Amazon we thought we really took a long-term perspective. But once you sit in Amazon meetings, you realize what long-term strategic thinking really means. I continue to learn every day about how to master that. Suffice to say, we have a product roadmap, and a technology and investment strategy that extends to 2032. As much uncertainty as there is in the future, there are a few things we’re highly convicted in, and we’re investing in them, even though they may be ten years out. I obviously can’t disclose future product plans, but we continue to dream big on behalf of our customers.

    The AWS Annapurna Labs team has more than 100 job openings for software developers, physical design engineers, design specification engineers, and many other technical roles. The team has development centers in the U.S. and Israel.
