Amazon Bedrock offers access to multiple generative AI models

AWS service enables machine learning innovation on a robust foundation.

The drive to harness the transformative power of high-end machine learning models has presented businesses with new challenges. Teams want help crafting compelling text, summarizing complex documents, building conversational AI agents, and generating striking, customized visuals.

Find out about all of the recent updates designed to help even more customers build and scale generative AI applications.

In April, Amazon stepped in to help customers build and scale generative AI applications with a new service: Amazon Bedrock. Bedrock gives developers and businesses secure, seamless, and scalable access to cutting-edge models from a range of leading companies.

Bedrock provides access to Stability AI’s text-to-image models, including Stable Diffusion; multilingual LLMs from AI21 Labs; and Anthropic’s multilingual LLMs, Claude and Claude Instant, which excel at conversational and text-processing tasks. Bedrock has since been expanded further with the additions of Cohere’s foundation models, Anthropic’s Claude 2, and Stability AI’s Stable Diffusion XL 1.0.

These models, trained on large amounts of data, are increasingly known under the umbrella term foundation models (FMs) — hence the name Bedrock. The abilities of a wide range of FMs — as well as Amazon’s own new FM, called Amazon Titan — are available through Bedrock’s API.


Why gather all these models in one place?

“The world is moving so fast on FMs, it is rather unwise to expect that one model is going to get everything right,” says Amazon senior principal engineer Rama Krishna Sandeep Pokkunuri. “All models come with individual strengths and weaknesses, so our focus is on customer choice.”

Expanding ML access

Bedrock is the latest step in Amazon’s ongoing effort to democratize ML by making it easy for customers to access high-performing FMs, without the large costs inherent in both building those models and maintaining the necessary infrastructure. To that end, the team behind Bedrock is working to enable customers to privately customize that suite of FMs with their own data.

In this digital visualization, created with Stable Diffusion XL, the latent space of a machine learning model reveals a mesmerizing array of embeddings. Each point represents a unique concept or data point, with lines and colors representing the distances and relationships between points. Together they produce a multidimensional landscape filled with intricate clusters, swirling patterns, and hidden connections.

“Customers don’t have to stick to our training recipes. We are working to provide a high degree of customizability,” says Bing Xiang, director of applied science at Amazon Web Services' AI Labs.

“For example,” Xiang continues, “customers can just point a Titan model at dozens of labeled examples they collected for their use cases and stored in Amazon S3 and fine-tune the model for the specific task.”
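As a rough sketch of what that workflow might look like through the AWS SDK: the snippet below assembles the arguments for boto3's Bedrock `create_model_customization_job` call. The bucket names, model identifier, job name, and IAM role ARN are all illustrative placeholders, not real resources, and the exact parameter set should be checked against the current Bedrock API documentation.

```python
# Hedged sketch of a Bedrock fine-tuning request. All resource names below
# (buckets, role ARN, job name) are hypothetical placeholders.
def build_fine_tune_request(job_name, base_model_id, train_s3_uri,
                            output_s3_uri, role_arn):
    """Assemble keyword arguments for boto3's
    bedrock.create_model_customization_job call."""
    return {
        "jobName": job_name,
        "customModelName": f"{job_name}-custom",
        "roleArn": role_arn,
        "baseModelIdentifier": base_model_id,
        "trainingDataConfig": {"s3Uri": train_s3_uri},
        "outputDataConfig": {"s3Uri": output_s3_uri},
        "hyperParameters": {"epochCount": "1", "batchSize": "8"},
    }

request = build_fine_tune_request(
    job_name="titan-support-classifier",
    base_model_id="amazon.titan-text-express-v1",
    train_s3_uri="s3://my-bucket/labeled-examples.jsonl",
    output_s3_uri="s3://my-bucket/custom-model-output/",
    role_arn="arn:aws:iam::123456789012:role/BedrockFineTuneRole",
)
# With AWS credentials configured, the job would be submitted via:
#   boto3.client("bedrock").create_model_customization_job(**request)
```

Crucially, per the security model described below, the customer's training data stays in their own S3 bucket and the resulting custom model is private to their account.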

Bedrock offers not only a suite of AI tools but also meticulous safeguards. At Amazon, data security is so critical that it is often referred to as “job zero.” While Bedrock hosts a growing number of third-party models, those third-party companies never see any customer data. That data, which is encrypted, and the Bedrock-hosted models themselves remain firmly ensconced on Amazon’s secure servers.

Tackling toxicity

In addition to its commitment to security, Amazon has experience in the LLM arena, having developed a range of proprietary FMs in recent years. Last year, it made its Alexa Teacher Model — a 20-billion-parameter LLM — publicly available. Also last year, Amazon launched Amazon CodeWhisperer, a fully managed service powered by LLMs that can generate reams of robust computer code from natural-language prompts, among other things.


Continuing in that vein, a standout feature of Bedrock is the availability of Amazon’s Titan FMs, including a generative LLM and an embeddings LLM. Titan FMs are built to help customers grapple with the challenge of toxic content by detecting and removing harmful content in data and filtering model outputs that contain inappropriate content.

When several open-source LLMs burst onto the world stage last year, users quickly realized they could be prompted to generate toxic output, including sexist, racist, and homophobic content. Part of the problem, of course, is that the Internet is awash with such material, so models can absorb some of this toxicity and bias.

Amazon’s extensive investments in responsible AI include the building of guardrails and filters into Titan to ensure the models minimize toxicity, profanity, and other inappropriate behavior. “We are aware that this is a challenging problem, one that will require continuous improvement,” Xiang observed.
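To make the idea of an output filter concrete, here is a deliberately toy illustration. This is not Amazon's actual guardrail mechanism (production systems rely on learned classifiers and multi-stage moderation, not keyword lists); it only shows where such a filter sits in the generation pipeline.

```python
# Toy output filter, for illustration only. Real guardrails use trained
# classifiers; the blocklist terms here are placeholders.
BLOCKLIST = {"badword1", "badword2"}

def filter_output(text, blocklist=BLOCKLIST, mask="[removed]"):
    """Mask blocklisted tokens in a model's output before returning it."""
    return " ".join(
        mask if word.lower().strip(".,!?") in blocklist else word
        for word in text.split()
    )

print(filter_output("this reply contains badword1 sadly"))
# -> "this reply contains [removed] sadly"
```

A real system would apply filtering both to training data (removing harmful content before the model ever sees it) and to model outputs at inference time, as the article describes for Titan.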


To that end, during the Titan models’ development, outputs undergo extensive “red teaming” — a rigorous evaluation process aimed at pinpointing potential vulnerabilities or flaws in a model's design. Amazon even had experts attempt to coax harmful behavior from the models using a variety of tricky text prompts.

“No system of this nature will be perfect, but we're creating Titan with utmost care,” says principal applied scientist Miguel Ballesteros. “We are working towards raising the bar in this field.”

Building Amazon Titan models for efficiency

Creating the Titan models also meant overcoming significant technological challenges, particularly in distributed computing.

“Imagine you are faced with a mathematical problem with four decomposable sub-problems that will take eight hours of solid brain work to complete,” explains Ramesh Nallapati, senior principal applied scientist. “If there were four of you working on it together, how long would it take? Two hours is the intuitive answer, because you are working in parallel.


“That’s not true in the real world, and it’s not true in the computing world,” Nallapati continues. “Why? Because communication time between parties and time for aggregating solutions from sub-problems must be factored in.”
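Nallapati's four-person example can be made concrete with a back-of-the-envelope model: ideal speedup divides the work by the number of workers, but communication and aggregation add a cost that grows with the worker count. The fixed cost per pairwise exchange below is an illustrative assumption, not a measured figure.

```python
# Back-of-the-envelope model of parallel runtime with communication overhead.
# The per-pair communication cost (0.1 hours) is an illustrative assumption.
def wall_clock_hours(total_work, n_workers, comm_cost_per_pair=0.1):
    """Runtime = work split n ways + cost of pairwise communication."""
    n_pairs = n_workers * (n_workers - 1) / 2
    return total_work / n_workers + comm_cost_per_pair * n_pairs

solo = wall_clock_hours(8, 1)   # 8.0 hours: one worker, no communication
team = wall_clock_hours(8, 4)   # ~2.6 hours, not the "intuitive" 2.0
```

Push the worker count high enough under this model and the communication term dominates, which is exactly why the choice of parallelization strategy across thousands of accelerators matters.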

In order to make the distributed computing efficient and cost effective, Amazon has developed both AWS Trainium accelerators — designed mainly for high-performance training of generative AI models, including large language models — and AWS Inferentia accelerators that power its models in operation. Both of these specialized accelerators offer higher throughput and lower cost per inference than comparable Amazon EC2 instances.

These accelerators need to constantly communicate and synchronize during training. To streamline this communication, the team employs 3-D parallelism. Here, three elements — parallelizing data mini-batches, parallelizing model parameters, and pipelining layer-wise computations across these accelerators — are distributed across hardware resources to varying degrees.

“Deciding on the combination of these three axes determines how we use the accelerators effectively,” says Nallapati.

Titan’s training task is further complicated by the fact that accelerators, like all sophisticated hardware, occasionally fail. “With as many accelerators as we use, it is only a question of days or weeks before one of them fails, and there’s a risk the whole thing is going to come down fast,” says Pokkunuri.

To tackle this reality, the team is pioneering ground-breaking techniques in resilience and fault tolerance in distributed computing.

Efficiency is critical in FMs — both for bottom-line considerations and from a sustainability standpoint, because FMs require immense power, both in training and in operation.

“Inferentia and Trainium are big strategic efforts to make sure our customers get the best cost performance,” says Pokkunuri.

Retrieval-augmented generation

Using Bedrock to combine the complementary abilities of the Titan models also gives customers the building blocks of a particularly useful technique: retrieval-augmented generation (RAG).

RAG can address a significant shortcoming of standalone LLMs: they cannot account for events that occurred after their training data was collected. GPT-4, for example, trained on information up to 2021, can only tell you that “the most significant recent Russian military action in Ukraine was in 2014”.

Embedding news reports in a representational space enables the retrieval of information added since the last update to an LLM; the LLM can then leverage that information to generate text responses to queries (e.g., "Who won the 2022 World Cup?").

Retraining huge LLMs is a massive and expensive undertaking, with the process itself taking months. RAG provides a way both to incorporate new content into LLMs’ outputs between retrainings and to leverage the power of LLMs on proprietary data cost-effectively.

For example, let’s say you run a big news or financial organization, and you want to use an LLM to intelligently interrogate your entire corpus of news or financial reports, which includes up-to-date knowledge.

“You will be able to use Titan models to generate text based on your proprietary content,” explains Ballesteros. “The Titan embeddings model helps to find documents that are relevant to the prompts. Then, the Titan generative model can leverage those documents as well as the information it has learned during training to generate text responses to the prompts. This allows customers to rapidly digest and query their own data sources.”
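The retrieval step Ballesteros describes can be sketched in a few lines. The bag-of-words "embedding" below is a stand-in for a real embeddings model (such as the Titan embeddings model invoked through Bedrock), and the final generation step is left as a comment; the point is only the embed-rank-retrieve shape of the pipeline.

```python
# Minimal RAG retrieval skeleton. embed() is a toy stand-in for a real
# embeddings model; a production system would call one via the Bedrock API.
from collections import Counter
from math import sqrt

def embed(text):
    """Toy embedding: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Messi secures first World Cup after extra-time drama",
    "New chip accelerates model training",
]
context = retrieve("who won the 2022 world cup", docs)
print(context[0])  # -> "Messi secures first World Cup after extra-time drama"
# A generative model would then be prompted with the query plus `context`
# to produce a grounded answer.
```

Because retrieval happens at query time, fresh or proprietary documents can influence answers without any retraining of the underlying LLM.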

A commitment to responsible AI

In April, select Amazon customers were given access to Bedrock, to evaluate the service and provide feedback. Pokkunuri stresses the importance of this feedback: “We are not just trying to meet the bar here — we are trying to raise it. We’re looking to give our customers a delightful experience, to make sure their expectations are being met with this suite of models.”

The stepped launch of Bedrock also underscores Amazon's commitment to responsible AI, says Xiang. “This is a very powerful service, and our commitment to responsible AI is paramount.”

As the number of powerful FMs grows, expect Amazon Bedrock to grow in tandem, with an expanding roster of leading third-party models and more models from Amazon itself.

“Generative AI has evolved rapidly in the past few years, but it’s still in its early stages and has huge potential,” says Xiang. “We are excited about the opportunity of putting Bedrock in the hands of our customers and helping to solve a variety of problems they are facing today and tomorrow.”

US, CA, Pasadena
The Amazon Web Services (AWS) Center for Quantum Computing in Pasadena, CA, is looking to hire a Quantum Research Scientist. You will join a multi-disciplinary team of theoretical and experimental physicists, materials scientists, and hardware and software engineers working at the forefront of quantum computing. You should have a deep and broad knowledge of experimental measurement techniques. Candidates with a track record of original scientific contributions in experimental device physics will be preferred. We are looking for candidates with strong engineering principles, resourcefulness and a bias for action, superior problem solving, and excellent communication skills. Working effectively within a team environment is essential. As a research scientist you will be expected to work on new ideas and stay abreast of the field of experimental quantum computation. AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (Iot), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. Key job responsibilities As a research scientist you will be responsible for building experiments that encompass the integrated stack: design, fabrication, cryogenics, signal chain, and control stack software. Based on your tests you will provide recommendations that improve our next-generation quantum processors. About the team Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. 
We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Hybrid Work We value innovation and recognize this sometimes requires uninterrupted time to focus on a build. We also value in-person collaboration and time spent face-to-face. Our team affords employees options to work in the office every day or in a flexible, hybrid work model near one of our U.S. Amazon offices. We are open to hiring candidates to work out of one of the following locations: Pasadena, CA, USA