Responsible AI in the generative era

Generative AI raises new challenges in defining, measuring, and mitigating concerns about fairness, toxicity, and intellectual property, among other things. But work has started on the solutions.

In recent years, and even recent months, there have been rapid and dramatic advances in the technology known as generative AI. Generative AI models are trained on inconceivably massive collections of text, code, images, and other rich data. They are now able to produce, on demand, coherent and compelling stories, news summaries, poems, lyrics, paintings, and programs. The potential practical uses of generative AI are only just beginning to be understood but are likely to be manifold and revolutionary and to include writing aids, creative content production and refinement, personal assistants, copywriting, code generation, and much more.

Michael Kearns, a professor of computer and information science at the University of Pennsylvania and an Amazon Scholar.

There is thus considerable excitement about the transformations and new opportunities that generative AI may bring. There are also understandable concerns — some of them new twists on those of traditional responsible AI (such as fairness and privacy) and some of them genuinely new (such as the mimicry of artistic or literary styles). In this essay, I survey these concerns and how they might be addressed over time.

I will focus primarily on technical approaches to the risks, while acknowledging that social, legal, regulatory, and policy mechanisms will also have important roles to play. At Amazon, our hope is that such a balanced approach can significantly reduce the risks, while still preserving much of the excitement and usefulness of generative AI.

What is generative AI?

To understand what generative AI is and how it works, it is helpful to begin with the example of large language models (LLMs). Imagine the thought experiment in which we start with some sentence fragment like Once upon a time, there was a great ..., and we poll people on what word they would add next. Some might say wizard, others might say queen, monster, and so on. We would also expect that given the fairy tale nature of the fragment, words such as apricot or fork would be rather unlikely suggestions.

If we poll a large enough population, a probability distribution over next words would begin to emerge. We could then randomly pick a word from that distribution (say wizard), and now our sequence would be one word longer — Once upon a time, there was a great wizard ... — and we could again poll for the next word. In this manner we could theoretically generate entire stories, and if we restarted the whole process, the crowd would produce an entirely different narrative due to the inherent randomness.

Dramatic advances in machine learning have effectively made this thought experiment a reality. But instead of polling crowds of people, we use a model to predict likely next words, one trained on a massive collection of documents — public collections of fiction and nonfiction, Wikipedia entries and news articles, transcripts of human dialogue, open-source code, and much more.
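As a rough illustration of this sampling process (not of how a real LLM is implemented), the sketch below generates text by repeatedly drawing the next word from a hand-coded toy distribution. The words and probabilities are invented for illustration; a real model conditions on the entire prefix and learns its distributions from massive training data.

```python
import random

# Toy next-word distributions standing in for a trained language model.
# The probabilities here are invented purely for illustration.
NEXT_WORD = {
    "great": {"wizard": 0.4, "queen": 0.3, "monster": 0.2, "storm": 0.1},
    "wizard": {"who": 0.6, "named": 0.4},
}

def sample_next(word: str) -> str:
    """Draw one word from the (toy) distribution conditioned on the previous word.

    A real LLM conditions on the whole text so far, not just the last word.
    """
    dist = NEXT_WORD.get(word)
    if dist is None:
        return "<end>"
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs, k=1)[0]

story = "Once upon a time , there was a great".split()
while story[-1] != "<end>" and len(story) < 15:
    story.append(sample_next(story[-1]))
print(" ".join(story))
```

Because each run draws different random samples, repeated runs produce different continuations, mirroring the crowd-polling thought experiment above.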

An example of how a language model uses context to predict the next word in a sentence.

If the training data contains enough sentences beginning Once upon a time, there was a great …, it will be easy to sample plausible next words for our initial fragment. But LLMs can generalize and create as well, and not always in ways that humans might expect. The model might generate Once upon a time, there was a great storm based on occurrences of tremendous storm in the training data, combined with the learned synonymy of great and tremendous. This completion can happen even though great storm never appears verbatim in the training data and even though humans would more readily expect completions like wizard or queen.

The resulting models are just as complex as their training data, often described by hundreds of billions of numbers (or parameters, in machine learning parlance), hence the “large” in LLM. LLMs have become so good that not only do they consistently generate grammatically correct text, but they create content that is coherent and often compelling, matching the tone and style of the fragments they were given (known as prompts). Start them with a fairy tale beginning, and they generate fairy tales; give them what seems to be the start of a news article, and they write a news-like article. The latest LLMs can even follow instructions rather than simply extend a prompt, as in Write lyrics about the Philadelphia Eagles to the tune of the Beatles song “Get Back”.

Generative AI isn’t limited to text, and many models combine language and images, as in Create a painting of a skateboarding cat in the style of Andy Warhol. The techniques for building such systems are a bit more complex than for LLMs and involve learning a model of proximity between text and images, which can be done using data sources like captioned photos. If there are enough images containing cats that have the word cat in the caption, the model will capture the proximity between the word and pictures of cats.

The examples above suggest that generative AI is a form of entertainment, but many potential practical uses are also beginning to emerge, including generative AI as a writing tool (Shorten the following paragraphs and improve their grammar), for productivity (Extract the action items from this meeting transcript), for creative content (Propose logo designs for a startup building a dog-walking app), for simulating focus groups (Which of the following two product descriptions would Florida retirees find more appealing?), for programming (Give me a code snippet to sort a list of numbers), and many others.

So the excitement over the current and potential applications of generative AI is palpable and growing. But generative AI also gives rise to some new risks and challenges in the responsible use of AI and machine learning. And the likely eventual ubiquity of generative models in everyday life and work amplifies the stakes in addressing these concerns thoughtfully and effectively.

So what’s the problem?

The “generative” in generative AI refers to the fact that the technology can produce open-ended content that varies with repeated tries. This is in contrast to more traditional uses of machine learning, which typically solve very focused and narrow prediction problems.

For example, consider training a model for consumer lending that predicts whether an applicant would successfully repay a loan. Such a model might be trained using the lender’s data on past loans, each record containing applicant information (work history, financial information such as income, savings, and credit score, and educational background) along with whether the loan was repaid or defaulted.

The typical goal would be to train a model that was as accurate as possible in predicting payment/default and then apply it to future applications to guide or make lending decisions. Such a model makes only lending outcome predictions and cannot generate fairy tales, improve grammar, produce whimsical images, write code, and so on. Compared to generative AI, it is indeed a very narrow and limited model.
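To make the contrast concrete, here is a minimal sketch of such a narrow predictive model, using entirely synthetic applicant data and a standard scikit-learn classifier; the features and labels below are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical applicant features: income, savings, credit score, years employed.
X = rng.normal(size=(5000, 4))
# Synthetic repayment labels (1 = repaid, 0 = defaulted), generated for illustration.
y = (X @ np.array([0.8, 0.5, 1.2, 0.3]) + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The model does exactly one thing: given an applicant's features, it outputs a repayment prediction.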

But these very limitations also make the application of certain dimensions of responsible AI much more manageable. Consider the goal of making our lending model fair, which would typically be taken to mean the absence of demographic bias. For example, we might want to make sure that the error rate of the predictions of our model (and it generally will make errors, since even human loan officers are imperfect in predicting who will repay) is approximately equal on men and women. Or we might more specifically ask that the false-rejection rate — the frequency with which the model predicts default by an applicant who is in fact creditworthy — be the same across gender groups.

Once armed with this definition of fairness, we can seek to enforce it in the training process. In other words, instead of finding a model that minimizes the overall error rate, we find one that does so under the additional condition that the false-rejection rates on men and women are approximately equal (say, within 1% of each other). We might also want to apply the same notion of fairness to other demographic properties (such as young, middle aged, and elderly). But the point is that we can actually give reasonable and targeted definitions of fairness and develop training algorithms that enforce them.

It is also easy to audit a given model for its adherence to such notions of fairness (for instance, by estimating the error rates on both male and female applicants). Finally, when the predictive task is so targeted, we have much more control over the training data: we train on historical lending decisions only, and not on arbitrarily rich troves of general language, image, and code data.
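A fairness audit of this kind can be sketched in a few lines. The group encoding and the 1% tolerance below are illustrative assumptions, and in practice the audit would run on a held-out set of real historical applications.

```python
import numpy as np

def false_rejection_rate(y_true, y_pred):
    """Fraction of creditworthy applicants (y_true == 1) that the model rejects (y_pred == 0)."""
    creditworthy = y_true == 1
    return np.mean(y_pred[creditworthy] == 0)

def audit_fairness(y_true, y_pred, group, tolerance=0.01):
    """Compare false-rejection rates across groups (e.g., 0 = men, 1 = women).

    All inputs are NumPy arrays over a held-out evaluation set.
    Returns per-group rates, the largest gap, and whether it is within tolerance.
    """
    rates = {g: false_rejection_rate(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= tolerance
```

The same per-group quantities can be folded back into training as a constraint or penalty, which is how the fairness-aware training described above is typically enforced.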

Now consider the problem of making sure an LLM is fair. What might we even mean by this? Well, taking a cue from our lending model, we might ask that the LLM treat men and women equally. For instance, consider a prompt like Dr. Hanson studied the patient’s chart carefully, and then … . In service of fairness, we might ask that in the completions generated by an LLM, Dr. Hanson be assigned male and female pronouns with roughly equal frequency. We might argue that to do otherwise perpetuates the stereotype that doctors are typically male.
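One could at least measure this particular notion of fairness by sampling many completions and counting gendered pronouns, along the lines of the sketch below; the `generate` function is a stand-in for whatever interface produces completions and is not part of any specific system.

```python
import re
from collections import Counter

MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}

def pronoun_counts(completions):
    """Count gendered pronouns across a batch of model completions."""
    counts = Counter()
    for text in completions:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in MALE:
                counts["male"] += 1
            elif token in FEMALE:
                counts["female"] += 1
    return counts

# Hypothetical usage, assuming some `generate(prompt)` function exists:
# completions = [generate("Dr. Hanson studied the patient's chart carefully, and then")
#                for _ in range(1000)]
# print(pronoun_counts(completions))
```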

But then should we not also do this for mentions of nurses, firefighters, accountants, pilots, carpenters, attorneys, and professors? It’s clear that measuring just this one narrow notion of fairness will quickly become unwieldy. And it isn’t even obvious in what contexts it should be enforced. What if the prompt described Dr. Hanson as having a beard? What about the Women’s National Basketball Association (WNBA)? Should mention of a WNBA player in a prompt elicit male pronouns half the time?

Defining fairness for LLMs is even murkier than we suggest above, again because of the open-ended content they generate. Let’s turn from pronoun choices to tone. What if an LLM, when generating content about a woman, uses an ever-so-slightly more negative tone (in choice of words and level of enthusiasm) than when generating content about a man? Again, even detecting and quantifying such differences would be a very challenging technical problem. The field of sentiment analysis in natural-language processing might suggest some possibilities, but currently, it focuses on much coarser distinctions in narrower settings, such as distinguishing positive from negative sentiment in business news articles about particular corporations.
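To make the difficulty concrete, even a crude tone measurement might look something like the sketch below, which uses an invented word lexicon as a stand-in for a real sentiment model; detecting the subtle, context-dependent differences described here would require far more sophisticated tools.

```python
# Illustrative lexicons only; a real system would use a trained sentiment model.
POSITIVE = {"brilliant", "successful", "respected", "innovative", "skilled"}
NEGATIVE = {"struggling", "mediocre", "difficult", "failed", "unremarkable"}

def tone_score(text: str) -> int:
    """Crude lexicon-based tone: positive word count minus negative word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mean_tone(texts) -> float:
    return sum(tone_score(t) for t in texts) / len(texts)

# Compare mean_tone over two batches of completions generated from prompts that
# differ only in the subject's gender; a persistent gap would suggest a tone bias.
```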

So one of the prices we pay for the rich, creative, open-ended content that generative AI can produce is that it becomes commensurately harder (compared to traditional predictive ML) to define, measure, and enforce fairness.

From fairness to privacy

In a similar vein, let’s consider privacy concerns. It is of course important that a consumer lending model not leak information about the financial or other data of the individual applicants in the training data. (One way this can happen is if model predictions are accompanied by confidence scores; if the model expresses 100% confidence that a loan application will default, it’s likely because that application, with a default outcome, was in the training data.) For this kind of traditional, more narrow ML, there are now techniques for mitigating such leaks by making sure model outputs are not overly dependent on any particular piece of training data.
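One crude way to surface this symptom is sketched below, under the assumption that the model exposes confidence scores (as scikit-learn's predict_proba does): flag predictions made with near-total certainty. This is a heuristic signal of possible memorization, not a formal privacy test.

```python
import numpy as np

def flag_possible_memorization(model, X, threshold=0.999):
    """Return indices of inputs where the model is suspiciously certain.

    Near-certain confidence on a specific record can be a symptom (not proof)
    that a very similar record appeared in the training data. The threshold
    here is an illustrative choice, not a calibrated test.
    """
    probs = model.predict_proba(X).max(axis=1)
    return np.where(probs >= threshold)[0]
```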

But the open-ended nature of generative AI broadens the set of concerns from verbatim leaks of training data to more subtle copying phenomena. For example, if a programmer has written some code using certain variable names and then asks an LLM for help writing a subroutine, the LLM may generate code from its training data, but with the original variable names replaced with those chosen by the programmer. So the generated code is not literally in the training data but is different only in a cosmetic way.

There are defenses against these challenges, including curation of training data to exclude private information, and techniques to detect similarity of code passages. But more subtle forms of replication are also possible, and as I discuss below, this eventually bleeds into settings where generative AI reproduces the “style” of content in its training data.
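As a sketch of the second idea, the snippet below normalizes identifiers before comparing two code passages, so that cosmetic renaming no longer hides their similarity; the keyword list and the similarity measure are simplifications chosen for illustration.

```python
import re
from difflib import SequenceMatcher

# A small set of Python keywords and builtins to preserve; everything else is treated
# as a renameable identifier. Illustrative only.
KEEP = {"def", "return", "for", "in", "if", "else", "while", "import", "from", "range", "len"}

def normalize(code: str) -> str:
    """Replace non-keyword identifiers with a placeholder so renamed variables match."""
    def repl(match):
        name = match.group(0)
        return name if name in KEEP else "VAR"
    return re.sub(r"\b[A-Za-z_]\w*\b", repl, code)

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

snippet_a = "def total(values):\n    s = 0\n    for v in values:\n        s += v\n    return s"
snippet_b = "def sum_items(nums):\n    acc = 0\n    for x in nums:\n        acc += x\n    return acc"
print(similarity(snippet_a, snippet_b))  # close to 1.0 despite different names
```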

And while traditional ML has begun developing techniques for explaining the decisions or predictions of trained models, they don’t always transfer to generative AI, in part because current generative models sometimes produce content that simply cannot be explained (such as scientific citations that don’t exist, something I’ll discuss shortly).

The special challenges of responsible generative AI

So the usual concerns of responsible AI become more difficult for generative AI. But generative AI also gives rise to challenges that simply don't exist for narrower predictive models. Let's consider some of these.

Toxicity. A primary concern with generative AI is the possibility of generating content (whether it be text, images, or other modalities) that is offensive, disturbing, or otherwise inappropriate. Once again, it is hard to even define and scope the problem. The subjectivity involved in determining what constitutes toxic content is an additional challenge, and the boundary between restricting toxic content and censorship may be murky and context- and culture-dependent. Should quotations that would be considered offensive out of context be suppressed if they are clearly labeled as quotations? What about opinions that may be offensive to some users but are clearly labeled as opinions? Technical challenges include offensive content that may be worded in a very subtle or indirect fashion, without the use of obviously inflammatory language.

Hallucinations. Considering the next-word distribution sampling employed by LLMs, it is perhaps not surprising that in more objective or factual use cases, LLMs are susceptible to what are sometimes called hallucinations — assertions or claims that sound plausible but are verifiably incorrect. For example, a common phenomenon with current LLMs is creating nonexistent scientific citations. If one of these LLMs is prompted with the request Tell me about some papers by Michael Kearns, it is not actually searching for legitimate citations but generating ones from the distribution of words associated with that author. The result will be realistic titles and topics in the area of machine learning, but not real articles, and they may include plausible coauthors but not actual ones.

In a similar vein, prompts for financial news stories result not in a search of (say) Wall Street Journal articles but news articles fabricated by the LLM using the lexicon of finance. Note that in our fairy tale generation scenario, this kind of creativity was harmless and even desirable. But current LLMs have no levers that let users differentiate between “creativity on” and “creativity off” use cases.

Intellectual property. A problem with early LLMs was their tendency to occasionally produce text or code passages that were verbatim regurgitations of parts of their training data, resulting in privacy and other concerns. But even improvements in this regard have not prevented reproductions of training content that are more ambiguous and nuanced. Consider the aforementioned prompt for a multimodal generative model Create a painting of a skateboarding cat in the style of Andy Warhol. If the model is able to do so in a convincing yet still original manner because it was trained on actual Warhol images, objections to such mimicry may arise.

Plagiarism and cheating. The creative capabilities of generative AI give rise to worries that it will be used to write college essays, writing samples for job applications, and other forms of cheating or illicit copying. Debates on this topic are happening at universities and many other institutions, and attitudes vary widely. Some are in favor of explicitly forbidding any use of generative AI in settings where content is being graded or evaluated, while others argue that educational practices must adapt to, and even embrace, the new technology. But the underlying challenge of verifying that a given piece of content was authored by a person is likely to present concerns in many contexts.

Disruption of the nature of work. The proficiency with which generative AI is able to create compelling text and images, perform well on standardized tests, write entire articles on given topics, and successfully summarize or improve the grammar of provided articles has created some anxiety that some professions may be replaced or seriously disrupted by the technology. While this may be premature, it does seem that generative AI will have a transformative effect on many aspects of work, allowing many tasks previously beyond automation to be delegated to machines.

What can we do?

The challenges listed above may seem daunting, in part because of how unfamiliar they are compared to those of previous generations of AI. But as technologists and society learn more about generative AI and its uses and limitations, new science and new policies are already being created to address those challenges.

For toxicity and fairness, careful curation of training data can provide some improvements. After all, if the data doesn’t contain any offensive or biased words or phrases, an LLM simply won’t be able to generate them. But this approach requires that we identify those offensive phrases in advance and are certain that there are absolutely no contexts in which we would want them in the output. Use-case-specific testing can also help address fairness concerns — for instance, before generative AI is used in high-risk domains such as consumer lending, the model could be tested for fairness for that particular application, much as we might do for more narrow predictive models.

For less targeted notions of toxicity, a natural approach is to train what we might call guardrail models that detect and filter out unwanted content in the training data, in input prompts, and in generated outputs. Such models require human-annotated training data in which varying types and degrees of toxicity or bias are identified, which the model can generalize from. In general, it is easier to control the output of a generative model than it is to curate the training data and prompts, given the extreme generality of the tasks we intend to address.
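Schematically, a guardrail wrapper might look like the sketch below, where `generate` stands in for the underlying model and `score_toxicity` for a separately trained classifier; both names and the 0.5 threshold are assumptions for illustration, not any particular system's interface.

```python
def guarded_generate(prompt: str, generate, score_toxicity, threshold: float = 0.5) -> str:
    """Wrap a generative model with input and output checks.

    `generate` is the underlying model and `score_toxicity` is a separately trained
    classifier returning a score in [0, 1]; both are hypothetical stand-ins.
    """
    if score_toxicity(prompt) >= threshold:
        return "Sorry, I can't help with that request."
    response = generate(prompt)
    if score_toxicity(response) >= threshold:
        return "Sorry, I can't share the generated response."
    return response
```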

For the challenge of producing high-fidelity content free of hallucinations, an important first step is to educate users about how generative AI actually works, so there is no expectation that the citations or news-like stories produced are always genuine or factually correct. Indeed, some current LLMs, when pressed on their inability to quote actual citations, will tell the user that they are just language models that don’t verify their content with external sources. Such disclaimers should be more frequent and clear. And the specific case of hallucinated citations could be mitigated by augmenting LLMs with independent, verified citation databases and similar sources, using approaches such as retrieval-augmented generation. Another nascent but intriguing approach is to develop methods for attributing generated outputs to particular pieces of training data, allowing users to assess the validity of those sources. This could help with explainability as well.
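A retrieval-augmented approach to the citation problem might be wired up roughly as below; `search_citations` is a hypothetical lookup against a verified bibliographic index, and the prompt wording is only illustrative.

```python
def answer_with_citations(question: str, generate, search_citations, k: int = 3) -> str:
    """Retrieval-augmented sketch: ground the model in a verified citation database.

    `generate` and `search_citations` are stand-ins for a generative model and a
    real bibliographic lookup; the model is instructed to use only retrieved entries.
    """
    citations = search_citations(question, top_k=k)
    context = "\n".join(f"- {c}" for c in citations)
    prompt = (
        "Answer the question using only the verified references below. "
        "If they are not sufficient, say so.\n"
        f"References:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```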

Concerns around intellectual property are likely to be addressed over time by a mixture of technology, policy, and legal mechanisms. In the near term, science is beginning to emerge around various notions of model disgorgement, in which protected content or its effects on generative outputs are reduced or removed. One technology that might eventually prove relevant is differential privacy, in which a model is trained in a way that ensures that any particular piece of training data has negligible effects on the outputs the model subsequently produces.
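The core of differentially private training can be sketched as a single gradient step with per-example clipping and Gaussian noise, as below; real systems also track the cumulative privacy budget (epsilon, delta) over many such steps, which this sketch omits.

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, clip_norm=1.0, noise_multiplier=1.0,
                lr=0.1, rng=None):
    """One schematic DP-SGD step: clip each per-example gradient, average, add noise.

    Clipping bounds any single example's influence; the Gaussian noise masks
    whatever influence remains. Hyperparameters here are illustrative.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(scale=noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return weights - lr * (mean_grad + noise)
```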

Another approach is so-called sharding, which divides the training data into smaller portions on which separate submodels are trained; the submodels are then combined to form the overall model. To undo the effects of any particular item of data on the overall model, we need only remove it from its shard and retrain that submodel, rather than retraining the entire model (which for generative AI would be prohibitively expensive).
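The sketch below illustrates the sharding idea on a small classifier standing in for a generative model: each shard gets its own submodel, and forgetting a training item retrains only that shard. It is a simplified toy, not a production unlearning system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedModel:
    """Train one submodel per data shard; removing an item retrains only its shard."""

    def __init__(self, n_shards: int = 5):
        self.n_shards = n_shards
        self.shards = [[] for _ in range(n_shards)]  # each shard holds (x, label) pairs
        self.models = [None] * n_shards

    def fit(self, X, y):
        for i, (x, label) in enumerate(zip(X, y)):
            self.shards[i % self.n_shards].append((x, label))
        for s in range(self.n_shards):
            self._retrain(s)

    def _retrain(self, s):
        # Toy assumption: each shard contains examples of both classes.
        Xs = np.array([x for x, _ in self.shards[s]])
        ys = np.array([label for _, label in self.shards[s]])
        self.models[s] = LogisticRegression().fit(Xs, ys)

    def forget(self, shard_index, item_index):
        """Remove one training item and retrain only the affected shard."""
        del self.shards[shard_index][item_index]
        self._retrain(shard_index)

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models])
        return (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote over binary labels
```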

Finally, we can consider filtering or blocking approaches, where before presentation to the user, generated content is explicitly compared to protected content in the training data or elsewhere and suppressed (or replaced) if it is too similar. Limiting the number of times any specific piece of content appears in the training data also proves helpful in reducing verbatim outputs.
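A simple filtering check of this kind might compare long word n-grams of the generated output against protected passages, as in the sketch below; the n-gram length and overlap threshold are illustrative choices, and real systems would likely use more robust similarity measures.

```python
def ngrams(text: str, n: int = 8):
    """Set of all n-word sequences in the text (lowercased)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def too_similar(generated: str, protected_passages, n: int = 8, threshold: float = 0.2) -> bool:
    """Block output that shares too many long word n-grams with any protected passage."""
    gen = ngrams(generated, n)
    if not gen:
        return False
    for passage in protected_passages:
        overlap = len(gen & ngrams(passage, n)) / len(gen)
        if overlap >= threshold:
            return True
    return False
```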

Some interesting approaches to discouraging cheating with generative AI are already under development. One is simply to train a model to detect whether a given piece of text (say) was produced by a human or by a generative model. A potential drawback is that this creates an arms race between detection models and generative AI, and since the purpose of generative AI is to produce high-quality content plausibly generated by a human, it's not clear that detection methods will succeed in the long run.

An intriguing alternative is watermarking or fingerprinting approaches that would be implemented by the developers of generative models themselves. For example, since at each step an LLM is drawing from the distribution over the next word given the text so far, we can divide the candidate words into “red” and “green” lists that each account for roughly 50% of the probability and have the LLM draw only from the green list. Since the words on the green lists are not known to users, the likelihood that a human would produce a 10-word sentence drawing only from the green lists is ½ raised to the 10th power, or about 0.001. In this way we can view all-green content as providing a virtual proof of LLM generation. Note that the LLM developers would need to provide such proofs or certificates as part of their service offering.

At each step, the model secretly divides the possible next words into green and red lists. The next word is then sampled only from the green list.
A human generating a sentence is unaware of the division into green and red lists and is thus very likely to choose a sequence that mixes green and red words. Since, on long sentences, the likelihood of a human choosing an all-green sequence is vanishingly small, we can view all-green sentences as containing a proof they were generated by AI.
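A minimal sketch of the green-list idea appears below: the vocabulary is split deterministically based on the previous word, and a detector measures what fraction of a passage's words fall on their green lists (watermarked generation, elided here, would simply restrict sampling to those lists). The hashing scheme and 50/50 split are illustrative choices, not the method of any particular system.

```python
import hashlib
import random

def green_list(prev_word: str, vocabulary, fraction: float = 0.5):
    """Deterministically select a 'green' subset of the vocabulary, seeded by the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = sorted(vocabulary)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(words, vocabulary) -> float:
    """Detector: fraction of words that fall on the green list induced by their predecessor."""
    hits = sum(w in green_list(prev, vocabulary) for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# A human writer, unaware of the lists, lands on green roughly half the time, so the
# chance of an all-green n-word passage is about 0.5 ** n; a fraction near 1.0 over a
# long passage is therefore strong evidence of watermarked model output.
```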

Disruption to work as we know it does not have any obvious technical defenses, and opinions vary widely on where things will settle. Clearly, generative AI could be an effective productivity tool in many professional settings, and this will at a minimum alter the current division of labor between humans and machines. It’s also possible that the technology will open up existing occupations to a wider community (a recent and culturally specific but not entirely ludicrous quip on social media was “English is the new programming language”, a nod to LLM code generation abilities) or even create new forms of employment, such as prompt engineer (a topic with its own Wikipedia entry, created in just February of this year).

But perhaps the greatest defense against concerns over generative AI may come from the eventual specialization of use cases. Right now, generative AI is being treated as a fascinating, open-ended playground in which our expectations and goals are unclear. As we have discussed, this open-endedness and the plethora of possible uses are major sources of the challenges to responsible AI I have outlined.

But soon more applied and focused uses will emerge, like some of those I suggested earlier. For instance, consider using an LLM as a virtual focus group — creating prompts that describe hypothetical individuals and their demographic properties (age, gender, occupation, location, etc.) and then asking the LLM which of two described products they might prefer.
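A sketch of how such prompts might be assembled appears below; the personas and wording are invented for illustration, and in practice one would tally the model's answers over many sampled personas.

```python
# Hypothetical personas for a "virtual focus group"; invented for illustration.
PERSONAS = [
    {"age": 68, "location": "Florida", "occupation": "retired teacher"},
    {"age": 71, "location": "Florida", "occupation": "retired engineer"},
]

def focus_group_prompt(persona, description_a: str, description_b: str) -> str:
    """Build a prompt asking the model which product description a persona prefers."""
    return (
        f"You are a {persona['age']}-year-old {persona['occupation']} living in "
        f"{persona['location']}. Which of these two product descriptions do you find "
        "more appealing, A or B? Answer with a single letter and one sentence of reasoning.\n"
        f"A: {description_a}\nB: {description_b}"
    )

# Send focus_group_prompt(p, desc_a, desc_b) to the model for each persona p and
# tally the A/B answers to approximate the focus-group signal described above.
```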

In this application, we might worry much less about censoring content and much more about removing any even remotely toxic output. And we might choose not to eradicate the correlations between gender and the affinity for certain products in service of fairness, since such correlations are valuable to the marketer. The point is that the more specific our goals for generative AI are, the easier it is to make sensible context-dependent choices; our choices become more fraught and difficult when our expectations are vague.

Finally, we note that end user education and training will play a crucial role in the productive and safe use of generative AI. As the potential uses and harms of generative AI become better and more widely understood, users will augment some of the defenses I have outlined above with their own common sense.

Conclusion

Generative AI has stoked both legitimate enthusiasm and legitimate fears. I have attempted to partially survey the landscape of concerns and to propose forward-looking approaches for addressing them. It should be emphasized that addressing responsible-AI risks in the generative age will be an iterative process: there will be no “getting it right” once and for all. This landscape is sure to shift, with changes to both the technology and our attitudes toward it; the only constant will be the necessity of balancing the enthusiasm with practical and effective checks on the concerns.
