Why ambient computing needs self-learning

To become the interface for the Internet of Things, conversational agents will need to learn on their own. Alexa has already started down that path.

Today at the annual meeting of the ACM Special Interest Group on Information Retrieval (SIGIR), Ruhi Sarikaya, the director of applied science for Alexa AI, delivered a keynote address titled “Intelligent Conversational Agents for Ambient Computing”. This is an edited version of that talk.

For decades, the paradigm of personal computing was a desktop machine. Then came the laptop, and finally mobile devices so small we can hold them in our hands and carry them in our pockets, which felt revolutionary.

All these devices, however, tether you to a screen. For the most part, you need to physically touch them to use them, which does not seem natural or convenient in a number of situations.

So what comes next?

The most likely answer is the Internet of Things (IoT) and other intelligent, connected systems and services. What will the interface with the IoT be? Will you need a separate app on your phone for each connected device? Or when you walk into a room, will you simply speak to the device you want to reconfigure?

At Alexa, we’re betting that conversational AI will be the interface for the IoT. And this will mean a shift in our understanding of what conversational AI is.

In particular, the IoT creates new forms of context for conversational-AI models. By “context”, we mean the set of circumstances and facts that surround a particular event, situation, or entity, which an AI model can exploit to improve its performance.

Context can, for instance, help resolve ambiguities. Here are some examples:

  • Device state: If the oven is on, then the question “What is the temperature?” is more likely to refer to oven temperature than it is in other contexts.
  • Device types: If the device has a screen, it’s more likely that “play Hunger Games” refers to the movie than if the device has no screen.
  • Physical/digital activity: If a customer listens only to jazz, “Play music” should elicit a different response than if the customer listens only to hard rock; if the customer always makes coffee after the alarm goes off, that should influence the interpretation of a command like “start brewing”. 

The same type of reasoning applies to other contextual signals, such as time of day, device and user location, environmental changes as measured by sensors, and so on.
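To make this concrete, here is a toy sketch, in Python, of how contextual signals might be bundled with an utterance before interpretation. The field names are invented for illustration and are not Alexa’s actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical bundle of contextual signals; field names are invented
# for this illustration and are not Alexa's actual schema.
@dataclass
class RequestContext:
    utterance: str                 # what the customer said
    device_has_screen: bool        # device-type signal
    oven_on: bool                  # device-state signal
    time_of_day: str               # e.g., "morning"
    recent_activity: list = field(default_factory=list)  # e.g., ["alarm_dismissed"]

ctx = RequestContext(
    utterance="What is the temperature?",
    device_has_screen=False,
    oven_on=True,                  # biases interpretation toward oven temperature
    time_of_day="evening",
)
```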

Training a conversational agent to factor in so many contextual signals is much more complicated than training it to recognize, say, song titles. Ideally, we would have a substantial number of training examples for every combination of customer, device, and context, but that’s obviously not practical. So how do we scale the training of contextually aware conversational agents?

Self-learning

The answer is self-learning. By self-learning, we mean a framework that enables an autonomous agent to learn from customer-system interactions, system signals, and predictive models.

Customer-system interactions can provide both implicit feedback and explicit feedback. Alexa already handles both. If a customer interrupts Alexa’s response to a request — a “barge-in”, as we call it — or rephrases the request, that’s implicit feedback. Aggregated across multiple customers, barge-ins and rephrases indicate requests that aren’t being processed correctly.
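As a minimal sketch of how such implicit signals might be aggregated, the following flags requests whose barge-in or rephrase rate is unusually high; the threshold and minimum count are arbitrary example values:

```python
from collections import defaultdict

def flag_defective_requests(interactions, threshold=0.2, min_count=50):
    """Each interaction is a dict with 'request_text', 'barged_in', 'rephrased'."""
    counts = defaultdict(lambda: {"total": 0, "negative": 0})
    for interaction in interactions:
        stats = counts[interaction["request_text"]]
        stats["total"] += 1
        if interaction["barged_in"] or interaction["rephrased"]:
            stats["negative"] += 1
    # Requests with a high rate of implicit negative feedback are defect candidates.
    return [
        request
        for request, s in counts.items()
        if s["total"] >= min_count and s["negative"] / s["total"] > threshold
    ]
```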

Customers can also explicitly teach Alexa how to handle particular requests. This can be customer-initiated, as when customers use Alexa’s interactive-teaching capability, or Alexa-initiated, as when Alexa asks, “Did I answer your question?”

The great advantages of self-learning are that it doesn’t require data annotation, so it scales better while protecting customer privacy; it minimizes the time and cost of updating models; and it relies on high-value training data, because customers know best what they mean and want.

We have a few programs targeting different applications of self-learning, including automated generation of ground truth annotations, defect reduction, teachable AI, and determining root causes of failure.

Automated ground truth generation

At Alexa, we have launched a multiyear initiative to shift Alexa’s ML model development from manual-annotation-based to primarily self-learning-based. The challenge we face is to convert customer feedback, which is often binary or low dimensional (yes/no, defect/non-defect), into high-dimensional synthetic labels such as transcriptions and named-entity annotations.

Our approach has two major components: (1) an exploration module and (2) a feedback collection and label generation module. Here’s the architecture of the label generation model:

Figure: The ground truth generation model converts customer feedback, which is often binary or low dimensional, into high-dimensional synthetic labels.

The input features include the dialogue context (user utterance, Alexa response, previous turns, next turns), categorical features (domain, intent, dialogue status), numerical features (number of tokens, speech recognition and natural-language-understanding confidence scores), and raw audio data. The model consists of a turn-level encoder and a dialogue-level Transformer-based encoder. The turn-level textual encoder is a pretrained RoBERTa model.
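As a rough sketch of this architecture (hidden sizes, feature dimensions, and the two-layer dialogue encoder are illustrative assumptions, and the raw-audio branch is omitted for brevity), the two encoders might be wired together as follows:

```python
import torch.nn as nn
from transformers import RobertaModel

class LabelGenerationModel(nn.Module):
    """Sketch: turn-level RoBERTa encoder feeding a dialogue-level Transformer."""
    def __init__(self, n_side_features=24, d_model=768):
        super().__init__()
        self.turn_encoder = RobertaModel.from_pretrained("roberta-base")
        self.feature_proj = nn.Linear(n_side_features, d_model)  # categorical + numerical
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.dialogue_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(d_model, 2)  # e.g., defect / non-defect per turn

    def forward(self, input_ids, attention_mask, side_features):
        # input_ids: (batch, turns, tokens), one token sequence per dialogue turn
        b, t, s = input_ids.shape
        out = self.turn_encoder(input_ids.view(b * t, s),
                                attention_mask=attention_mask.view(b * t, s))
        turn_vecs = out.last_hidden_state[:, 0].view(b, t, -1)    # [CLS] per turn
        turn_vecs = turn_vecs + self.feature_proj(side_features)  # add side features
        dialogue_vecs = self.dialogue_encoder(turn_vecs)          # cross-turn context
        return self.classifier(dialogue_vecs)                     # per-turn labels
```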

We pretrain the model in a self-supervised way, using synthetic contrastive data. For instance, we randomly swap answers from different dialogues as defect samples. After pretraining, the model is trained in a supervised fashion on multiple tasks, using explicit and implicit user feedback.
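A minimal sketch of that contrastive data construction, assuming each dialogue is a list of (utterance, response) pairs:

```python
import random

def make_contrastive_pairs(dialogues, seed=0):
    """Pair each utterance with its true response (label 0) and with a
    response swapped in from another dialogue as a synthetic defect (label 1)."""
    rng = random.Random(seed)
    pairs = []
    for i, dialogue in enumerate(dialogues):
        for utterance, response in dialogue:
            pairs.append((utterance, response, 0))        # genuine pair
            j = rng.randrange(len(dialogues))
            if j != i and dialogues[j]:
                swapped = rng.choice(dialogues[j])[1]     # answer from elsewhere
                pairs.append((utterance, swapped, 1))     # synthetic defect
    return pairs
```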

We evaluate the label generation model on several tasks. Two of these are goal segmentation, or determining which utterances in a dialogue are relevant to the accomplishment of a particular task, and goal evaluation, or determining whether the goal was successfully achieved.

As a baseline for these tasks, we used a set of annotations each of which was produced in a single pass by a single annotator. Our ground truth, for both the model and the baseline, was a set of annotations each of which had been corroborated by three different human annotators.

Our model’s outputs on both tasks were comparable to the human annotators’: our model was slightly more accurate but had a slightly lower F1 score. By raising the model’s confidence threshold, we can exceed human performance significantly and still achieve much greater annotation throughput than manual labeling does.

In addition to the goal-related labels, our model also labels utterances according to intent (the action the customer wants performed, such as playing music), slots (the data types the intent operates on, such as song names), and slot-values (the particular values of the slots, such as “Purple Haze”).

As a baseline for slot and intent labeling, we used a RoBERTa-based model that didn’t incorporate contextual information, and we found that our model outperformed it across the board.

Self-learning-based defect reduction

Three years ago, we deployed a self-learning mechanism that automatically corrects defects in Alexa’s interpretation of customer utterances based purely on implicit signals.

This mechanism — unlike the ground truth generation module — doesn’t involve retraining Alexa’s natural-language-understanding models. Instead, it overwrites those models’ outputs, to improve their accuracy.

There are two ways to provide rewrites:

  • Precomputed rewriting produces request-rewrite pairs offline and loads them at run time. This process has no latency constraints, so it can use complex models, and during training, it can take advantage of rich offline signals such as user follow-up turns, user rephrases, Alexa responses, and video click-through rate. Its drawback is that at run time, it can’t take advantage of contextual information.
  • Online rewriting leverages contextual information (e.g., previous dialogue turns, location, time) at run time to produce rewrites. It enables rewriting of long-tail defect queries, but it must meet latency constraints, and its training can’t take advantage of offline information.

Precomputed rewriting

We’ve experimented with two different approaches to precomputing rewrite pairs, one that uses pretrained BERT models and one that uses absorbing Markov chains.

This slide illustrates the BERT-based approach:

Figure: The contextual rephrase detection model casts rephrase detection as a span prediction problem, predicting the probability that each token is the start or end of a span.

At left is a sample dialogue in which an Alexa customer rephrases a query twice. The second rephrase elicits the correct response, so it’s a good candidate for a rewrite of the initial query. The final query is not a rephrase, and the rephrase extraction model must learn to differentiate rephrases from unrelated queries.

We cast rephrase detection as a span prediction problem: using the embedding output of the final BERT layer, we predict the probability that each token is the start or end of a span. We also use timestamps to limit how many subsequent customer requests count as rephrase candidates.
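A hedged sketch of such a span prediction head, in the style of standard extractive question answering; the checkpoint name and head are illustrative, not the production model:

```python
import torch.nn as nn
from transformers import BertModel

class RephraseSpanDetector(nn.Module):
    """Predict, for each token in the dialogue, the probability that it
    starts or ends a rephrase span, from the final BERT layer's embeddings."""
    def __init__(self):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.span_head = nn.Linear(self.encoder.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        start_logits, end_logits = self.span_head(hidden).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)  # (batch, tokens) each
```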

We use absorbing Markov chains to extract rewrite pairs from rephrase candidates that recur across a wide range of interactions.

Absorbing Markov chains.png
The probabilities of sequences of rephrases across customer interactions can be encoded in absorbing Markov chains.

A Markov chain models a dynamic system as a sequence of states, each of which has a certain probability of transitioning to any of several other states. An absorbing Markov chain is one with a final state that has zero probability of transitioning to any other state and that is reachable from every other state in the system.

We use absorbing Markov chains to encode the probabilities that any given rephrase of the same query will follow any other across a range of interactions. Solving the Markov chain gives us, for any given request, the rewrite most likely to be successful.
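To illustrate the underlying math, here is a small sketch using the standard fundamental-matrix solution for absorbing Markov chains; the transition probabilities are toy numbers. Q holds transient-to-transient transitions (between rephrase variants), R holds transient-to-absorbing transitions (to outcomes such as success or abandonment), and B = (I - Q)^-1 R gives each variant’s probability of ending in each outcome:

```python
import numpy as np

def absorption_probabilities(Q, R):
    """B[i, k]: probability that a chain starting in transient state i
    is eventually absorbed in absorbing state k."""
    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)  # fundamental matrix
    return N @ R

# Toy example: three rephrase variants, two outcomes (success, abandon).
Q = np.array([[0.0, 0.6, 0.2],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
R = np.array([[0.1, 0.1],
              [0.3, 0.2],
              [0.7, 0.3]])
B = absorption_probabilities(Q, R)
best_rewrite = int(np.argmax(B[:, 0]))  # variant most likely to end in success
```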

Online rewriting

Instead of relying on customers’ own rephrasings, the online rewriting mechanism uses retrieval and ranking models to generate rewrites.

Rewrites are based on customers’ habitual usage patterns with the agent. In the example below, for instance, based on the customer’s interaction history, we rewrite the query “What’s the weather in Wilkerson?” as “What’s the weather in Wilkerson, California?” — even though “What’s the weather in Wilkerson, Washington?” is the more common query across interactions.

The model does, however, include a global layer as well as a personal layer, to prevent overindexing on personalized cases (for instance, inferring that a customer who likes the Selena Gomez song “We Don’t Talk Anymore” will also like the song from Encanto “We Don’t Talk about Bruno”) and to enable the model to provide rewrites when the customer’s interaction history provides little or no guidance.

Figure: The online rewriting model’s personal layer factors in customer context, while the global layer prevents overindexing on personalized cases.

The personalized workstream and the global workstream include both retrieval and ranking models:

  • The retrieval model uses a dense-passage-retrieval (DPR) model, which maps texts into a low-dimensional, continuous space, to extract embeddings for both the index and the query. It then scores candidate rewrites by a similarity measure between those embeddings (see the sketch after this list).
  • The ranking model combines fuzzy match (e.g., through a single-encoder structure) with various metadata to make a reranking decision.
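As a sketch of the retrieval step under these assumptions, the following embeds the live query and a precomputed index of candidate rewrites and scores candidates by cosine similarity; `encode` stands in for any DPR-style dual encoder and is not a real API:

```python
import numpy as np

def retrieve_rewrites(query, candidates, encode, top_k=5):
    """Score candidate rewrites by cosine similarity in embedding space.
    `encode` is a placeholder for a DPR-style encoder: texts -> (n, d) array."""
    q = encode([query])[0]                                        # (d,)
    C = encode(candidates)                                        # (n, d)
    sims = C @ q / (np.linalg.norm(C, axis=1) * np.linalg.norm(q) + 1e-9)
    order = np.argsort(-sims)[:top_k]
    return [(candidates[i], float(sims[i])) for i in order]
```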

We’ve deployed all three of these self-learning approaches — BERT- and Markov-chain-based offline rewriting and online rewriting — and all have made a significant difference in the quality of Alexa customers’ experience.

In experiments, we compared the BERT-based offline approach to four baseline models on six machine-annotated and two human-annotated datasets. It outperformed all baselines across the board, with improvements of as much as 16% to 17% on some of the machine-annotated datasets and nearly double that on the human-annotated ones.

The offline approach that uses absorbing Markov chains has rewritten tens of millions of outputs from Alexa’s automatic-speech-recognition models, and it has a win-loss ratio of 8.5:1, meaning that for every one incorrect rewrite, it has 8.5 correct ones.

And finally, in a series of A/B tests of the online rewrite engine, we found that the global rewrite alone reduced the defect rate by 13%, while the addition of the personal rewrite model reduced defects by a further 4%.

Teachable AI

Query rewrites depend on implicit signals from customers, but customers can also explicitly teach Alexa their personal preferences, such as “I’m a Warriors fan” or “I like Italian restaurants.”

Alexa’s teachable-AI mechanism can be either customer-initiated or Alexa-initiated. Alexa proactively senses teachable moments — as when, for instance, a customer repeats the same request multiple times or declares Alexa’s response unsatisfactory. And a customer can initiate a guided Q&A with Alexa with a simple cue like “Alexa, learn my preferences.”

In either case, Alexa can use the customer’s preferences to guide the very next customer interaction.

Failure point isolation

Besides recovering from defects through query rewriting, we also want to understand the root causes of defects.

Dialogue assistants like Alexa depend on multiple models that process customer requests in stages. First, a voice trigger (or “wake word”) model determines whether the user is speaking to the assistant. Then an automatic-speech-recognition (ASR) module converts the audio stream into text. This text passes to a natural-language-understanding (NLU) component that determines the user request. An entity recognition model recognizes and resolves entities, and the assistant generates the best possible response using several subsystems. Finally, the text-to-speech (TTS) model renders the response into human-like speech.

For Alexa, part of self-learning is automatically determining, when a failure occurs, which component has failed. An error in an upstream component can propagate through the pipeline, in which case multiple components may fail. Thus, we focus on the first component that fails in a way that is irrecoverable, which we call the “failure point”.

In our initial work on failure point isolation, we recognize five error points as well as a “correct” class (meaning no component failed). The possible failure points are false wake (errors in voice trigger); ASR errors; NLU errors (for example, incorrectly routing “play Harry Potter” to video instead of audiobook); entity resolution and recognition errors; and result errors (for example, playing the wrong Harry Potter movie).
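A minimal sketch of the failure point rule, walking the pipeline in order and returning the first component whose error was not recovered downstream; the component names follow the stages above, and the verdict format is invented for illustration:

```python
PIPELINE = ["wake_word", "asr", "nlu", "entity_resolution", "result"]

def failure_point(verdicts):
    """verdicts maps component -> "ok", "recovered_error", or "error"."""
    for component in PIPELINE:
        if verdicts.get(component, "ok") == "error":
            return component      # first irrecoverable failure wins
    return "correct"              # recovered errors don't count as failures

# First turn of the dialogue below: ASR fails and the error propagates.
print(failure_point({"wake_word": "ok", "asr": "error",
                     "nlu": "error", "entity_resolution": "error"}))  # -> "asr"
```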

To better illustrate the failure point isolation problem, let's examine a multiturn dialogue:

Figure: Failure point isolation identifies the earliest point in the processing pipeline at which a failure occurs; errors that the conversational agent recovers from are not classified as failures.

In the first turn, the customer is trying to open a garage door, and the conversational assistant recognizes the speech incorrectly. The entity resolution model doesn't recover from this error and also fails. Finally, the dialogue assistant fails to perform the correct action. In this case, ASR is the failure point, despite the other models’ subsequent failure.

On the second turn, the customer repeats the request. ASR makes a small error by not recognizing the article "the" in the speech, but the dialogue assistant takes the correct action. We would mark this turn as correct, as the ASR error didn't lead to downstream failure.

The last turn highlights one of the limitations of our method. The user is asking the dialogue assistant to make a sandwich, which dialogue assistants cannot do — yet. All models have worked correctly, but the user is not satisfied. In our work, we do not consider such turns defective.

On average, our best failure point isolation model comes close to human performance across the different categories (more than 92% of human performance). This model uses extended dialogue context, features derived from the assistant’s logs (e.g., ASR confidence scores), and traces of decision-making components (e.g., NLU modules). It outperforms humans on result errors and the correct class, and it reaches 90-95% of human performance on ASR, entity resolution, and NLU errors.

The day when computing fades into the environment, and we walk from room to room casually instructing embedded computing devices how we want them to behave, may still lie in the future. But at Alexa AI, we’re already a long way down that path. And we’re moving farther forward every day.
