Why ambient computing needs self-learning

To become the interface for the Internet of things, conversational agents will need to learn on their own. Alexa has already started down that path.

Today at the annual meeting of the ACM Special Interest Group on Information Retrieval (SIGIR), Ruhi Sarikaya, the director of applied science for Alexa AI, delivered a keynote address titled “Intelligent Conversational Agents for Ambient Computing”. This is an edited version of that talk.

For decades, the paradigm of personal computing was a desktop machine. Then came the laptop and, finally, mobile devices so small that we can hold them in our hands and carry them in our pockets, a shift that felt revolutionary.

All these devices, however, tether you to a screen. For the most part, you need to physically touch them to use them, which is neither natural nor convenient in many situations.

So what comes next?

The most likely answer is the Internet of Things (IoT) and other intelligent, connected systems and services. What will the interface with the IoT be? Will you need a separate app on your phone for each connected device? Or when you walk into a room, will you simply speak to the device you want to reconfigure?

At Alexa, we’re betting that conversational AI will be the interface for the IoT. And this will mean a shift in our understanding of what conversational AI is.

In particular, the IoT creates new forms of context for conversational-AI models. By “context”, we mean the set of circumstances and facts that surround a particular event, situation, or entity, which an AI model can exploit to improve its performance.

For instance, context can help resolve ambiguities. Here are some examples of what we mean by context:

  • Device state: If the oven is on, then the question “What is the temperature?” is more likely to refer to oven temperature than it is in other contexts.
  • Device types: If the device has a screen, it’s more likely that “play Hunger Games” refers to the movie than if the device has no screen.
  • Physical/digital activity: If a customer listens only to jazz, “Play music” should elicit a different response than if the customer listens only to hard rock; if the customer always makes coffee after the alarm goes off, that should influence the interpretation of a command like “start brewing”. 

The same type of reasoning applies to other contextual signals, such as time of day, device and user location, environmental changes as measured by sensors, and so on.
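
To make the idea concrete, here is a minimal sketch of how a handful of such signals might be bundled into a context object that an interpretation model (or simple rules, as here) can condition on. The feature names and rules are illustrative assumptions, not Alexa’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class RequestContext:
    """Hypothetical bundle of contextual signals accompanying an utterance."""
    device_has_screen: bool = False
    oven_on: bool = False
    time_of_day: str = "evening"
    recent_activities: list = field(default_factory=list)     # e.g., ["alarm_dismissed"]
    music_history_genres: list = field(default_factory=list)  # e.g., ["jazz"]

def interpret(utterance: str, ctx: RequestContext) -> str:
    """Toy disambiguation rules illustrating how context shifts interpretation."""
    if utterance == "What is the temperature?" and ctx.oven_on:
        return "oven_temperature_query"
    if utterance == "Play Hunger Games":
        return "play_movie" if ctx.device_has_screen else "play_audiobook"
    return "default_interpretation"

print(interpret("What is the temperature?", RequestContext(oven_on=True)))
print(interpret("Play Hunger Games", RequestContext(device_has_screen=True)))
```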

Training a conversational agent to factor in so many contextual signals is much more complicated than training it to recognize, say, song titles. Ideally, we would have a substantial number of training examples for every combination of customer, device, and context, but that’s obviously not practical. So how do we scale the training of contextually aware conversational agents?

Self-learning

The answer is self-learning. By self-learning, we mean a framework that enables an autonomous agent to learn from customer-system interactions, system signals, and predictive models.

Customer-system interactions can provide both implicit feedback and explicit feedback. Alexa already handles both. If a customer interrupts Alexa’s response to a request — a “barge-in”, as we call it — or rephrases the request, that’s implicit feedback. Aggregated across multiple customers, barge-ins and rephrases indicate requests that aren’t being processed correctly.
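
As a rough illustration of how such signals can be used, the sketch below aggregates barge-ins and rephrases per request and flags requests with high implicit-defect rates. The record schema and threshold are hypothetical, not Alexa’s pipeline:

```python
from collections import defaultdict

# Each record is (request_text, barged_in, rephrased), drawn from anonymized
# interaction logs (hypothetical schema).
interactions = [
    ("play hunger games", True, False),
    ("play hunger games", False, True),
    ("what's the weather", False, False),
]

stats = defaultdict(lambda: {"total": 0, "implicit_defects": 0})
for text, barged_in, rephrased in interactions:
    stats[text]["total"] += 1
    stats[text]["implicit_defects"] += int(barged_in or rephrased)

# Requests whose implicit-defect rate exceeds a (hypothetical) threshold are
# candidates for self-learning-based correction.
flagged = {t: s for t, s in stats.items()
           if s["implicit_defects"] / s["total"] > 0.5}
print(flagged)
```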

Customers can also explicitly teach Alexa how to handle particular requests. This can be customer-initiated, as when customers use Alexa’s interactive-teaching capability, or Alexa-initiated, as when Alexa asks, “Did I answer your question?”

The great advantages of self-learning are that it doesn’t require data annotation, so it scales better while protecting customer privacy; it minimizes the time and cost of updating models; and it relies on high-value training data, because customers know best what they mean and want.

We have a few programs targeting different applications of self-learning, including automated generation of ground truth annotations, defect reduction, teachable AI, and determining root causes of failure.

Automated ground truth generation

At Alexa, we have launched a multiyear initiative to shift Alexa’s ML model development from manual-annotation-based to primarily self-learning-based. The challenge we face is to convert customer feedback, which is often binary or low dimensional (yes/no, defect/non-defect), into high-dimensional synthetic labels such as transcriptions and named-entity annotations.

Our approach has two major components: (1) an exploration module and (2) a feedback collection and label generation module. Here’s the architecture of the label generation model:

The ground truth generation model converts customer feedback, which is often binary or low dimensional, into high-dimensional synthetic labels.

The input features include the dialogue context (user utterance, Alexa response, previous turns, next turns), categorical features (domain, intent, dialogue status), numerical features (number of tokens, speech recognition and natural-language-understanding confidence scores), and raw audio data. The model consists of a turn-level encoder and a dialogue-level Transformer-based encoder. The turn-level textual encoder is a pretrained RoBERTa model.
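
A minimal PyTorch-style sketch of that two-level architecture is below. Layer sizes, the number of dialogue-encoder layers, and the way extra features are fused are illustrative assumptions, not the production configuration:

```python
import torch
import torch.nn as nn
from transformers import RobertaModel

class LabelGenerationModel(nn.Module):
    def __init__(self, num_extra_features: int, num_labels: int):
        super().__init__()
        # Turn-level textual encoder: pretrained RoBERTa.
        self.turn_encoder = RobertaModel.from_pretrained("roberta-base")
        hidden = self.turn_encoder.config.hidden_size
        # Fuse the turn embedding with categorical/numerical features (illustrative choice).
        self.turn_proj = nn.Linear(hidden + num_extra_features, hidden)
        # Dialogue-level Transformer encoder over the sequence of turn embeddings.
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.dialogue_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask, extra_features):
        # input_ids, attention_mask: (batch, turns, tokens); extra_features: (batch, turns, num_extra_features)
        b, t, s = input_ids.shape
        turn_emb = self.turn_encoder(
            input_ids.view(b * t, s), attention_mask=attention_mask.view(b * t, s)
        ).pooler_output.view(b, t, -1)                   # one embedding per turn
        turn_emb = self.turn_proj(torch.cat([turn_emb, extra_features], dim=-1))
        dialogue_emb = self.dialogue_encoder(turn_emb)   # contextualize turns across the dialogue
        return self.classifier(dialogue_emb)             # per-turn label logits
```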

We pretrain the model in a self-supervised way, using synthetic contrastive data. For instance, we randomly swap answers from different dialogues as defect samples. After pretraining, the model is trained in a supervised fashion on multiple tasks, using explicit and implicit user feedback.
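
For example, a simple way to synthesize such contrastive pretraining data, assuming dialogues are stored as lists of (user utterance, system response) pairs, is to swap responses across unrelated dialogues and label the results as defects:

```python
import random

def make_contrastive_samples(dialogues, num_negatives=1, seed=0):
    """dialogues: list of dialogues, each a list of (user_utterance, system_response) pairs."""
    rng = random.Random(seed)
    samples = []
    for i, dialogue in enumerate(dialogues):
        for user, response in dialogue:
            # Original turn: treated as a non-defect (positive) sample.
            samples.append({"user": user, "response": response, "defect": 0})
            for _ in range(num_negatives):
                # Swap in a response from a different, randomly chosen dialogue:
                # a synthetic defect (negative) sample.
                j = rng.randrange(len(dialogues))
                if j == i:
                    continue
                _, other_response = rng.choice(dialogues[j])
                samples.append({"user": user, "response": other_response, "defect": 1})
    return samples
```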

We evaluate the label generation model on several tasks. Two of these are goal segmentation, or determining which utterances in a dialogue are relevant to the accomplishment of a particular task, and goal evaluation, or determining whether the goal was successfully achieved.

As a baseline for these tasks, we used a set of annotations each of which was produced in a single pass by a single annotator. Our ground truth, for both the model and the baseline, was a set of annotations each of which had been corroborated by three different human annotators.

Our model’s outputs on both tasks were comparable to the human annotators’: our model was slightly more accurate but had a slightly lower F1 score. And by raising the model’s decision threshold, we can exceed human performance significantly while still achieving much greater annotation throughput than manual labeling.

In addition to the goal-related labels, our model also labels utterances according to intent (the action the customer wants performed, such as playing music), slots (the data types the intent operates on, such as song names), and slot-values (the particular values of the slots, such as “Purple Haze”).
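
As a toy illustration of these label types (the schema and names are hypothetical), an utterance like “play Purple Haze” might be annotated as:

```python
annotation = {
    "utterance": "play purple haze",
    "intent": "PlayMusicIntent",    # the action the customer wants performed
    "slots": [
        {"slot": "SongName", "value": "purple haze"},  # slot type and slot value
    ],
}
```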

As a baseline for slot and intent labeling, we used a RoBERTa-based model that didn’t incorporate contextual information, and we found that our model outperformed it across the board.

Self-learning-based defect reduction

Three years ago, we deployed a self-learning mechanism that automatically corrects defects in Alexa’s interpretation of customer utterances based purely on implicit signals.

This mechanism — unlike the ground truth generation module — doesn’t involve retraining Alexa’s natural-language-understanding models. Instead, it overwrites those models’ outputs, to improve their accuracy.

There are two ways to provide rewrites:

  • Precomputed rewriting produces request-rewrite pairs offline and loads them at run time. This process has no latency constraints, so it can use complex models, and during training, it can take advantage of rich offline signals such as user follow-up turns, user rephrases, Alexa responses, and video click-through rate. Its drawback is that at run time, it can’t take advantage of contextual information.
  • Online rewriting leverages contextual information (e.g., previous dialogue turns, dialogue location, times) at run time to produce rewrites. It enables rewriting of long-tail-defect queries, but it must meet latency constraints, and its training can’t take advantage of offline information.

Precomputed rewriting

We’ve experimented with two different approaches to precomputing rewrite pairs, one that uses pretrained BERT models and one that uses absorbing Markov chains.

This slide illustrates the BERT-based approach:

The contextual rephrase detection model casts rephrase detection as a span prediction problem, predicting the probability that each token is the start or end of a span.

At left is a sample dialogue in which an Alexa customer rephrases a query twice. The second rephrase elicits the correct response, so it’s a good candidate for a rewrite of the initial query. The final query is not a rephrase, and the rephrase extraction model must learn to differentiate rephrases from unrelated queries.

We cast rephrase detection as a span prediction problem, where we predict the probability that each token is the start or end of a span, using the embedding output of the final BERT layer. We also use timestamping to threshold the number of subsequent customer requests that count as rephrase candidates.
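
A minimal sketch of such a span prediction head, using a Hugging Face BERT encoder as a stand-in for the production model and omitting the timestamp-based thresholding, might look like this:

```python
import torch.nn as nn
from transformers import BertModel

class RephraseSpanDetector(nn.Module):
    """Predicts, for each token, logits for being the start or end of a rephrase span."""
    def __init__(self):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        # Two logits per token: span start and span end.
        self.span_head = nn.Linear(self.encoder.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask):
        # Use the final-layer token embeddings, as in the approach described above.
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        start_logits, end_logits = self.span_head(hidden).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```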

We use absorbing Markov chains to extract rewrite pairs from rephrase candidates that recur across a wide range of interactions.

The probabilities of sequences of rephrases across customer interactions can be encoded in absorbing Markov chains.

A Markov chain models a dynamic system as a sequence of states, each of which has a certain probability of transitioning to any of several other states. An absorbing Markov chain is one with a final (absorbing) state that has zero probability of transitioning to any other state and that is reachable from every other state of the system.

We use absorbing Markov chains to encode the probabilities that any given rephrase of the same query will follow any other across a range of interactions. Solving the Markov chain gives us the rewrite for any given request that is most likely to be successful.
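
Concretely, if the transient states are rephrases of the same request and the absorbing states are successful outcomes, the standard fundamental-matrix computation gives the absorption probabilities, and the best rewrite is the absorbing state with the highest probability of absorption from the original request. The transition probabilities below are illustrative, not real traffic statistics:

```python
import numpy as np

# States 0-2 are transient rephrases of the same request; the two absorbing states
# represent successful rewrites A and B. The full transition matrix is partitioned
# as [[Q, R], [0, I]]: Q = transient->transient, R = transient->absorbing.
Q = np.array([[0.0, 0.5, 0.2],
              [0.1, 0.0, 0.4],
              [0.0, 0.2, 0.0]])
R = np.array([[0.3, 0.0],
              [0.1, 0.4],
              [0.0, 0.8]])

# Fundamental matrix N = (I - Q)^-1; B[i, j] is the probability of eventually
# being absorbed in absorbing state j when starting from transient state i.
N = np.linalg.inv(np.eye(Q.shape[0]) - Q)
B = N @ R

# For the original request (transient state 0), pick the rewrite most likely to succeed.
best_rewrite = int(np.argmax(B[0]))
print(B[0], "-> best rewrite:", best_rewrite)
```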

Online rewriting

Instead of relying on customers’ own rephrasings, the online rewriting mechanism uses retrieval and ranking models to generate rewrites.

Rewrites are based on customers’ habitual usage patterns with the agent. In the example below, for instance, based on the customer’s interaction history, we rewrite the query “What’s the weather in Wilkerson?” as “What’s the weather in Wilkerson, California?” — even though “What’s the weather in Wilkerson, Washington?” is the more common query across interactions.

The model does, however, include a global layer as well as a personal layer, to prevent overindexing on personalized cases (for instance, inferring that a customer who likes the Selena Gomez song “We Don’t Talk Anymore” will also like the song from Encanto “We Don’t Talk about Bruno”) and to enable the model to provide rewrites when the customer’s interaction history provides little or no guidance.

The online rewriting model’s personal layer factors in customer context, while the global layer prevents overindexing on personalized cases.

The personalized workstream and the global workstream include both retrieval and ranking models:

  • The retrieval model uses a dense-passage-retrieval (DPR) model, which maps texts into a low-dimensional, continuous space, to extract embeddings for both the index and the query. It then scores candidate rewrites by the similarity of the query and index embeddings (sketched below).
  • The ranking model combines fuzzy match (e.g., through a single-encoder structure) with various metadata to make a reranking decision.
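
As a rough sketch of the retrieval step, the example below uses a generic sentence encoder and cosine similarity as stand-ins for the production DPR model and index:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in dual encoder; the production system uses a DPR-style model trained on rewrite data.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

index_utterances = [
    "what's the weather in wilkerson california",
    "what's the weather in wilkerson washington",
    "play we don't talk anymore",
]
index_embeddings = encoder.encode(index_utterances, normalize_embeddings=True)

def retrieve_rewrite(query: str, top_k: int = 1):
    """Return the top-k candidate rewrites by cosine similarity (dot product of normalized embeddings)."""
    query_emb = encoder.encode([query], normalize_embeddings=True)[0]
    scores = index_embeddings @ query_emb
    best = np.argsort(-scores)[:top_k]
    return [(index_utterances[i], float(scores[i])) for i in best]

print(retrieve_rewrite("what's the weather in wilkerson"))
```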

We’ve deployed all three of these self-learning approaches — BERT- and Markov-chain-based offline rewriting and online rewriting — and all have made a significant difference in the quality of Alexa customers’ experience.

In experiments, we compared the BERT-based offline approach to four baseline models on six machine-annotated and two human-annotated datasets. It outperformed all baselines across the board, with improvements of as much as 16% to 17% on some of the machine-annotated datasets and almost double that improvement on the human-annotated ones.

The offline approach that uses absorbing Markov chains has rewritten tens of millions of outputs from Alexa’s automatic-speech-recognition models, and it has a win-loss ratio of 8.5:1, meaning that for every one incorrect rewrite, it has 8.5 correct ones.

And finally, in a series of A/B tests of the online rewrite engine, we found that the global rewrite alone reduced the defect rate by 13%, while the addition of the personal rewrite model reduced defects by a further 4%.

Teachable AI

Query rewrites depend on implicit signals from customers, but customers can also explicitly teach Alexa their personal preferences, such as “I’m a Warriors fan” or “I like Italian restaurants.”

Alexa’s teachable-AI mechanism can be either customer-initiated or Alexa-initiated. Alexa proactively senses teachable moments — as when, for instance, a customer repeats the same request multiple times or declares Alexa’s response unsatisfactory. And a customer can initiate a guided Q&A with Alexa with a simple cue like “Alexa, learn my preferences.”

In either case, Alexa can use the customer’s preferences to guide the very next customer interaction.

Failure point isolation

Besides recovering from defects through query rewriting, we also want to understand the root causes of defects.

Dialogue assistants like Alexa depend on multiple models that process customer requests in stages. First, a voice trigger (or “wake word”) model determines whether the user is speaking to the assistant. Then an automatic-speech-recognition (ASR) module converts the audio stream into text. This text passes to a natural-language-understanding (NLU) component that determines the user request. An entity recognition model recognizes and resolves entities, and the assistant generates the best possible response using several subsystems. Finally, the text-to-speech (TTS) model renders the response into human-like speech.

For Alexa, part of self-learning is automatically determining, when a failure occurs, which component has failed. An error in an upstream component can propagate through the pipeline, in which case multiple components may fail. Thus, we focus on the first component that fails in a way that is irrecoverable, which we call the “failure point”.

In our initial work on failure point isolation, we recognize five error points as well as a “correct” class (meaning no component failed). The possible failure points are false wake (errors in voice trigger); ASR errors; NLU errors (for example, incorrectly routing “play Harry Potter” to video instead of audiobook); entity resolution and recognition errors; and result errors (for example, playing the wrong Harry Potter movie).
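
A simplified sketch of the labeling logic is below. The component names and flags are hypothetical; the actual model predicts these classes from dialogue context and system traces rather than from ground-truth error flags:

```python
# Processing pipeline in order; the failure point is the first component whose
# error is not recovered from downstream.
PIPELINE = ["false_wake", "asr", "nlu", "entity_resolution", "result"]

def failure_point(component_errors: dict, final_action_correct: bool) -> str:
    """component_errors: component name -> True if that component erred on this turn."""
    if final_action_correct:
        # Upstream errors that the assistant recovered from are not failures.
        return "correct"
    for component in PIPELINE:
        if component_errors.get(component, False):
            return component
    return "result"  # No upstream error found, so the final result itself is at fault.

# Turn 1 from the example dialogue below: ASR misrecognizes the request and downstream
# components also fail; ASR is the failure point.
print(failure_point({"asr": True, "entity_resolution": True, "result": True},
                    final_action_correct=False))
```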

To better illustrate the failure point isolation problem, let's examine a multiturn dialogue:

Failure point isolation identifies the earliest point in the processing pipeline at which a failure occurs, and errors that the conversational agent recovers from are not classified as failures.

In the first turn, the customer is trying to open a garage door, and the conversational assistant recognizes the speech incorrectly. The entity resolution model doesn't recover from this error and also fails. Finally, the dialogue assistant fails to perform the correct action. In this case, ASR is the failure point, despite the other models’ subsequent failure.

On the second turn, the customer repeats the request. ASR makes a small error by not recognizing the article "the" in the speech, but the dialogue assistant takes the correct action. We would mark this turn as correct, as the ASR error didn't lead to downstream failure.

The last turn highlights one of the limitations of our method. The user is asking the dialogue assistant to make a sandwich, which dialogue assistants cannot do — yet. All models have worked correctly, but the user is not satisfied. In our work, we do not consider such turns defective.

On average, our best failure point isolation model achieves close to human performance across the different categories (more than 92% relative to human performance). This model uses extended dialogue context, features derived from the assistant’s logs (e.g., ASR confidence), and traces of decision-making components (e.g., NLU modules). It outperforms humans at detecting result errors and the correct class; on ASR, entity resolution, and NLU errors, it falls in the 90% to 95% range.

The day when computing fades into the environment, and we walk from room to room casually instructing embedded computing devices how we want them to behave, may still lie in the future. But at Alexa AI, we’re already a long way down that path. And we’re moving farther forward every day.
