Rohit Prasad, vice president and head scientist for Alexa AI, demonstrates interactive teaching by customers, a new Alexa capability announced last fall.

Alexa: The science must go on

Throughout the pandemic, the Alexa team has continued to invent on behalf of our customers.

COVID-19 has cost us precious lives and served as a harsh reminder that so much more needs to be done to prepare for unforeseen events. In these difficult times, we have also seen heroic efforts — from frontline health workers caring for patients night and day, to the rapid development of vaccines, to the delivery of groceries and essential items in the safest possible way given the circumstances.

Alexa’s communications capabilities are helping families connect with their loved ones during lockdown.

Alexa has also tried to help where it can. We rapidly added skills that provide information about resources for dealing with COVID-19. We donated Echo Shows and Echo Dots to healthcare providers, patients, and assisted-living facilities around the country, and Alexa’s communications capabilities — including new calling features (e.g., group calling), and the new Care Hub — are helping providers coordinate care and families connect with their loved ones during lockdown.

It has been just over a year since our schools closed down and we started working remotely. With our homes turned into offices and classrooms, one of the challenges has been keeping our kids motivated and on-task for remote learning. Skills such as the School Schedule Blueprint are helping parents like me manage their children’s remote learning and keep them excited about the future.

Despite the challenges of the pandemic, the Alexa team has shown incredible adaptability and grit, delivering scientific results that are already making a difference for our customers and will have long-lasting effects. Over the past 12 months, we have made advances in four thematic areas, making Alexa more

  1. natural and conversational: interactions with Alexa should be as free-flowing as interacting with another person, without requiring customers to use strict linguistic constructs to communicate with Alexa’s ever-growing set of skills. 
  2. self-learning and data-efficient: Alexa’s intelligence should improve without requiring manually labeled data, and it should strive to learn directly from customers. 
  3. insightful and proactive: Alexa should assist and/or provide useful information to customers by anticipating their needs.
  4. trustworthy: Alexa should have attributes like those we cherish in trustworthy people, such as discretion, fairness, and ethical behavior.

Natural and conversational 

Accurate far-field automatic speech recognition (ASR) is critical for natural interactions with Alexa. We have continued to make advances in this area, and at Interspeech 2020, we presented 12 papers, including improvements in end-to-end ASR using the recurrent-neural-network-transducer (RNN-T) architecture. ASR advances, coupled with improvements in natural-language understanding (NLU), have reduced the worldwide error rate for Alexa by more than 24% in the past 12 months.

One of Alexa Speech’s Interspeech 2020 papers, “Rescore in a flash: compact, cache efficient hashing data structures for n-gram language models”, proposes a new data structure, DashHashLM, for encoding the probabilities of word sequences in language models with a minimal memory footprint.
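The core idea of compact n-gram storage can be illustrated with a toy sketch: hash each n-gram to a fixed-width integer key so the table never stores the word strings themselves. This is only the general intuition behind cache-efficient n-gram hashing, not the DashHashLM data structure itself; all names here are illustrative.

```python
# Minimal sketch of hashing n-grams for compact probability lookup.
# Illustrates the general idea behind cache-efficient n-gram storage;
# this is NOT the DashHashLM implementation from the paper.
import hashlib

def ngram_key(tokens):
    """Hash an n-gram to a fixed-width 64-bit key instead of storing strings."""
    digest = hashlib.md5(" ".join(tokens).encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "little")

class HashedNGramLM:
    def __init__(self):
        self.table = {}  # 64-bit key -> log-probability

    def add(self, tokens, logprob):
        self.table[ngram_key(tokens)] = logprob

    def score(self, tokens, backoff=-10.0):
        # unseen n-grams fall back to a flat penalty (a real LM backs off properly)
        return self.table.get(ngram_key(tokens), backoff)

lm = HashedNGramLM()
lm.add(("play", "some", "music"), -1.2)
print(lm.score(("play", "some", "music")))  # -1.2
print(lm.score(("play", "some", "jazz")))   # unseen: -10.0
```

A production structure would also pack the table into contiguous memory for cache efficiency; the dictionary here just keeps the sketch short.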

Customers depend on Alexa’s ability to answer single-shot requests, but to continue to provide new, delightful experiences, we are teaching Alexa to accomplish complex goals that require multiturn dialogues. In February, we announced the general release of Alexa Conversations, a capability that makes it easy for developers to build skills that engage customers in dialogues. The developer simply provides APIs (application programming interfaces), a list of entity types invoked in the skill, and a small set of sample dialogues that illustrate interactions with the skill’s capabilities. 

Alexa Conversations’ deep-learning-based dialogue manager takes care of the rest by predicting numerous alternate ways in which a customer might engage with the skill. Nearly 150 skills — such as iRobot Home and Art Museum — have now been built with Alexa Conversations, with another 100 under way, and our internal teams have launched capabilities such as Alexa Greetings (where Alexa answers the Ring doorbell on behalf of customers) and “what to read” with the same underlying capability.  

Further, to ensure that existing skills built without Alexa Conversations understand customer requests more accurately, we migrated hundreds of skills to deep neural networks (as opposed to conditional random fields). Migrated skills are seeing increases in understanding accuracy of 15% to 23% across locales. 

Alexa’s skills are ever expanding, with over 100,000 skills built worldwide by external developers. As that number has grown, discovering new skills has become a challenge. Even when customers know of a skill, they can have trouble remembering its name or how to interact with it. 

To make skills more discoverable and eliminate the need to say “Alexa, ask <skill X> to do <Y>,” we launched a deep-learning-based capability that routes utterances that do not explicitly mention a skill’s name to relevant skills. Thousands of skills are now being discovered naturally, and in preview, they received an average of 15% more traffic. At last year’s International Conference on Acoustics, Speech, and Signal Processing (ICASSP), we presented a novel method for automatically labeling training data for Alexa’s skill selection model, which is crucial to improving utterance routing accuracy as the number of skills continues to grow.  
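Name-free routing of this kind can be sketched as matching an utterance against skill descriptions in a shared representation space. The toy bag-of-words vectors and skill names below are illustrative stand-ins for Alexa’s actual learned models.

```python
# Hedged sketch of name-free skill routing: embed the utterance and each
# skill description, then route to the most similar skill by cosine
# similarity. Bag-of-words vectors stand in for learned embeddings;
# the skill names and descriptions are hypothetical.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

skills = {
    "AmbientSounds": "play relaxing ambient sounds like rain and ocean waves",
    "RecipeHelper": "find recipes and cooking instructions for meals",
}

def route(utterance):
    vec = embed(utterance)
    return max(skills, key=lambda s: cosine(vec, embed(skills[s])))

print(route("play rain sounds"))              # AmbientSounds
print(route("find recipes for cooking pasta"))  # RecipeHelper
```

In production, a single model would also decide when *no* skill is a good match, so Alexa doesn’t route low-confidence utterances at all.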

A constituency tree featuring syntactic-distance measures.
To make the prosody of Alexa's speech more natural, the Amazon Text-to-Speech team uses constituency trees to measure the syntactic distance (orange circles) between words of an utterance, a good indicator of where phrasing breaks or prosodic resets should occur.
Credit: Glynis Condon

As we’ve been improving Alexa’s understanding capabilities, our Text-to-Speech (TTS) synthesis team has been working to increase the naturalness of Alexa’s speech. We have developed prosodic models that enable Alexa to vary patterns of intonation and inflection to fit different conversational contexts. 
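The syntactic-distance idea from the figure above can be sketched directly: in a constituency parse, adjacent words separated by a higher constituent boundary get a larger distance, suggesting a stronger prosodic break. The nested-list parse and scoring below are a toy illustration, not the production TTS front end.

```python
# Hedged sketch: derive a syntactic-distance score between adjacent words
# from a bracketed constituency parse. Adjacent words whose lowest common
# ancestor sits higher in the tree get a larger distance, i.e. a likelier
# prosodic break. Toy code; nested lists stand in for constituents.
def leaf_paths(tree, path=(), out=None):
    """Collect (word, path-of-child-indices) pairs for every leaf."""
    if out is None:
        out = []
    if isinstance(tree, str):
        out.append((tree, path))
    else:
        for i, child in enumerate(tree):
            leaf_paths(child, path + (i,), out)
    return out

def adjacent_distances(tree):
    leaves = leaf_paths(tree)
    dists = []
    for (w1, p1), (w2, p2) in zip(leaves, leaves[1:]):
        common = 0  # depth of the lowest common ancestor
        for a, b in zip(p1, p2):
            if a != b:
                break
            common += 1
        # fewer shared ancestors -> larger distance -> stronger break
        dists.append((w1, w2, max(len(p1), len(p2)) - common))
    return dists

# [[NP the dog] [VP barked [PP at [NP the mailman]]]]
tree = [["the", "dog"], ["barked", ["at", ["the", "mailman"]]]]
for w1, w2, d in adjacent_distances(tree):
    print(w1, w2, d)
```

The largest score lands between “dog” and “barked”, the NP/VP boundary, which is exactly where a prosodic reset sounds natural.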

This is a first milestone on the path to contextual language generation and speech synthesis. Depending on the conversational context and the speaking attributes of the customer, Alexa will vary its response — both the words chosen and the speaking style, including prosody, stress, and intonation. We also made progress in detecting tone of voice, which can be an additional signal for adapting Alexa’s responses.

Humor is a critical element of human-like conversational abilities. However, recognizing humor and generating humorous responses is one of the most challenging tasks in conversational AI. University teams participating in the Alexa Prize socialbot challenge have made significant progress in this area by identifying opportunities to use humor in conversation and selecting humorous phrases and jokes that are contextually appropriate.

One of our teams is identifying humor in product reviews by detecting incongruity between product titles and questions asked by customers. For instance, the question “Does this make espresso?” might be reasonable when applied to a high-end coffee machine, but applied to a Swiss Army knife, it’s probably a joke. 

We live in a multilingual and multicultural world, and this pandemic has made it even more important for us to connect across language barriers. In 2019, we launched a bilingual version of Alexa — i.e., customers could address the same device in US English or Spanish without asking Alexa to switch languages on every request. However, Alexa’s Spanish responses were in a different voice than its English responses.  

By leveraging advances in neural text-to-speech (much the way we had used multilingual learning techniques to improve language understanding), we taught the original Alexa voice — which was based on English-only recordings — to speak perfectly accented U.S. Spanish. 

To further break down language barriers, in December we launched two-way language translation, which enables Alexa to act as an interpreter for customers speaking different languages. Alexa can now translate on the fly between English and six other languages on the same device.

In September 2020, I had the privilege of demonstrating natural turn-taking (NTT), a new capability that has the potential to make Alexa even more useful and delightful for our customers. With NTT, Alexa uses visual cues, in combination with acoustic and linguistic information, to determine whether a customer is addressing Alexa or other people in the household — even when there is no wake word. Our teams are working hard on bringing NTT to our customers later this year so that Alexa can participate in conversations just like a family member or a friend.  

Self-learning and data-efficient 

In AI, one definition of generalization is the ability to robustly handle novel situations and learn from them with minimal human supervision. Two years ago, we introduced the ability for Alexa to automatically correct errors in its understanding without requiring any manual labeling. This self-learning system uses implicit feedback (e.g., when a customer interrupts a response to rephrase a request) to automatically revise Alexa’s handling of requests that fail. This learning method is automatically addressing 15% of defects, as quickly as a few hours after detection; with supervised learning, these defects would have taken weeks to address. 
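The implicit-feedback loop can be sketched simply: when a failed request is quickly followed by a successful rephrase, record a rewrite so the failing form is handled correctly next time. This is toy logic under that one assumption, not the production self-learning system.

```python
# Hedged sketch of self-learning from implicit feedback: a failure followed
# within a short window by a successful rephrase yields a learned rewrite.
# Toy heuristic; the real system models this statistically at scale.
class RewriteLearner:
    def __init__(self, window_seconds=30):
        self.window = window_seconds
        self.rewrites = {}          # failing utterance -> learned correction
        self.last_failure = None    # (utterance, timestamp)

    def observe(self, utterance, succeeded, timestamp):
        if not succeeded:
            self.last_failure = (utterance, timestamp)
        elif self.last_failure is not None:
            failed, t = self.last_failure
            if timestamp - t <= self.window:
                self.rewrites[failed] = utterance  # treat success as the fix
            self.last_failure = None

    def rewrite(self, utterance):
        return self.rewrites.get(utterance, utterance)

learner = RewriteLearner()
learner.observe("play camila cabello havana", succeeded=False, timestamp=0)
learner.observe("play havana by camila cabello", succeeded=True, timestamp=5)
print(learner.rewrite("play camila cabello havana"))  # play havana by camila cabello
```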

Diagram depicting example of paraphrase alignment
We won a best-paper award at last year's International Conference on Computational Linguistics for a self-learning system that finds the best mapping from a successful request to an unsuccessful one, then transfers the training labels automatically.
Credit: Glynis Condon

At December 2020’s International Conference on Computational Linguistics, our scientists won a best-paper award for a complementary approach to self-learning. Where the earlier system overwrites the outputs of Alexa’s NLU models, the newer system uses implicit feedback to create automatically labeled training examples for those models. This approach is particularly promising for the long tail of unusually phrased requests, and it can be used in conjunction with the existing self-learning system.

In parallel, we have been inventing methods that enable Alexa to add new capabilities, intents, and concepts with as little manually labeled data as possible — often by generalizing from one task to another. For example, in a paper at last year’s ACL Workshop on NLP for Conversational AI, we demonstrated the value of transfer learning from reading comprehension to other natural-language-processing tasks, resulting in the best published results on few-shot learning for dialogue state tracking in low-data regimes.

Similarly, at this year’s Spoken Language Technology conference, we showed how to combine two existing approaches to few-shot learning — prototypical networks and data augmentation — to quickly and accurately learn new intents.
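The prototypical-network half of that recipe is easy to sketch: each intent’s prototype is the mean of its support-example embeddings, and a query is assigned to the nearest prototype. The letter-frequency encoder and intent names below are toy stand-ins for a learned encoder.

```python
# Hedged sketch of prototypical networks for few-shot intent classification.
# A letter-frequency vector stands in for a learned text encoder; intents
# and examples are hypothetical.
import numpy as np

def encode(text):
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec / max(vec.sum(), 1)

def prototypes(support):
    # one prototype per intent: the mean of its support-example embeddings
    return {intent: np.mean([encode(x) for x in examples], axis=0)
            for intent, examples in support.items()}

def classify(query, protos):
    q = encode(query)
    return min(protos, key=lambda intent: np.linalg.norm(q - protos[intent]))

support = {
    "SetTimer": ["set a timer for five minutes", "start a ten minute timer"],
    "PlayMusic": ["play some jazz", "put on my workout playlist"],
}
protos = prototypes(support)
print(classify("set a timer for twenty minutes", protos))  # SetTimer
```

Because a prototype is just a mean, adding a new intent requires only a handful of examples and no retraining of the encoder, which is what makes the approach attractive for few-shot learning.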

Human-like conversational abilities require common sense — something that is still elusive for conversational-AI services, despite the massive progress due to deep learning. We received the best-paper award at the Empirical Methods in Natural Language Processing (EMNLP) 2020 Workshop on Deep Learning Inside Out (DeeLIO) for our work on infusing commonsense knowledge graphs explicitly and implicitly into large pre-trained language models to give machines greater social intelligence. We will continue to build on such techniques to make interactions with Alexa more intuitive for our customers, without requiring a large quantity of annotated data. 

In December 2020, we launched a new feature that allows customers to teach Alexa new concepts. For instance, if a customer says, “Alexa, set the living room light to study mode”, Alexa might now respond, “I don't know what study mode is. Can you teach me?” Alexa extracts a definition from the customer’s answer, and when the customer later makes the same request — or a similar request — Alexa responds with the learned action. 

Alexa uses multiple deep-learning-based parsers to enable such explicit teaching. First, Alexa detects spans in requests that it has trouble understanding. Next, it engages in a clarification dialogue to learn the new concept. Thanks to this novel capability, customers are able to customize Alexa for their needs, and Alexa is learning thousands of new concepts in the smart-home domain every day, without any manual labeling. We will continue to build on this success and develop more self-learning techniques to make Alexa more useful and personal for our customers.
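The teaching flow described above can be sketched as a small lookup-then-ask loop: resolve a setting if it is known or previously taught, and otherwise signal that a clarification question is needed. The settings and definitions below are hypothetical; the real system uses deep-learning parsers, not dictionaries.

```python
# Hedged sketch of explicit concept teaching: an unresolvable span like
# "study mode" triggers a clarification, the customer's definition is
# stored, and later requests replay the learned action. Toy logic only.
KNOWN_SETTINGS = {"bright": {"brightness": 100}, "dim": {"brightness": 20}}

class ConceptLearner:
    def __init__(self):
        self.learned = {}  # customer-taught concepts

    def handle(self, setting):
        if setting in KNOWN_SETTINGS:
            return KNOWN_SETTINGS[setting]
        if setting in self.learned:
            return self.learned[setting]
        return None  # unknown span: ask "Can you teach me?"

    def teach(self, setting, definition):
        self.learned[setting] = definition

alexa = ConceptLearner()
print(alexa.handle("study mode"))   # None: triggers the clarification dialogue
alexa.teach("study mode", {"brightness": 60})
print(alexa.handle("study mode"))   # {'brightness': 60}
```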

Insightful and proactive

Alexa-enabled ambient devices have revolutionized daily convenience, enabling us to get what we need simply by asking for it. However, the utility of these devices and endpoints does not need to be limited to customer-initiated requests. Instead, Alexa should anticipate customer needs and seamlessly assist in meeting those needs. Smart hunches, location-based reminders, and discovery of routines are a few ways in which Alexa is already helping customers. 

Illustration of Alexa inferring a customer asking about weather at the beach may be planning a beach trip.
In this interaction, Alexa infers that a customer who asks about the weather at the beach may be interested in other information that could be useful for planning a beach trip.
Credit: Glynis Condon

Another way for Alexa to be more useful to our customers is to predict customers’ goals that span multiple disparate skills. For instance, if a customer asks, “How long does it take to steep tea?”, Alexa might answer, “Five minutes is a good place to start,” then follow up by asking, “Would you like me to set a timer for five minutes?” In 2020, we launched an initial version of Alexa’s ability to anticipate and complete multi-skill goals without any explicit preprogramming.  

While this ability makes the complex seem simple, underneath, it depends on multiple deep-learning models. A “trigger model” decides whether to predict the customer’s goal at all, and if it decides it should, it suggests a skill to handle the predicted goal. But the skills it suggests are identified by another model that relies on information-theoretic analyses of input utterances, together with subsidiary models that assess features such as whether the customer was trying to rephrase a prior command, or whether the direct goal and the latent goal have common entities or values.  
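The two-stage flow can be sketched with hand-written rules standing in for the models: a trigger stage decides whether to offer a follow-up at all, and a suggestion stage maps the predicted goal to an action. The threshold, scores, and utterances below are all hypothetical.

```python
# Hedged sketch of the two-stage latent-goal flow: a trigger model decides
# whether to predict a goal at all; a suggestion model proposes the
# follow-up. Hand-written scores stand in for the deep-learning models.
TRIGGER_THRESHOLD = 0.8

# toy association scores between requests and follow-up goals
GOAL_SCORES = {
    "how long does it take to steep tea": ("set a timer for five minutes", 0.9),
    "what time is it": ("", 0.1),
}

def suggest_followup(utterance):
    key = utterance.lower().strip(" ?!.")
    goal, score = GOAL_SCORES.get(key, ("", 0.0))
    if score < TRIGGER_THRESHOLD:  # trigger model: stay quiet when unsure
        return None
    return goal                    # suggestion model: the predicted goal

print(suggest_followup("How long does it take to steep tea?"))  # set a timer for five minutes
print(suggest_followup("What time is it?"))                     # None
```

Keeping the trigger stage separate matters: a proactive suggestion that misfires is worse than none, so the system needs an explicit way to abstain.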

Trustworthy

We have made significant advances in areas that are key to making Alexa more trusted by customers. In the field of privacy-preserving machine learning, for instance, we have been exploring differential privacy, a theoretical framework for evaluating the privacy protections offered by systems that generate aggregate statistics from individuals’ data. 

At the EMNLP 2020 Workshop on Privacy in Natural Language Processing, we presented a paper that proposes a new way to offer metric-differential-privacy assurances by adding so-called elliptical noise to training data for machine learning systems, and at this year’s Conference of the European Chapter of the Association for Computational Linguistics, we’ll present a technique for transforming texts that preserves their semantic content but removes potentially identifying information. Both methods significantly improve on the privacy protections afforded by older approaches while leaving the performance of the resulting systems unchanged.

A new approach to protecting privacy in machine learning systems that uses elliptical noise (right) rather than the conventional spherical noise (left) to perturb training data significantly improves privacy protections while leaving the performance of the resulting systems unchanged.
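The spherical-vs.-elliptical contrast can be sketched numerically: spherical noise perturbs every dimension of an embedding with the same scale, while elliptical noise uses per-dimension scales, stretching the perturbation into an ellipse. This conveys the intuition only, not the paper’s exact privacy mechanism or calibration.

```python
# Hedged sketch of perturbing training embeddings with noise for
# metric-differential-privacy-style protection. Spherical noise is
# isotropic; "elliptical" noise scales each dimension differently.
# Intuition only; not the calibrated mechanism from the paper.
import numpy as np

rng = np.random.default_rng(0)

def spherical_noise(embedding, scale=0.1):
    # same noise scale in every dimension
    return embedding + rng.normal(0.0, scale, size=embedding.shape)

def elliptical_noise(embedding, scales):
    # per-dimension scales stretch the noise into an ellipse
    return embedding + rng.normal(0.0, 1.0, size=embedding.shape) * scales

emb = np.array([0.5, -1.0, 0.25])
print(spherical_noise(emb))
print(elliptical_noise(emb, scales=np.array([0.05, 0.2, 0.05])))
```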


We have also made Alexa’s answers to information-centric questions more trustworthy by expanding our knowledge graph and improving our neural semantic parsing and web-based information retrieval. If, however, the sources of information used to produce a knowledge graph encode harmful social biases — even as a matter of historical accident — the knowledge graph may encode them as well. In a pair of papers presented last year, our scientists devised techniques for both identifying and remediating instances of bias in knowledge graphs, to help ensure that those biases don’t leak into Alexa’s answers to questions.

A two-dimensional representation of our method for measuring bias in knowledge graph embeddings.
A two-dimensional representation of the method for measuring bias in knowledge graph embeddings that we presented last year. In each diagram, the blue dots labeled person1 indicate the shift in an embedding as we tune its parameters. The orange arrows represent relation vectors and the orange dots the sums of those vectors and the embeddings. As we shift the gender relation toward maleness, the profession relation shifts away from nurse and closer to doctor, indicating gender bias.
Credit: Glynis Condon
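The probe illustrated above can be sketched with translational (TransE-style) embeddings: nudge a person embedding along the gender relation toward maleness, then check whether person + has_profession lands closer to “doctor” or “nurse”. The 2-D vectors below are toy values, not a trained model.

```python
# Hedged sketch of probing a translational knowledge graph embedding for
# bias: shift a person embedding along the "gender" relation, then see
# whether person + profession_rel moves closer to "doctor" or "nurse".
# Toy 2-D vectors; a real probe operates on trained high-dimensional
# embeddings.
import numpy as np

gender_rel = np.array([1.0, 0.0])      # points toward maleness
profession_rel = np.array([0.0, 1.0])
doctor = np.array([0.8, 1.0])
nurse = np.array([-0.8, 1.0])

def profession_gap(person):
    """Positive: closer to 'doctor'; negative: closer to 'nurse'."""
    pred = person + profession_rel
    return np.linalg.norm(pred - nurse) - np.linalg.norm(pred - doctor)

neutral = np.array([0.0, 0.0])
shifted = neutral + 0.5 * gender_rel   # tune toward maleness
print(profession_gap(neutral))  # 0.0: equidistant
print(profession_gap(shifted))  # positive: shifting gender moved the
                                # profession prediction toward doctor
```

A gap that moves with the gender shift, as here, is the signature of the bias the figure describes; remediation techniques then adjust the embeddings to flatten that dependence.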

Similarly, the language models that many speech recognition and natural-language-understanding applications depend on are trained on corpora of publicly available texts; if those data reflect biases, so will the resulting models. At the recent ACM Conference on Fairness, Accountability, and Transparency, Alexa AI scientists presented a new data set that can be used to test language models for bias and a new metric for quantitatively evaluating the test results.

Still, we recognize that a lot more needs to be done in AI in the areas of fairness and ethics, and to that end, partnership with universities and other dedicated research organizations can be a force multiplier. As a case in point, our collaboration with the National Science Foundation to accelerate research on fairness in AI recently entered its second year, with a new round of grant recipients named in February 2021.

And in January 2021, we announced the creation of the Center for Secure and Trusted Machine Learning, a collaboration with the University of Southern California that will support USC and Amazon researchers in the development of novel approaches to privacy-preserving ML solutions.

Strengthening the research community

I am particularly proud that, despite the effort required to bring all these advances to fruition, our scientists have remained actively engaged with the broader research community in many other areas. To choose just a few examples:

  • In August, we announced the winners of the third Alexa Prize Grand Challenge to develop conversational-AI systems, or socialbots, and in September, we opened registration for the fourth. Earlier this month, we announced a new Alexa Prize research track, the TaskBot Challenge, in which university teams will compete to develop multimodal agents that assist customers in completing tasks requiring multiple steps and decisions.
  • In September, we announced the creation of the Columbia Center of Artificial Intelligence Technology, a collaboration with Columbia Engineering that will be a hub of research, education, and outreach programs.
  • In October, we launched the DialoGLUE challenge, together with a set of benchmark models, to encourage research on conversational generalizability, or the ability of dialogue agents trained on one task to adapt easily to new tasks.


We are grateful for the amazing work of our fellow researchers in the medical, pharmaceutical, and biotech communities who have developed COVID-19 vaccines in record time.

Thanks to their scientific contributions, we now firmly believe that we will prevail against this pandemic. 

I am looking forward to the end of this pandemic and the chance to work even more closely with the Alexa teams and the broader scientific community to make further advances in conversational AI and enrich our customers’ lives. 

Research areas

Related content

CA, ON, Toronto
Are you motivated to explore research in ambiguous spaces? Are you interested in conducting research that will improve associate, employee and manager experiences at Amazon? Do you want to work on an interdisciplinary team of scientists that collaborate rather than compete? Join us at PXT Central Science! The People eXperience and Technology Central Science Team (PXTCS) uses economics, behavioral science, statistics, and machine learning to proactively identify mechanisms and process improvements which simultaneously improve Amazon and the lives, wellbeing, and the value of work to Amazonians. We are an interdisciplinary team that combines the talents of science and engineering to develop and deliver solutions that measurably achieve this goal. Key job responsibilities As an Applied Scientist for People Experience and Technology (PXT) Central Science, you will be working with our science and engineering teams, specifically on re-imagining Generative AI Applications and Generative AI Infrastructure for HR. Applying Generative AI to HR has unique challenges such as privacy, fairness, and seamlessly integrating Enterprise Knowledge and World Knowledge and knowing which to use when. In addition, the team works on some of Amazon’s most strategic technical investments in the people space and support Amazon’s efforts to be Earth’s Best Employer. In this role you will have a significant impact on 1.5 million Amazonians and the communities Amazon serves and ample scope to demonstrate scientific thought leadership and scientific impact in addition to business impact. You will also play a critical role in the organization's business planning, work closely with senior leaders to develop goals and resource requirements, influence our long-term technical and business strategy, and help hire and develop science and engineering talent. 
You will also provide support to business partners, helping them use the best scientific methods and science-driven tools to solve current and upcoming challenges and deliver efficiency gains in a changing marke About the team The AI/ML team in PXTCS is working on building Generative AI solutions to reimagine Corp employee and Ops associate experience. Examples of state-of-the-art solutions are Coaching for Amazon employees (available on AZA) and reinventing Employee Recruiting and Employee Listening.
CA, ON, Toronto
Conversational AI ModEling and Learning (CAMEL) team is part of Amazon Devices organization where our mission is to build a best-in-class Conversational AI that is intuitive, intelligent, and responsive, by developing superior Large Language Models (LLM) solutions and services which increase the capabilities built into the model and which enable utilizing thousands of APIs and external knowledge sources to provide the best experience for each request across millions of customers and endpoints. We are looking for a passionate, talented, and resourceful Applied Scientist in the field of LLM, Artificial Intelligence (AI), Natural Language Processing (NLP), Recommender Systems and/or Information Retrieval, to invent and build scalable solutions for a state-of-the-art context-aware conversational AI. A successful candidate will have strong machine learning background and a desire to push the envelope in one or more of the above areas. The ideal candidate would also have hands-on experiences in building Generative AI solutions with LLMs, enjoy operating in dynamic environments, be self-motivated to take on challenging problems to deliver big customer impact, moving fast to ship solutions and then iterating on user feedback and interactions. Key job responsibilities As an Applied Scientist, you will leverage your technical expertise and experience to collaborate with other talented applied scientists and engineers to research and develop novel algorithms and modeling techniques to reduce friction and enable natural and contextual conversations. You will analyze, understand and improve user experiences by leveraging Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in artificial intelligence. You will work on core LLM technologies, including Prompt Engineering and Optimization, Supervised Fine-Tuning, Learning from Human Feedback, Evaluation, Self-Learning, etc. 
Your work will directly impact our customers in the form of novel products and services.
CA, ON, Toronto
Conversational AI ModEling and Learning (CAMEL) team is part of Amazon Devices organization where our mission is to build a best-in-class Conversational AI that is intuitive, intelligent, and responsive, by developing superior Large Language Models (LLM) solutions and services which increase the capabilities built into the model and which enable utilizing thousands of APIs and external knowledge sources to provide the best experience for each request across millions of customers and endpoints. We are looking for a passionate, talented, and resourceful Applied Scientist in the field of LLM, Artificial Intelligence (AI), Natural Language Processing (NLP), Recommender Systems and/or Information Retrieval, to invent and build scalable solutions for a state-of-the-art context-aware conversational AI. A successful candidate will have strong machine learning background and a desire to push the envelope in one or more of the above areas. The ideal candidate would also have hands-on experiences in building Generative AI solutions with LLMs, enjoy operating in dynamic environments, be self-motivated to take on challenging problems to deliver big customer impact, moving fast to ship solutions and then iterating on user feedback and interactions. Key job responsibilities As an Applied Scientist, you will leverage your technical expertise and experience to collaborate with other talented applied scientists and engineers to research and develop novel algorithms and modeling techniques to reduce friction and enable natural and contextual conversations. You will analyze, understand and improve user experiences by leveraging Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in artificial intelligence. You will work on core LLM technologies, including Prompt Engineering and Optimization, Supervised Fine-Tuning, Learning from Human Feedback, Evaluation, Self-Learning, etc. 
Your work will directly impact our customers in the form of novel products and services.
US, CA, San Diego
Do you want to join an innovative team of scientists who use machine learning and statistical techniques to help Amazon provide the best customer experience by preventing eCommerce fraud? Are you excited by the prospect of analyzing and modeling terabytes of data and creating state-of-the-art algorithms to solve real world problems? Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company? Do you enjoy collaborating in a diverse team environment? If yes, then you may be a great fit to join the Amazon Buyer Risk Prevention (BRP) Machine Learning group. We are looking for a talented scientist who is passionate to build advanced algorithmic systems that help manage safety of millions of transactions every day. Key job responsibilities Use machine learning and statistical techniques to create scalable risk management systems Learning and understanding large amounts of Amazon’s historical business data for specific instances of risk or broader risk trends Design, development and evaluation of highly innovative models for risk management Working closely with software engineering teams to drive real-time model implementations and new feature creations Working closely with operations staff to optimize risk management operations, Establishing scalable, efficient, automated processes for large scale data analyses, model development, model validation and model implementation Tracking general business activity and providing clear, compelling management reporting on a regular basis Research and implement novel machine learning and statistical approaches
US, MA, Boston
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to build industry-leading Generative Artificial Intelligence (GenAI) technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As a Applied Scientist with the AGI team, you will work with talented peers to lead the development of novel algorithms and modeling techniques, to advance the state of the art with LLMs. Your work will directly impact our customers in the form of products and services that make use of speech and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in spoken language understanding. About the team The AGI team has a mission to push the envelope in GenAI with LLMs and multimodal systems, in order to provide the best-possible experience for our customers.
US, WA, Seattle
The XCM (Cross Channel Cross-Category Marketing) team seeks an Applied Scientist to revolutionize our marketing strategies. XCM's mission is to build the most measurably effective, creatively impactful, and cross-channel campaigning capabilities possible, with the aim of growing "big-bet" programs, strengthening positive brand perceptions, and increasing long-term free cash flow. As a science team, we're tackling complex challenges in marketing incrementality measurement, optimization and audience segmentation. In this role, you'll collaborate with a diverse team of scientists and economists to build and enhance causal measurement, optimization and prediction models for Amazon's global multi-billion dollar fixed marketing budget. You'll also work closely with various teams to develop scientific roadmaps, drive innovation, and influence key resource allocation decisions. Key job responsibilities 1) Innovating scalable marketing methodologies using causal inference and machine learning. 2) Developing interpretable models that provide actionable business insights. 3) Collaborating with engineers to automate and scale scientific solutions. 4) Engaging with stakeholders to ensure effective adoption of scientific products. 5) Presenting findings to the Amazon Science community to promote excellence and knowledge-sharing.
US, WA, Seattle
Do you want to join an innovative team of scientists who use machine learning and statistical techniques to help Amazon provide the best customer experience by preventing eCommerce fraud? Are you excited by the prospect of analyzing and modeling terabytes of data and creating state-of-the-art algorithms to solve real world problems? Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company? Do you enjoy collaborating in a diverse team environment? If yes, then you may be a great fit to join the Amazon Buyer Risk Prevention (BRP) Machine Learning group. We are looking for a talented scientist who is passionate to build advanced algorithmic systems that help manage safety of millions of transactions every day. Key job responsibilities Use machine learning and statistical techniques to create scalable risk management systems Learning and understanding large amounts of Amazon’s historical business data for specific instances of risk or broader risk trends Design, development and evaluation of highly innovative models for risk management Working closely with software engineering teams to drive real-time model implementations and new feature creations Working closely with operations staff to optimize risk management operations, Establishing scalable, efficient, automated processes for large scale data analyses, model development, model validation and model implementation Tracking general business activity and providing clear, compelling management reporting on a regular basis Research and implement novel machine learning and statistical approaches
US, WA, Seattle
The Global Cross-Channel and Cross-Category Marketing (XCM) org is seeking an experienced Economist to join our team. XCM’s mission is to be the most measurably effective and creatively breakthrough marketing organization in the world, in order to strengthen the brand, grow the business, and reduce cost for Amazon overall. We achieve this through scaled campaigning in support of brands, categories, and audiences, which aims to create the maximum incremental impact for Amazon as a whole by driving the Amazon flywheel. This is a high-impact role with the opportunity to lead the development of state-of-the-art, scalable models to measure the efficacy and effectiveness of a new marketing channel. In this critical role, you will leverage your deep expertise in causal inference to design and implement robust measurement frameworks that provide actionable insights to drive strategic business decisions.

Key responsibilities:
- Develop advanced econometric and statistical models to rigorously evaluate the causal incremental impact of marketing campaigns on customer perception and behavior.
- Collaborate cross-functionally with marketing, product, data science, and engineering teams to define the measurement strategy and ensure alignment on objectives.
- Leverage large, complex datasets to uncover hidden patterns and trends, extracting meaningful insights that inform marketing optimization and investment decisions.
- Work with engineers, applied scientists, and product managers to automate the models in a production environment.
- Stay up to date with the latest research and methodological advancements in causal inference, causal ML, and experiment design to continuously enhance the team's capabilities.
- Effectively communicate analysis findings, recommendations, and their business implications to key stakeholders, including senior leadership.
- Mentor and guide junior economists, fostering a culture of analytical excellence and innovation.
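As a toy illustration of the incrementality measurement this role centers on, a difference-in-differences estimate compares the change in an outcome for regions exposed to a campaign against the change in comparable unexposed regions. All numbers below are synthetic, and the setup is a simplified sketch of one common causal-inference technique, not XCM's actual methodology.

```python
# Illustrative difference-in-differences (DiD) estimate of a campaign's
# incremental effect. All numbers are synthetic.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Incremental lift = (treated change) - (control change).

    The control group's change proxies for what would have happened to
    the treated group without the campaign (parallel-trends assumption).
    """
    return (treated_post - treated_pre) - (control_post - control_pre)

# Mean weekly conversions per region, before and after a campaign launch.
treated_pre, treated_post = 100.0, 130.0  # campaign shown in these regions
control_pre, control_post = 100.0, 110.0  # campaign withheld here

lift = did_estimate(treated_pre, treated_post, control_pre, control_post)
# lift = 20.0: estimated incremental conversions per week from the campaign
```

In practice such estimates come with regression adjustment, standard errors, and robustness checks on the parallel-trends assumption, but the core contrast is the one computed above.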
US, WA, Seattle
We are open to hiring candidates to work out of one of the following locations: Seattle, WA, USA.

Do you love using data to solve complex problems? Are you interested in innovating and developing world-class big-data solutions? We have the career for you! The EPP Analytics team is seeking an exceptional Data Scientist to recommend, design, and deliver new advanced analytics and science innovations end to end, partnering closely with our security/software engineers and response investigators. Your work enables faster data-driven decision making for Preventive and Response teams by providing them with data-management tools, actionable insights, and an easy-to-use reporting experience. The ideal candidate will be passionate about working with big data sets and have the expertise to use those data sets to derive insights, drive the science roadmap, and foster growth.

Key job responsibilities
- As a Data Scientist (DS) in EPP Analytics, you will do causal data science, build predictive models, conduct simulations, create visualizations, and influence data science practice across the organization.
- Provide insights by analyzing historical data.
- Create experiments and prototype implementations of new learning algorithms and prediction techniques.
- Research and build machine learning algorithms that improve insider-threat risk detection.

A day in the life
No two days are the same on the Insider Risk teams: the nature of our work and a constantly shifting threat landscape mean that some days you'll be working with an internal service team to find anomalous use of their data, and other days you'll be working with IT teams to build improved controls. Some days you'll be busy writing detections, mentoring, or running design-review meetings. The EPP Analytics team is made up of SDEs and Security Engineers who partner with Data Scientists to create big-data solutions and continue to raise the bar for the EPP organization.
As a member of the team, you will have the opportunity to work on challenging data-modeling solutions, new and innovative QuickSight-based reporting, and data pipeline and process-improvement projects.

About the team

Diverse Experiences
Amazon Security values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let that stop you from applying.

Why Amazon Security?
At Amazon, security is central to maintaining customer trust and delivering delightful customer experiences. Our organization is responsible for creating and maintaining a high bar for security across all of Amazon’s products and services. We offer talented security professionals the chance to accelerate their careers with opportunities to build experience in a wide variety of areas, including cloud, devices, retail, entertainment, healthcare, operations, and physical stores.

Inclusive Team Culture
In Amazon Security, it’s in our nature to learn and be curious. Ongoing DEI events and learning experiences inspire us to continue learning and to embrace our uniqueness. Addressing the toughest security challenges requires that we seek out and celebrate a diversity of ideas, perspectives, and voices.

Training & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, training, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of your life at home, which is why flexible work hours and arrangements are part of our culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve.
IN, KA, Bengaluru
Do you want to join an innovative team of scientists who use machine learning and statistical techniques to create state-of-the-art solutions that provide better value to Amazon’s customers? Do you want to build and deploy advanced algorithmic systems that help optimize millions of transactions every day? Are you excited by the prospect of analyzing and modeling terabytes of data to solve real-world problems? Do you like to own end-to-end business problems and metrics and directly impact the profitability of the company? Do you like to innovate and simplify? If yes, then you may be a great fit for the Machine Learning and Data Sciences team for India Consumer Businesses. If you have an entrepreneurial spirit, know how to deliver, love to work with data, are deeply technical, highly innovative, and long for the opportunity to build solutions to challenging problems that directly impact the company's bottom line, we want to talk to you.

Major responsibilities
- Use machine learning and analytical techniques to create scalable solutions for business problems
- Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes
- Design, develop, evaluate, and deploy innovative and highly scalable models for predictive learning
- Research and implement novel machine learning and statistical approaches
- Work closely with software engineering teams to drive real-time model implementations and new feature creation
- Work closely with business owners and operations staff to optimize various business operations
- Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation, and model implementation
- Mentor other scientists and engineers in the use of ML techniques