Advances in trustworthy machine learning at Alexa AI

The team’s latest research on privacy-preserving machine learning, federated learning, and bias mitigation.

At Amazon, we take the protection of customer data very seriously. We are also committed to eliminating the biases that can exist in off-the-shelf language models — such as GPT-3 and RoBERTa — that are the basis of most modern natural-language processing. Trained on public texts, these language models are known to reflect the biases implicit in those texts.


These two topics — privacy protection and fairness — are at the core of trustworthy machine learning, an important area of research at Alexa AI. In 2021, we made contributions in the following areas:

  • Privacy-preserving machine learning: Differential privacy provides a rigorous way to quantify the privacy of machine learning models. We investigated vulnerabilities presented in the differential-privacy literature and proposed computationally efficient mechanisms for protecting against them.
  • Federated learning: Federated learning (FL) is a distributed-training technique that keeps customer data on-device. Devices send only model parameter updates to the cloud, not raw data. We studied several FL challenges arising in an industrial setting.
  • Fairness in machine learning: Machine learning (ML) models should perform equally well regardless of who’s using them. But even knowing how to quantify fairness is a challenge. We introduced measures of fairness and methods to mitigate bias in ML models.
To reduce binary-gender disparity in a distilled GPT-2 language model, we introduce counterfactual examples, in which binary genders in real-world training examples are swapped.

Below, we summarize our research in these areas, which will be presented at ACL and ICASSP later this year. We also invite readers to participate in workshops and sessions we are organizing at NAACL 2022 and Interspeech 2022.

1. Privacy-preserving ML

The intuition behind differential privacy (DP) is that access to the outputs of a model should not provide any hint about what inputs were used to train the model. DP quantifies that intuition as a difference (in probabilities) between the outputs of a model trained on a given dataset and the outputs of the same model trained on the same dataset after a single input is removed.
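Formally, a randomized mechanism M satisfies (ε, δ)-differential privacy if, for every pair of datasets D and D′ that differ in a single training example and every set of outputs S,

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
```

The smaller ε and δ are, the less the model's outputs can reveal about any individual training example.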

One way to meet a DP privacy guarantee is to add some noise to the model parameters during training in order to obfuscate their relationship to training data. But this can compromise accuracy. The so-called privacy/utility tradeoff appears in every DP application.
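As a concrete illustration, here is a minimal sketch of the two operations at the heart of the standard DP-SGD recipe: clip each example's gradient to bound its influence, then add Gaussian noise calibrated to that bound. The function name and hyperparameter values are illustrative, not Alexa AI's implementation:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD aggregation step: clip each example's gradient, average, add noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise proportional to the clipping bound obfuscates the
    # contribution of any single training example
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return avg + noise
```

The privacy/utility tradeoff shows up directly in the two knobs: a tighter `clip_norm` and larger `noise_multiplier` strengthen the guarantee but distort the gradients more.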

Another side effect of adding a DP mechanism is increased training time. Given that training natural-language-understanding (NLU) models with large volumes of data can be prohibitively slow and that industry standards require fast training and deployment — e.g., when new features are being released — we developed a training method that meets DP requirements but remains efficient. We describe the method in a paper we’re presenting at this year’s ICASSP, “An efficient DP-SGD mechanism for large scale NLP models”.

In this work, we study the most popular DP mechanism for deep neural networks, DP-SGD, and build a computationally efficient alternative, eDP-SGD, in which we use a batch-processing scheme that leverages the GPU architecture and automates part of the hyperparameter-tuning process. While both DP-SGD and eDP-SGD provide the same privacy guarantees, we show that the training time for our mechanism is very similar to its non-DP counterpart’s. The original DP-SGD extends training time as much as 130-fold.


Since we did our study, researchers have developed methods with stronger theoretical DP guarantees than the ones we impose in our paper, but our approach is consistent with those methods. Overall, this work makes DP more generally accessible and helps us integrate NLU models with DP guarantees into our production systems, where new models are frequently released, and a significant increase in training time is prohibitive.

While DP provides theoretical privacy guarantees, we are also interested in practical guarantees, i.e., measuring the amount of information that could potentially leak from a given model. In addition to the performance and training time of eDP-SGD, we also studied the correlation between theoretical and practical privacy guarantees. We measured practical privacy leakage using the most common method in the field, the success rate of membership inference attacks on a given model. Our experiments provide a general picture of how to optimize the privacy/utility trade-off using DP techniques for NLU models.
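A simple variant of such an attack, sketched here under illustrative assumptions, exploits the fact that models tend to be more confident on examples they were trained on: the attacker guesses "member" whenever the model's confidence on an example exceeds a threshold. (Real membership inference attacks are more sophisticated; this is only the thresholding idea.)

```python
import numpy as np

def membership_inference_attack(confidences, threshold=0.9):
    """Guess 'training-set member' for examples the model is unusually confident on.

    confidences: the model's probability for the true label of each example.
    Returns a boolean array: True = predicted member.
    """
    return np.asarray(confidences) >= threshold

def attack_success_rate(confidences, is_member, threshold=0.9):
    """Fraction of examples whose membership the attack guesses correctly."""
    guesses = membership_inference_attack(confidences, threshold)
    return float(np.mean(guesses == np.asarray(is_member)))
```

A success rate near 50% (chance level on a balanced set) indicates little practical leakage; rates well above chance indicate the model is memorizing training data.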

We also expanded the set of mechanisms for protecting NLU models against other types of attacks. In “Canary extraction in natural language understanding models”, which we will present at ACL 2022, we study the vulnerability of text classification models to a certain kind of white-box attack called a model inversion attack (ModIvA), in which a fictional attacker has access to the entire set of model parameters and attempts to retrieve examples used during training. Existing model inversion techniques apply to models with either continuous inputs or continuous outputs. In our work, we adopt a similar approach for text classification tasks, in which both inputs and outputs are discrete.

As new model architectures are developed that might display new types of vulnerabilities, we will continue innovating efficient ways of protecting our customers’ privacy.

2. Federated learning

The idea behind federated learning (FL) is that, during the training of an ML model, part of the computation is delegated to customers’ devices, leveraging the processing power of those devices while avoiding the centralization of privacy-sensitive datasets. Each device modifies a common, shared model according to locally stored data, then sends an updated model to a central server that aggregates model updates and sends a new shared model to all the devices. At each round, the central server randomly selects a subset of active devices and requests that they perform updates.

With federated learning, devices send model updates, not data, to a central server.
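The round structure described above can be sketched as follows, on a toy linear model. Names and the single-local-step simplification are illustrative, not a production FL system:

```python
import numpy as np

def fedavg_round(global_model, client_datasets, lr=0.1, sample_frac=0.5, rng=None):
    """One round of federated averaging (FedAvg) on a toy linear model.

    Each selected client takes a local gradient step on its own data; the
    server averages the updated models, weighted by local dataset size.
    """
    rng = rng or np.random.default_rng(0)
    # The server randomly selects a subset of active devices each round
    n_select = max(1, int(sample_frac * len(client_datasets)))
    selected = rng.choice(len(client_datasets), size=n_select, replace=False)
    updates, weights = [], []
    for i in selected:
        X, y = client_datasets[i]
        w = global_model.copy()
        # One local gradient step on squared error (raw data never leaves the client)
        grad = 2 * X.T @ (X @ w - y) / len(y)
        updates.append(w - lr * grad)
        weights.append(len(y))
    # Only parameter updates reach the server, which aggregates them
    return np.average(updates, axis=0, weights=weights)
```

In practice each client runs several local epochs before reporting back, and the aggregation step may also incorporate secure aggregation or DP noise.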

In the past year, we have made progress toward more-efficient FL and adapted common FL techniques to the industrial setting. For instance, in “Learnings from federated learning in the real world”, which we will present at ICASSP this year, we explore device selection strategies that differ from the standard uniform selection. In particular, we present the first study of device selection based on device “activity” — i.e., the number of available training samples.

These simple selection strategies are lightweight compared to existing methods, which require heavy computation from all the devices. They are thus more suitable to industrial applications, where millions of devices are involved. We study two different settings: the standard “static” setting, where all the data are available at once, and the more realistic “continual” setting, where customers generate new data over time, and past examples might have to be deleted to save storage space. Our experiments on training a language model with FL show that non-uniform sampling outperforms uniform sampling when applied to real-world data, for both the static and continual settings.
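Activity-based selection of the kind studied in the paper can be sketched as weighted sampling, with each device's selection probability proportional to its number of locally available training samples (a hypothetical minimal form, not the paper's exact strategy):

```python
import numpy as np

def select_devices(sample_counts, n_select, rng=None):
    """Sample devices with probability proportional to their 'activity'
    (number of locally available training samples) instead of uniformly."""
    rng = rng or np.random.default_rng(0)
    counts = np.asarray(sample_counts, dtype=float)
    probs = counts / counts.sum()
    # Non-uniform sampling without replacement favors data-rich devices
    return rng.choice(len(counts), size=n_select, replace=False, p=probs)
```

Unlike selection methods that require every device to compute and report a score each round, this needs only a per-device sample count, which is why it scales to millions of devices.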


We also expanded our understanding of FL for natural-language processing (NLP) and, in the process, made FL more accessible to the NLP community. In “FedNLP: A research platform for federated learning in natural language processing”, which will be presented later this year at NAACL, we and our colleagues at the University of Southern California and FedML systematically compare the most popular FL algorithms for four mainstream NLP tasks. We also present different methods to generate dataset partitions that are not independent and identically distributed (IID), as real-world FL methods must be robust against shifts in the distributions of the data used to train ML models.

Our analysis reveals that there is still a large gap between centralized and decentralized training under various settings, and we highlight several directions in which FL for NLP can advance. The paper represents Amazon’s contribution to the open-source framework FedNLP, which is capable of evaluating, analyzing, and developing FL methods for NLP. The codebase contains non-IID partitioning methods, enabling easy experimentation to advance the state of FL research for NLP.

We also designed methods to account for the naturally heterogeneous character of customer-generated data and applied FL to a wide variety of NLP tasks. We are aware that FL still presents many challenges, such as how to do evaluation when access to data is removed, on-device label generation for supervised tasks, and privacy-preserving communication between the server and the different devices. We are actively addressing each of these and plan to leverage our findings to improve FL-based model training and enhance associated capabilities such as analytics and model evaluation.

3. Fairness in ML

Natural-language-processing applications’ increased reliance on large language models trained on intrinsically biased web-scale corpora has amplified the importance of accurate fairness metrics and procedures for building more robust models.

In “On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations”, which we are presenting at ACL 2022, we compare two families of fairness metrics — namely extrinsic and intrinsic — that are widely used for language models. Intrinsic metrics directly probe into the fairness of language models, while extrinsic metrics evaluate the fairness of a whole system through predictions on downstream tasks.


For example, the contextualized embedding association test (CEAT), an intrinsic metric, measures bias through word embedding distances in semantic vector spaces, and the extrinsic metric HateXPlain measures the bias in a downstream hate speech detection system.
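The embedding-distance idea behind such intrinsic metrics can be illustrated with a simplified association score: a target word's mean cosine similarity to one attribute set (say, male-associated words) minus its mean similarity to another (female-associated words). This is a bare sketch of the WEAT/CEAT family, not the full CEAT procedure:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_score(target, attr_a, attr_b):
    """Association of one target embedding with two attribute-word sets:
    mean cosine similarity to set A minus mean similarity to set B.
    A score near 0 suggests the target leans toward neither attribute."""
    sim_a = np.mean([cosine(target, a) for a in attr_a])
    sim_b = np.mean([cosine(target, b) for b in attr_b])
    return sim_a - sim_b
```

For a profession word like "nurse", a large positive or negative score against gendered attribute sets would indicate an unwanted association in the embedding space.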

Our experiments show that inconsistencies between intrinsic and extrinsic metrics often reflect inconsistencies between the datasets used to evaluate them, and a clear understanding of bias in ML models requires more careful alignment of evaluation data. The results we report in the paper can help guide the NLP community as to how to best conduct fairness evaluations.

We have also designed new measures of fairness that are adapted to language-processing applications. In “Measuring fairness of text classifiers via prediction sensitivity”, which we will present at ACL 2022, we looked at sensitivity to perturbations of input as a way to measure fairness in ML models. The metric attempts to quantify the extent to which a single prediction depends on an input feature that encodes membership in an underrepresented group.

Our new bias measure, accumulated prediction sensitivity, combines the outputs of two models, a task classifier (TC) and a protected status model (PSM).

We provide a theoretical analysis of our formulation and show a statistically significant difference between our metric’s correlation with human judgments of fairness and that of the existing counterfactual fairness metric.
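One illustrative way to combine the two models' outputs, under hypothetical names and a deliberately simplified form (not the paper's exact formulation): weight the task classifier's per-feature input sensitivity by the protected status model's estimate that each feature encodes protected-group membership, then accumulate.

```python
import numpy as np

def accumulated_prediction_sensitivity(tc_grad, psm_scores):
    """Simplified sketch of the accumulated-prediction-sensitivity idea.

    tc_grad:    task classifier's gradient of the prediction w.r.t. each
                input feature (how much the prediction depends on it).
    psm_scores: protected status model's estimate, per feature, of how
                strongly it encodes protected-group membership.
    """
    # A large value means the prediction leans heavily on protected features
    return float(np.abs(np.asarray(tc_grad)) @ np.asarray(psm_scores))
```

A prediction that is highly sensitive to features the PSM flags as protected scores high, signaling a potential fairness concern.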

Finally, we proposed a method to mitigate the biases of large language models during knowledge distillation, in which a smaller, more efficient model is trained to match the language model’s output on a particular task. Because large language models are trained on public texts, they can be biased in multiple ways, including the unfounded association of male or female genders with gender-neutral professions.

Examples of texts generated by language models in response to gendered prompts before and after the application of our distillation method.

In another ACL paper, “Mitigating gender bias in distilled language models via counterfactual role reversal”, we introduce two modifications to the standard distillation mechanisms: data augmentation and teacher prediction perturbation.
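The data-augmentation side of this idea can be sketched as counterfactual role reversal: for each training example, generate a counterpart with binary-gendered words swapped, and train on both. The word list and capitalization handling below are deliberately simplified (e.g., the possessive/object ambiguity of "her" is ignored), and the names are illustrative:

```python
# Simplified swap table; a real system would handle morphology, names,
# and the his/her vs. him/her ambiguity more carefully.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "her", "man": "woman", "woman": "man"}

def counterfactual(text):
    """Return a copy of `text` with binary-gendered words swapped."""
    out = []
    for tok in text.split():
        swapped = SWAPS.get(tok.lower(), tok)
        # Preserve the original token's capitalization (simplified handling)
        if swapped != tok and tok[0].isupper():
            swapped = swapped.capitalize()
        out.append(swapped)
    return " ".join(out)

def augment(dataset):
    """Train on both the original examples and their gender-swapped versions."""
    return dataset + [counterfactual(t) for t in dataset]
```

Pairing each example with its role-reversed counterpart discourages the distilled model from associating either binary gender with gender-neutral contexts such as professions.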

We use our method to distill a GPT-2 language model for a text-generation task and demonstrate a substantial reduction in gender disparity, with only a minor reduction in utility. Interestingly, we find that reduced disparity in open-ended text generation may not necessarily lead to fairness on other downstream tasks. This finding underscores the importance of evaluating language model fairness along multiple metrics and tasks.

Our work on fairness in ML for NLP applications should help enable models that are more robust against the inherent biases of text datasets. There remain plenty of challenges in this field, but we strive to build models that offer the same experience to any customer, wherever and however they choose to interact with Alexa.

We are looking for detail-oriented, organized, and responsible individuals who are eager to learn how to work with large and complicated data sets. Some knowledge of econometrics, as well as basic familiarity with Python is necessary, and experience with SQL and UNIX would be a plus. This is a part time position, 29 hours per week, with compensation being awarded on an hourly basis. You will learn how to build data sets and perform applied econometric analysis at Internet speed collaborating with economists, scientists, and product managers. These skills will translate well into writing applied chapters in your dissertation and provide you with work experience that may help you with placement. Roughly 85% of previous cohorts have converted to full time economics employment at Amazon. If you are interested, please send your CV to our mailing list at econ-internship@amazon.com. We are open to hiring candidates to work out of one of the following locations: Virtual Location - MD
US, NY, New York
Where will Amazon's growth come from in the next year? What about over the next five? Which product lines are poised to quintuple in size? Are we investing enough in our infrastructure, or too much? How do our customers react to changes in prices, product selection, or delivery times? These are among the most important questions at Amazon today. The Topline Forecasting team in the Supply Chain Optimization Technologies (SCOT) group is looking for innovative, passionate and results-oriented Economists to answer these questions. You will have an opportunity to own the long-run outlook for Amazon’s global consumer business and shape strategic decisions at the highest level. The successful candidate will be able to formalize problem definitions from ambiguous requirements, build econometrics models using Amazon’s world-class data systems, and develop cutting-edge solutions for non-standard problems. Key job responsibilities · Develop new econometric models or improve existing approaches using scalable techniques. · Extract data for analysis and model development from large, complex datasets. · Closely work with engineering teams to build scalable, efficient systems that implement prototypes in production. · Apply economic theory to solve business problems in a fast moving environment. · Distill problem definitions from informal business requirements and communicate technical solutions to senior business leaders. · Drive innovation and best practices in applied research across the Amazon research science community. We are open to hiring candidates to work out of one of the following locations: New York, NY, USA
US, WA, Bellevue
We are seeking a passionate, talented, and inventive individual to join the Applied AI team and help build industry-leading technologies that customers will love. This team offers a unique opportunity to make a significant impact on the customer experience and contribute to the design, architecture, and implementation of a cutting-edge product. The mission of the Applied AI team is to enable organizations within Worldwide Amazon.com Stores to accelerate the adoption of AI technologies across various parts of our business. We are looking for a Senior Applied Scientist to join our Applied AI team to work on LLM-based solutions. We are seeking an experienced Scientist who combines superb technical, research, analytical and leadership capabilities with a demonstrated ability to get the right things done quickly and effectively. This person must be comfortable working with a team of top-notch developers and collaborating with our research teams. We’re looking for someone who innovates, and loves solving hard problems. You will be expected to have an established background in building highly scalable systems and system design, excellent project management skills, great communication skills, and a motivation to achieve results in a fast-paced environment. You should be somebody who enjoys working on complex problems, is customer-centric, and feels strongly about building good software as well as making that software achieve its operational goals. Key job responsibilities You will be responsible for developing and maintaining the systems and tools that enable us to accelerate knowledge operations and work in the intersection of Science and Engineering. A day in the life On our team you will push the boundaries of ML and Generative AI techniques to scale the inputs for hundreds of billions of dollars of annual revenue for our eCommerce business. 
If you have a passion for AI technologies, a drive to innovate and a desire to make a meaningful impact, we invite you to become a valued member of our team. We are open to hiring candidates to work out of one of the following locations: Bellevue, WA, USA