Amazon at ICLR: Graphs, time series, and more

Other paper topics include natural-language processing, dataset optimization, and the limits of existing machine learning techniques.

Time series forecasting and graph representations of data are both major topics of research at Amazon: time series forecasting is crucial to both supply chain optimization and product recommendation, and graph representations help make sense of the large datasets that are common at Amazon’s scale, such as the Amazon product catalogue.

So it’s no surprise that both topics are well represented among the Amazon papers at the 2022 International Conference on Learning Representations (ICLR), which takes place this week. Another paper also touches on one of Amazon’s core scientific interests, natural-language processing, or computation involving free-form text inputs.

The remaining Amazon papers discuss more general machine learning techniques, such as data augmentation, or automatically selecting or generating training examples that can improve the performance of machine learning models. Another paper looks at dataset optimization more generally, proposing a technique that could be used to evaluate individual examples for inclusion in a dataset or exclusion from it. And two papers from Amazon Web Services’ Causal-Representation Learning team, which includes Amazon vice president and distinguished scientist Bernhard Schölkopf, examine the limitations of existing approaches to machine learning.

Graphs

Graphs represent data as nodes, usually depicted as circles, and edges, usually depicted as line segments connecting nodes. Graph-structured data can make machine learning more efficient, because the graph explicitly encodes relationships that a machine learning model would otherwise have to infer from data correlations.

Graph neural networks (GNNs) are a powerful tool for working with graph-structured data. Like most neural networks, GNNs produce embeddings, or fixed-length vector representations of input data, that are useful for particular computational tasks. In the case of GNNs, the embeddings capture information about both the object associated with a given node and the structure of the graph.
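
To make that concrete, here is a minimal, self-contained sketch of a message-passing layer, the basic building block of a GNN, written in PyTorch with illustrative names that are not drawn from any of the papers below. Each node's new embedding combines a projection of its own features with a projection of the average of its neighbors' features.

```python
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """Minimal message-passing layer: a node's new embedding mixes its own
    features with the average of its neighbors' features."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.self_proj = nn.Linear(in_dim, out_dim)
        self.neigh_proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x:   (num_nodes, in_dim) node feature matrix
        # adj: (num_nodes, num_nodes) float adjacency matrix (1.0 where an edge exists)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # node degrees, guarding against divide-by-zero
        neigh_mean = (adj @ x) / deg                     # mean of each node's neighbor features
        return torch.relu(self.self_proj(x) + self.neigh_proj(neigh_mean))
```

Stacking several such layers lets information propagate across multi-hop neighborhoods, which is how the resulting embeddings come to capture graph structure as well as node attributes.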

In real-world applications — say, a graph indicating which products tend to be purchased together — some nodes may not be connected to any others, and some connections may be spurious inferences from sparse data. In “Cold Brew: Distilling graph node representations with incomplete or missing neighborhoods”, Amazon scientists present a method for handling nodes whose edge data is absent or erroneous.

Cold Brew addresses the real-world problem in which graph representations of data feature potentially spurious connections (tail nodes) or absent connections (cold start). Figure from "Cold Brew: Distilling graph node representations with incomplete or missing neighborhoods".

In a variation on knowledge distillation, they train a conventional GNN, which requires that each input node be connected to the rest of the graph, as a teacher network that produces embeddings for connected nodes. Then they train a standard multilayer perceptron — a student network — to mimic the teacher’s outputs. Unlike a conventional GNN, the student network doesn’t explicitly use structural data to produce embeddings, so it can also handle unconnected nodes. The method demonstrates significant improvements over existing methods of inferring graph structure on several benchmark datasets.
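
The distillation step can be outlined roughly as follows. This is a sketch of the general idea under stated assumptions (a trained teacher GNN that takes node features and an adjacency matrix, 128-dimensional node features, and 64-dimensional embeddings), not the Cold Brew authors' code.

```python
import torch
import torch.nn as nn

# Student: a plain MLP that sees node features only, so it can also embed
# nodes that have no edges. (Dimensions here are arbitrary placeholders.)
student = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

def distill_step(node_features, adjacency, teacher_gnn):
    # teacher_gnn is a hypothetical trained GNN that uses the graph structure.
    with torch.no_grad():
        target = teacher_gnn(node_features, adjacency)  # teacher embeddings
    pred = student(node_features)                       # student ignores the graph
    loss = mse(pred, target)                            # mimic the teacher's outputs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once trained, only the student is needed to embed a node, which is what lets the approach handle nodes with missing or unreliable neighborhoods.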

Across disciplines, AI research has recently seen a surge in the popularity of self-supervised learning, in which a machine learning model is first trained on a “proxy task”, which is related to but not identical to the target task, using unlabeled or automatically labeled data. Then the model is fine-tuned on labeled data for the target task.

With GNNs, the proxy tasks generally teach the network only how to represent node data. But in “Node feature extraction by self-supervised multi-scale neighborhood prediction”, Amazon researchers and their colleagues at the University of Illinois and UCLA present a proxy task that teaches the GNN how to represent information about graph structure as well. Their approach is highly scalable, working with graphs with hundreds of millions of nodes, and in experiments, they show that it improves GNN performance on three benchmark datasets, by almost 30% on one of them.

XR-Transformer creates a hierarchical tree that sorts data into finer- and finer-grained clusters. In the context of graph neural networks, the clusters represent graph neighborhoods. Figure from "Node feature extraction by self-supervised multi-scale neighborhood prediction".

The approach, which builds on Amazon’s XR-Transformer model and is known as GIANT-XRT, has already been widely adopted and is used by the leading teams in several of the public Open Graph Benchmark competitions hosted by Stanford University (leaderboard 1 | leaderboard 2 | leaderboard 3).

Where traditional domain adaptation (left) treats all target domains the same, a new method (right) uses graphs to represent relationships between source and target domains. For instance, weather patterns in adjacent U.S. states tend to be more similar than the weather patterns in states distant from each other. Figure from “Graph-relational domain adaptation”.

A third paper, “Graph-relational domain adaptation”, applies graphs to the problem of domain adaptation, or optimizing a machine learning model to work on data with a different distribution than the data it was trained on. Conventional domain adaptation techniques treat all target domains the same, but the Amazon researchers and their colleagues at Rutgers and MIT instead use graphs to represent relationships among all source and target domains. For instance, weather patterns in adjacent U.S. states tend to be more similar than the weather patterns in states distant from each other. In experiments, the researchers show that their method improves on existing domain adaptation methods on both synthetic and real-world datasets.

Time series

Time series forecasting is essential to demand prediction, which Amazon uses to manage inventory, and it’s also useful for recommendation, which can be interpreted as continuing a sequence of product (say, music or movie) selections.

In “Bridging recommendation and marketing via recurrent intensity modeling”, Amazon scientists adapt existing mechanisms for making personal recommendations on the basis of time series data (purchase histories) to the problem of identifying the target audience for a new product.

Product recommendation can be interpreted as a time-series-forecasting problem, in which a product is recommended according to its likelihood of continuing a sequence of purchases. Figure from "Bridging recommendation and marketing via recurrent intensity modeling".

Where methods for identifying a product’s potential customers tend to treat customers as atemporal collections of purchase decisions, the Amazon researchers instead frame the problem as optimizing both the product’s relevance to the customer and the customer’s activity level, or likelihood of buying any product in a given time span. In experiments, this improved the accuracy of a prediction model on several datasets.
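
As a toy illustration of that framing, one might rank candidate customers by the product of a relevance score and an activity estimate. The scoring functions below are placeholders standing in for learned models, not the mechanisms used in the paper.

```python
def rank_audience(users, product, relevance_fn, activity_fn, top_k=100):
    """Score each user by the product's relevance to them times their estimated
    likelihood of making any purchase in the time window, then keep the top k."""
    scored = []
    for user in users:
        relevance = relevance_fn(user, product)   # e.g., from a recommendation model
        activity = activity_fn(user)              # e.g., from a temporal intensity model
        scored.append((relevance * activity, user))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [user for _, user in scored[:top_k]]
```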

One obstacle to the development of machine learning models that base predictions on time series data is the availability of training examples. In “PSA-GAN: Progressive self attention GANs for synthetic time series”, Amazon researchers propose a method for using generative adversarial networks (GANs) to artificially produce time series training data.

GANs pit generators, which produce synthetic data, against discriminators, which try to distinguish synthetic data from real. The two are trained together, each improving the performance of the other.

The Amazon researchers show how to synthesize plausible time series data by progressively growing — or adding network layers to — both the generator and the discriminator. This enables the generator to first learn general characteristics that the time series as a whole should have, then learn how to produce series that exhibit those characteristics.
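
The sketch below shows the standard adversarial update that this kind of training is built on; in the progressive scheme described above, layers would be added to both networks between stages of such training. The networks, shapes, and noise dimension are placeholders, and PSA-GAN's self-attention architecture is not reproduced here.

```python
import torch
import torch.nn as nn

def gan_step(generator, discriminator, real_batch, g_opt, d_opt, noise_dim=32):
    """One adversarial update. Assumes the generator maps noise to a synthetic
    time-series window and the discriminator outputs one logit per example."""
    bce = nn.BCEWithLogitsLoss()
    batch_size = real_batch.size(0)
    noise = torch.randn(batch_size, noise_dim)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator update: learn to tell real windows from synthetic ones.
    fake = generator(noise).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + bce(discriminator(fake), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label synthetic windows as real.
    g_loss = bce(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```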

Data augmentation

In addition to the paper on synthetic time series, one of Amazon’s other papers at ICLR, “Deep AutoAugment”, also focuses on data augmentation.

It’s become standard practice to augment the datasets used to train machine learning models by subjecting real data to sequences of transformations. For instance, a training image for a computer vision task might be flipped, stretched, rotated or cropped, or its color or contrast might be modified. Typically, the first few transformations are selected automatically, based on experiments in which a model is trained and retrained, and then domain experts add a few additional transformations to try to make the modified data look like real data.

In “Deep AutoAugment”, former Amazon senior applied scientist Zhi Zhang and colleagues at Michigan State University propose a method for fully automating the construction of a data augmentation pipeline. The goal is to continuously add transformations that steer the feature distribution of the synthetic data toward that of the real data. To do that, the researchers use gradient matching, or identifying training data whose sequential updates to the model parameters look like those of the real data. In tests, this approach improved on 10 other data augmentation techniques across four sets of real data.
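
A minimal sketch of the gradient-matching idea, assuming a model and a loss function are given: compare the parameter gradients induced by an augmented batch with those induced by a batch from the original data distribution. This illustrates the principle only; it is not the Deep AutoAugment implementation.

```python
import torch
import torch.nn.functional as F

def gradient_alignment(model, loss_fn, augmented_batch, original_batch):
    """Cosine similarity between the parameter gradients produced by an augmented
    batch and those produced by a batch of original (unaugmented) data."""
    def flat_grad(batch):
        inputs, targets = batch
        loss = loss_fn(model(inputs), targets)
        grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
        return torch.cat([g.reshape(-1) for g in grads])

    return F.cosine_similarity(flat_grad(augmented_batch), flat_grad(original_batch), dim=0)
```

Higher alignment indicates augmented data whose parameter updates resemble those driven by real data.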

Natural-language processing

Many natural-language-processing tasks involve pairwise comparison of sentences. Cross-encoders, which process both sentences of a pair jointly, yield the most accurate comparisons, but they’re computationally intensive, as they must compute a new encoding for every sentence pair. Moreover, converting a pretrained language model into a cross-encoder requires fine-tuning it on labeled data, which is resource intensive to acquire.

Bi-encoders, on the other hand, embed sentences in a common representational space and measure the distances between them. This is efficient but less accurate.
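
The contrast can be caricatured in a few lines. Here `encode` and `cross_encode` are placeholders for, respectively, a sentence-embedding model and a jointly trained pair-scoring model; they are not a specific library's API.

```python
import torch.nn.functional as F

def bi_encoder_score(encode, sentence_a, sentence_b):
    # Each sentence is embedded independently, so embeddings can be precomputed
    # once and compared cheaply across many pairs.
    return F.cosine_similarity(encode(sentence_a), encode(sentence_b), dim=-1)

def cross_encoder_score(cross_encode, sentence_a, sentence_b):
    # The pair is processed jointly, which tends to be more accurate but must be
    # recomputed from scratch for every new pair.
    return cross_encode(sentence_a + " [SEP] " + sentence_b)
```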

In “Trans-encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations”, Amazon researchers, together with a former intern, propose a model that is trained in an entirely unsupervised way, meaning without any labeled examples, and that captures the advantages of both approaches.

The trans-encoder training process, in which a bi-encoder trained in an unsupervised fashion creates training targets for a cross-encoder, which in turn outputs training targets for the bi-encoder.

The researchers begin with a pretrained language model, fine-tune it in an unsupervised manner using bi-encoding, then use the fine-tuned model to generate training targets for cross-encoding. They then use the outputs of the cross-encoding model to fine-tune the bi-encoder, iterating back and forth between the two approaches until training converges. In experiments, their model outperformed multiple state-of-the-art unsupervised sentence encoders on several benchmark tasks, with improvements of up to 5% over the best-performing prior models.
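
Schematically, the alternating procedure looks something like the following, where the encoders and the `finetune` routine are placeholders rather than the paper's actual training code.

```python
def trans_encoder_cycles(sentence_pairs, bi_encoder, cross_encoder, finetune, cycles=3):
    """Alternate between the two encoders, each producing pseudo-labels
    (similarity scores on unlabeled pairs) that the other is fine-tuned on."""
    for _ in range(cycles):
        # Bi-encoder scores the unlabeled pairs; the cross-encoder distills from them.
        pseudo_labels = [(a, b, bi_encoder(a, b)) for a, b in sentence_pairs]
        finetune(cross_encoder, pseudo_labels)

        # Cross-encoder re-scores the pairs; the bi-encoder distills from them.
        pseudo_labels = [(a, b, cross_encoder(a, b)) for a, b in sentence_pairs]
        finetune(bi_encoder, pseudo_labels)
    return bi_encoder, cross_encoder
```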

Dataset optimization

Weeding errors out of a dataset, selecting new training examples to augment a dataset, and determining how to weight the data in a dataset to better match a target distribution are all examples of dataset optimization. Assessing individual training examples’ contribution to the accuracy of a model, however, is difficult: retraining the model on a dataset with and without every single example is hardly practical.

In “DIVA: Dataset derivative of a learning task”, Amazon researchers show how to compute the dataset derivative: a function that can be used to assess a given training example’s utility relative to a particular neural-network model. During training, the model learns not only the weights of network parameters but also weights for individual training examples. The researchers show that, using a linearization technique, they can derive a closed-form equation for the dataset derivative, allowing them to assess the utility of a given training example without retraining the network.
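
The closed-form derivative itself is not reproduced here, but the setup can be illustrated with a toy example in which per-example weights are learned jointly with the model parameters; in this sketch they are optimized by gradient descent, whereas DIVA's linearization yields them in closed form without retraining. All dimensions and names are placeholders.

```python
import torch
import torch.nn as nn

num_examples, in_dim, num_classes = 1000, 20, 2
model = nn.Linear(in_dim, num_classes)
example_logits = torch.zeros(num_examples, requires_grad=True)  # one learnable weight per training example
optimizer = torch.optim.Adam(list(model.parameters()) + [example_logits], lr=1e-2)
criterion = nn.CrossEntropyLoss(reduction="none")

def weighted_step(inputs, labels, indices):
    per_example_loss = criterion(model(inputs), labels)  # loss for each example in the batch
    weights = torch.sigmoid(example_logits[indices])     # squash each example's weight into (0, 1)
    loss = (weights * per_example_loss).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Examples whose learned weights end up near zero are candidates for exclusion from the dataset; examples with high weights are the ones the model relies on most.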

Training examples to which DIVA assigns high weights (left) and low weights (right) for the task of classifying aircraft. Figure from "DIVA: Dataset derivative of a learning task".

Limitations

“Machine learning ultimately is based on statistical dependencies,” Bernhard Schölkopf recently told Amazon Science. “Oftentimes, it's enough if we work at the surface and just learn from these dependencies. But it turns out that it's only enough as long as we're in this setting where nothing changes.”

The two ICLR papers from the Causal Representation Learning team explore contexts in which learning statistical dependencies is not enough. “Visual representation learning does not generalize strongly within the same domain” describes experiments with image datasets in which each image is defined by specific values of a set of variables — say, different shapes of different sizes and colors, or faces that are either smiling or not and differ in hair color or age.

The researchers test 17 machine learning models and show that, if certain combinations of variables or specific variable values are held out of the training data, all 17 have trouble recognizing them in the test data. For instance, a model trained to recognize small hearts and large squares has trouble recognizing large hearts and small squares. This suggests that we need revised training techniques or model designs to ensure that machine learning systems are really learning what they’re supposed to.

An illustration of the four methods of separating training data (black dots) and test data (red dots) in "Visual representation learning does not generalize strongly within the same domain".

Similarly, in “You mostly walk alone: Analyzing feature attribution in trajectory prediction”, members of the team consider the problem of predicting the trajectories of moving objects as they interact with other objects, an essential capacity for self-driving cars and other AI systems. For instance, if a person is walking down the street, and a ball bounces into her path, it could be useful to know that the person might deviate from her trajectory to retrieve the ball.

Adapting the game-theoretical concept of Shapley values, which enable the isolation of different variables’ contributions to an outcome, the researchers examine the best-performing recent models for predicting trajectories in interactive contexts and show that, for the most part, their predictions are based on past trajectories; they pay little attention to the influence of interactions.
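
For intuition, here is a generic Monte Carlo estimator of a Shapley value. The `value_fn` argument is a placeholder for a task-specific score, for example the quality of a trajectory prediction when only a given subset of input features (past trajectory, neighboring agents, and so on) is available; this is not the paper's exact attribution procedure.

```python
import random

def shapley_estimate(feature, all_features, value_fn, num_samples=200):
    """Monte Carlo estimate of a feature's Shapley value: its average marginal
    contribution to value_fn over random coalitions of the other features."""
    others = [f for f in all_features if f != feature]
    total = 0.0
    for _ in range(num_samples):
        random.shuffle(others)
        cut = random.randint(0, len(others))      # random coalition size
        coalition = others[:cut]
        total += value_fn(coalition + [feature]) - value_fn(coalition)
    return total / num_samples
```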

A new method enables the comparison of different trajectory prediction models according to the extent to which they use social interactions for making predictions (left: none; middle: weak; right: strong). The target agent, whose future trajectory is to be predicted, is shown in red, and modeled interactions are represented by arrows whose width indicates interaction strength. From "You mostly walk alone: Analyzing feature attribution in trajectory prediction".

The one exception is models trained on a dataset of basketball video, where all the players’ movements are constantly coordinated. There, existing models do indeed learn to recognize the influence of interaction. This suggests that careful curation of training data could enable existing models to account for interactions when predicting trajectories.
