Amazon at ICLR: Graphs, time series, and more

Other paper topics include natural-language processing, dataset optimization, and the limits of existing machine learning techniques.

Time series forecasting and graph representations of data are both major topics of research at Amazon: time series forecasting is crucial to both supply chain optimization and product recommendation, and graph representations help make sense of the large datasets that are common at Amazon’s scale, such as the Amazon product catalogue.


So it’s no surprise that both topics are well represented among the Amazon papers at the 2022 International Conference on Learning Representations (ICLR), which takes place this week. Another paper also touches on one of Amazon’s core scientific interests, natural-language processing, or computation involving free-form text inputs.

The remaining Amazon papers discuss more general machine learning techniques, such as data augmentation, or automatically selecting or generating training examples that can improve the performance of machine learning models. Another paper looks at dataset optimization more generally, proposing a technique that could be used to evaluate individual examples for inclusion in a dataset or exclusion from it. And two papers from Amazon Web Services’ Causal-Representation Learning team, which includes Amazon vice president and distinguished scientist Bernhard Schölkopf, examine the limitations of existing approaches to machine learning.

Graphs

Graphs represent data as nodes, usually depicted as circles, and edges, usually depicted as line segments connecting nodes. Graph-structured data can make machine learning more efficient, because the graph explicitly encodes relationships that a machine learning model would otherwise have to infer from data correlations.

Graph neural networks (GNNs) are a powerful tool for working with graph-structured data. Like most neural networks, GNNs produce embeddings, or fixed-length vector representations of input data, that are useful for particular computational tasks. In the case of GNNs, the embeddings capture information about both the object associated with a given node and the structure of the graph.
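The mechanics can be illustrated with a single message-passing layer, sketched here in numpy on a toy four-node graph. The mean aggregation and ReLU used below are common choices but only stand-ins; a real GNN learns its weight matrices during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy graph: 4 nodes, edges encoded as a symmetric adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = rng.normal(size=(4, 8))          # initial node features
W = rng.normal(size=(8, 8)) * 0.1    # weight matrix (random stand-in for learned weights)

def gnn_layer(A, H, W):
    """One message-passing layer: average neighbors' features, then transform."""
    deg = A.sum(axis=1, keepdims=True)        # node degrees
    H_agg = (A @ H) / np.maximum(deg, 1.0)    # mean over each node's neighbors
    return np.maximum(0.0, H_agg @ W)         # linear transform + ReLU

H1 = gnn_layer(A, X, W)   # each embedding now mixes in its graph neighborhood
```

Stacking such layers lets information propagate across multi-hop neighborhoods, which is how the embeddings come to encode graph structure as well as node data.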

In real-world applications — say, a graph indicating which products tend to be purchased together — some nodes may not be connected to any others, and some connections may be spurious inferences from sparse data. In “Cold Brew: Distilling graph node representations with incomplete or missing neighborhoods”, Amazon scientists present a method for handling nodes whose edge data is absent or erroneous.

Cold Brew addresses the real-world problem in which graph representations of data feature potentially spurious connections (tail nodes) or absent connections (cold start). Figure from "Cold Brew: Distilling graph node representations with incomplete or missing neighborhoods".

In a variation on knowledge distillation, they use a conventional GNN, which requires that each input node be connected to the rest of the graph, to train a teacher network that can produce embeddings for connected nodes. Then they train a standard multilayer perceptron — a student network — to mimic the teacher’s outputs. Unlike a conventional GNN, the student network doesn’t explicitly use structural data to produce embeddings, so it can also handle unconnected nodes. The method demonstrates significant improvements over existing methods of inferring graph structure on several benchmark datasets.
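The distillation step can be sketched in a few lines of numpy. Here random vectors stand in for the teacher GNN's embeddings, and the student is a single linear layer rather than a full multilayer perceptron; the point is only that the student learns a map from node features alone to teacher-like embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 connected nodes with 16-dim features. The
# teacher embeddings would come from a trained GNN; random stand-ins here.
X = rng.normal(size=(100, 16))           # node features
teacher_emb = rng.normal(size=(100, 8))  # teacher GNN outputs (stand-in)

# Student: a linear map trained to mimic the teacher by squared error.
W = rng.normal(size=(16, 8)) * 0.01
lr, losses = 0.1, []
for _ in range(200):
    pred = X @ W
    losses.append(float(np.mean((pred - teacher_emb) ** 2)))
    W -= lr * (X.T @ (pred - teacher_emb) / len(X))  # gradient step (up to a constant)

# The student uses features alone, so it also handles a node with no
# edges -- the cold-start case that a conventional GNN cannot embed.
cold_node = rng.normal(size=(1, 16))
cold_emb = cold_node @ W
```

The training loss falls as the student converges on the teacher's outputs; at inference time, no graph structure is required.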

Across disciplines, AI research has recently seen a surge in the popularity of self-supervised learning, in which a machine learning model is first trained on a “proxy task”, which is related to but not identical to the target task, using unlabeled or automatically labeled data. Then the model is fine-tuned on labeled data for the target task.

With GNNs, the proxy tasks generally teach the network only how to represent node data. But in “Node feature extraction by self-supervised multi-scale neighborhood prediction”, Amazon researchers and their colleagues at the University of Illinois and UCLA present a proxy task that teaches the GNN how to represent information about graph structure as well. Their approach is highly scalable, working with graphs with hundreds of millions of nodes, and in experiments, they show that it improves GNN performance on three benchmark datasets, by almost 30% on one of them.

XR-Transformer creates a hierarchical tree that sorts data into finer- and finer-grained clusters. In the context of graph neural networks, the clusters represent graph neighborhoods. Figure from "Node feature extraction by self-supervised multi-scale neighborhood prediction".

The approach, which builds on Amazon’s XR-Transformer model and is known as GIANT-XRT, has already been widely adopted and is used by the leading teams in several of the public Open Graph Benchmark competitions hosted by Stanford University (leaderboard 1 | leaderboard 2 | leaderboard 3).

Where traditional domain adaptation (left) treats all target domains the same, a new method (right) uses graphs to represent relationships between source and target domains. For instance, weather patterns in adjacent U.S. states tend to be more similar than the weather patterns in states distant from each other. Figure from “Graph-relational domain adaptation”.

A third paper, “Graph-relational domain adaptation”, applies graphs to the problem of domain adaptation, or optimizing a machine learning model to work on data with a different distribution than the data it was trained on. Conventional domain adaptation techniques treat all target domains the same, but the Amazon researchers and their colleagues at Rutgers and MIT instead use graphs to represent relationships among all source and target domains. For instance, weather patterns in adjacent U.S. states tend to be more similar than the weather patterns in states distant from each other. In experiments, the researchers show that their method improves on existing domain adaptation methods on both synthetic and real-world datasets.

Time series

Time series forecasting is essential to demand prediction, which Amazon uses to manage inventory, and it’s also useful for recommendation, which can be interpreted as continuing a sequence of product (say, music or movie) selections.

In “Bridging recommendation and marketing via recurrent intensity modeling”, Amazon scientists adapt existing mechanisms for making personal recommendations on the basis of time series data (purchase histories) to the problem of identifying the target audience for a new product.

Product recommendation can be interpreted as a time-series-forecasting problem, in which a product is recommended according to its likelihood of continuing a sequence of purchases. Figure from "Bridging recommendation and marketing via recurrent intensity modeling".

Where methods for identifying a product’s potential customers tend to treat customers as atemporal collections of purchase decisions, the Amazon researchers instead frame the problem as optimizing both the product’s relevance to the customer and the customer’s activity level, or likelihood of buying any product in a given time span. In experiments, this improved the accuracy of a prediction model on several datasets.
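One simple way to combine the two signals can be sketched as follows. The numbers are made up, and the Poisson model of purchase events is an illustrative assumption, not the paper's exact formulation: a customer's score for the product multiplies relevance by the probability of any purchase in the time window.

```python
import numpy as np

# Made-up figures for three customers.
relevance = np.array([0.9, 0.9, 0.3])   # model's product-customer affinity
intensity = np.array([0.1, 2.0, 2.0])   # expected purchases per week
horizon = 1.0                            # length of the window, in weeks

# Under a Poisson assumption, the probability of at least one purchase
# in the window is 1 - exp(-intensity * horizon).
p_any = 1.0 - np.exp(-intensity * horizon)
score = relevance * p_any
ranking = np.argsort(-score)             # best prospects first
```

Note the effect: the first customer is just as good a match as the second but far less active, so the second ranks higher as a target for the campaign.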

One obstacle to the development of machine learning models that base predictions on time series data is the availability of training examples. In “PSA-GAN: Progressive self attention GANs for synthetic time series”, Amazon researchers propose a method for using generative adversarial networks (GANs) to artificially produce time series training data.


GANs pit generators, which produce synthetic data, against discriminators, which try to distinguish synthetic data from real. The two are trained together, each improving the performance of the other.

The Amazon researchers show how to synthesize plausible time series data by progressively growing — or adding network layers to — both the generator and the discriminator. This enables the generator to first learn general characteristics that the time series as a whole should have, then learn how to produce series that exhibit those characteristics.
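The growing schedule, which is the structural idea, can be sketched as follows. This shows only the generator side, with random untrained weights standing in for learned ones and arbitrary layer dimensions; the discriminator would grow in tandem, and PSA-GAN's self-attention layers are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def block(dim_in, dim_out):
    """One network layer; weights are random stand-ins for trained ones."""
    W = rng.normal(size=(dim_in, dim_out)) * 0.1
    return lambda h: np.tanh(h @ W)

# Progressive growing (hypothetical dimensions): start with a shallow
# generator mapping noise to a coarse series, then append a layer at
# each training stage so later layers refine finer-scale detail.
generator = [block(4, 16)]
for stage in range(3):                  # stages 2-4: one new layer each
    generator.append(block(16, 16))

def generate(z):
    h = z
    for layer in generator:
        h = layer(h)
    return h

z = rng.normal(size=(32, 4))            # batch of 32 noise vectors
series = generate(z)                    # 32 synthetic series of length 16
```

In actual training, each new layer is faded in gradually while both networks continue adversarial updates, so early stages fix the coarse shape of the series before detail is added.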

Data augmentation

In addition to the paper on synthetic time series, one of Amazon’s other papers at ICLR, “Deep AutoAugment”, also focuses on data augmentation.

It’s become standard practice to augment the datasets used to train machine learning models by subjecting real data to sequences of transformations. For instance, a training image for a computer vision task might be flipped, stretched, rotated or cropped, or its color or contrast might be modified. Typically, the first few transformations are selected automatically, based on experiments in which a model is trained and retrained, and then domain experts add a few additional transformations to try to make the modified data look like real data.
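A toy version of such a pipeline, with simple array operations standing in for a real augmentation library, might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each transformation is a plain array op on a grayscale image in [0, 1];
# real pipelines chain many such ops with randomized parameters.
def flip(img):
    return img[:, ::-1]                      # horizontal mirror

def crop(img, k=2):
    return img[k:-k, k:-k]                   # trim a k-pixel border

def adjust_contrast(img, factor=1.5):
    mean = img.mean()
    return np.clip((img - mean) * factor + mean, 0.0, 1.0)

image = rng.random((32, 32))                 # stand-in training image
augmented = adjust_contrast(crop(flip(image)))
```

Each composed sequence of transformations yields a new training example whose label is inherited from the original image.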


In “Deep AutoAugment”, former Amazon senior applied scientist Zhi Zhang and colleagues at Michigan State University propose a method for fully automating the construction of a data augmentation pipeline. The goal is to continuously add transformations that steer the feature distribution of the synthetic data toward that of the real data. To do that, the researchers use gradient matching, or identifying training data whose sequential updates to the model parameters look like those of the real data. In tests, this approach improved on 10 other data augmentation techniques across four sets of real data.

Natural-language processing

Many natural-language-processing tasks involve pairwise comparison of sentences. Cross-encoders, which encode the two sentences of each pair jointly, yield the most accurate comparisons, but they’re computationally intensive, as they must compute a new encoding for every sentence pair. Moreover, converting a pretrained language model into a cross-encoder requires fine-tuning it on labeled data, which is resource intensive to acquire.

Bi-encoders, on the other hand, embed sentences in a common representational space and measure the distances between them. This is efficient but less accurate.
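The efficiency argument can be made concrete with a toy bi-encoder. The token-hashing "encoder" below is purely a stand-in for a trained language model; what matters is the shape of the computation: each sentence is embedded once, after which every pairwise comparison is a cheap dot product.

```python
import numpy as np

def encode(sentence, dim=64):
    """Stand-in encoder: hash tokens into a fixed-length unit vector.
    A real bi-encoder would run a trained language model instead."""
    v = np.zeros(dim)
    for tok in sentence.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

# Embed once per sentence; N sentences need N encoder passes, and all
# N*(N-1)/2 pairwise scores are then simple dot products.
a = encode("the cat sat on the mat")
b = encode("a cat was sitting on a mat")
score = float(a @ b)   # cosine similarity of the two embeddings
```

A cross-encoder, by contrast, would run the full model once per *pair*, which is what makes it accurate but expensive at scale.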

In “Trans-encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations”, Amazon researchers, together with a former intern, propose a model that is trained in an entirely unsupervised way — that is, without labeled examples — and captures advantages of both approaches.

The trans-encoder training process, in which a bi-encoder trained in an unsupervised fashion creates training targets for a cross-encoder, which in turn outputs training targets for the bi-encoder.

The researchers begin with a pretrained language model, fine-tune it in an unsupervised manner using bi-encoding, then use the fine-tuned model to generate training targets for cross-encoding. They then use the outputs of the cross-encoding model to fine-tune the bi-encoder, iterating back and forth between the two approaches until training converges. In experiments, their model outperformed multiple state-of-the-art unsupervised sentence encoders on several benchmark tasks, with improvements of up to 5% over the best-performing prior models.

Dataset optimization

Weeding errors out of a dataset, selecting new training examples to augment a dataset, and determining how to weight the data in a dataset to better match a target distribution are all examples of dataset optimization. Assessing individual training examples’ contribution to the accuracy of a model, however, is difficult: retraining the model on a dataset with and without every single example is hardly practical.

In “DIVA: Dataset derivative of a learning task”, Amazon researchers show how to compute the dataset derivative: a function that can be used to assess a given training example’s utility relative to a particular neural-network model. During training, the model learns not only the weights of network parameters but also weights for individual training examples. The researchers show that, using a linearization technique, they can derive a closed-form equation for the dataset derivative, allowing them to assess the utility of a given training example without retraining the network.
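The quantity DIVA approximates can be illustrated by computing it the expensive way. The sketch below measures each example's utility by leave-one-out retraining of a closed-form ridge regressor on made-up data with one corrupted label; DIVA's contribution is precisely a derivative that avoids this per-example retraining.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic regression data; one label is deliberately corrupted.
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.05 * rng.normal(size=20)
y[0] += 5.0                                  # corrupt example 0

X_val = rng.normal(size=(50, 3))
y_val = X_val @ w_true

def ridge_fit(X, y, lam=1e-3):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def val_loss(w):
    return float(np.mean((X_val @ w - y_val) ** 2))

# Utility of example i: change in validation loss when i is removed.
full = val_loss(ridge_fit(X, y))
utility = []
for i in range(len(X)):
    keep = np.arange(len(X)) != i
    utility.append(val_loss(ridge_fit(X[keep], y[keep])) - full)

worst = int(np.argmin(utility))  # most negative: removing it helps most
```

The corrupted example scores lowest: removing it reduces validation loss the most. DIVA's closed-form dataset derivative recovers this kind of ranking without refitting the model once per example.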

Training examples that DIVA assigns high weights (left) and low (right) for the task of classifying aircraft. Figure from "DIVA: Dataset derivative of a learning task".

Limitations

“Machine learning ultimately is based on statistical dependencies,” Bernhard Schölkopf recently told Amazon Science. “Oftentimes, it's enough if we work at the surface and just learn from these dependencies. But it turns out that it's only enough as long as we're in this setting where nothing changes.”

The two ICLR papers from the Causal Representation Learning team explore contexts in which learning statistical dependencies is not enough. “Visual representation learning does not generalize strongly within the same domain” describes experiments with image datasets in which each image is defined by specific values of a set of variables — say, different shapes of different sizes and colors, or faces that are either smiling or not and differ in hair color or age.

The researchers test 17 machine learning models and show that, if certain combinations of variables or specific variable values are held out of the training data, all 17 have trouble recognizing them in the test data. For instance, a model trained to recognize small hearts and large squares has trouble recognizing large hearts and small squares. This suggests that we need revised training techniques or model designs to ensure that machine learning systems are really learning what they’re supposed to.

An illustration of the four methods of separating training data (black dots) and test data (red dots) in "Visual representation learning does not generalize strongly within the same domain".

Similarly, in “You mostly walk alone: Analyzing feature attribution in trajectory prediction”, members of the team consider the problem of predicting the trajectories of moving objects as they interact with other objects, an essential capacity for self-driving cars and other AI systems. For instance, if a person is walking down the street, and a ball bounces into her path, it could be useful to know that the person might deviate from her trajectory to retrieve the ball.

Adapting the game-theoretical concept of Shapley values, which enable the isolation of different variables’ contributions to an outcome, the researchers examine the best-performing recent models for predicting trajectories in interactive contexts and show that, for the most part, their predictions are based on past trajectories; they pay little attention to the influence of interactions.
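Shapley values can be computed exactly when the number of "players" is small. The sketch below uses three made-up features (the agent's own past trajectory and two interactions) and made-up coalition values; the real analysis attributes a trajectory model's predictions, but the arithmetic is the same.

```python
import itertools
import math

players = ["past", "near", "far"]   # hypothetical feature names

# v(S): hypothetical prediction quality using only the features in S.
v = {
    frozenset(): 0.0,
    frozenset({"past"}): 0.7,
    frozenset({"near"}): 0.2,
    frozenset({"far"}): 0.05,
    frozenset({"past", "near"}): 0.85,
    frozenset({"past", "far"}): 0.72,
    frozenset({"near", "far"}): 0.22,
    frozenset({"past", "near", "far"}): 0.9,
}

def shapley(player):
    """Exact Shapley value: the player's marginal contribution v(S+p) - v(S),
    averaged over coalitions S with the standard combinatorial weights."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for r in range(n):
        for S in itertools.combinations(others, r):
            S = frozenset(S)
            weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
            total += weight * (v[S | {player}] - v[S])
    return total

attributions = {p: shapley(p) for p in players}
```

By the efficiency property, the attributions sum to the full model's value; in this toy setup, almost all of it is attributed to past trajectory, mirroring the paper's finding that most models lean on past motion rather than interactions.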

A new method enables the comparison of different trajectory prediction models according to the extent to which they use social interactions for making predictions (left: none; middle: weak; right: strong). The target agent, whose future trajectory is to be predicted, is shown in red, and modeled interactions are represented by arrows whose width indicates interaction strength. From "You mostly walk alone: Analyzing feature attribution in trajectory prediction".

The one exception is a model trained on a dataset of basketball video, where all the players’ movements are constantly coordinated. There, existing models do indeed learn to recognize the influence of interaction. This suggests that careful curation of training data could enable existing models to account for interactions when predicting trajectories.

* Contribute to the growth of knowledge of our team and the scientific community with internal and external publications or presentations. About the team * At the Modeling and Optimization (MOP) team, we use optimization, algorithm design, statistics, and machine learning to improve decision-making capabilities across WW Operations and Amazon Logistics. * We focus on transportation topology, labor and resource planning, routing science, visualization research, data science and development, and process optimization. * We create models to simulate, optimize, and control the fulfillment network with the objective of reducing cost while improving speed and reliability. * We support multiple business line, therefore maintain a comprehensive and objective view, coordinating solutions across organizational lines where possible. We are open to hiring candidates to work out of one of the following locations: Bellevue, WA, USA
US, CA, Santa Clara
Amazon AI is looking for world class scientists and engineers to join its AWS AI. This group is entrusted with developing core natural language processing, generative AI, deep learning and machine learning algorithms for AWS. You will invent, implement, and deploy state of the art machine learning algorithms and systems. You will build prototypes and explore conceptually new solutions. You will interact closely with our customers and with the academic community. You will be at the heart of a growing and exciting focus area for AWS and work with other acclaimed engineers and world famous scientists. A day in the life Inclusive Team Culture Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon’s culture of inclusion is reinforced within our 14 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust. Work/Life Balance Our team puts a high value on work-life balance. It isn’t about how many hours you spend at home or at work; it’s about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives. Mentorship & Career Growth Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. 
We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded engineer and enable them to take on more complex tasks in the future. About the team The Amazon Web Services (AWS) Next Gen DevX (NGDE) team uses generative AI and foundation models to reimagine the experience of all builders on AWS. From the IDE to web-based tools and services, AI will help engineers work on large and small applications. We explore new technologies and find creative solutions. Curiosity and an explorative mindset can find a place here to impact the life of engineers around the world. If you are excited about this space and want to enlighten your peers with new capabilities, this is the team for you. We are open to hiring candidates to work out of one of the following locations: Santa Clara, CA, USA