Amazon at ICLR: Graphs, time series, and more

Other paper topics include natural-language processing, dataset optimization, and the limits of existing machine learning techniques.

Time series forecasting and graph representations of data are both major topics of research at Amazon: time series forecasting is crucial to both supply chain optimization and product recommendation, and graph representations help make sense of the large datasets that are common at Amazon’s scale, such as the Amazon product catalogue.

So it’s no surprise that both topics are well represented among the Amazon papers at the 2022 International Conference on Learning Representations (ICLR), which takes place this week. One paper touches on another of Amazon’s core scientific interests: natural-language processing, or computation involving free-form text inputs.

The remaining Amazon papers discuss more general machine learning techniques, such as data augmentation, or automatically selecting or generating training examples that can improve the performance of machine learning models. Another paper looks at dataset optimization more generally, proposing a technique that could be used to evaluate individual examples for inclusion in a dataset or exclusion from it. And two papers from Amazon Web Services’ Causal-Representation Learning team, which includes Amazon vice president and distinguished scientist Bernhard Schölkopf, examine the limitations of existing approaches to machine learning.

Graphs

Graphs represent data as nodes, usually depicted as circles, and edges, usually depicted as line segments connecting nodes. Graph-structured data can make machine learning more efficient, because the graph explicitly encodes relationships that a machine learning model would otherwise have to infer from data correlations.
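As a toy illustration, a co-purchase graph can be represented as a simple adjacency list; the products and edges below are hypothetical, not Amazon catalogue data.

```python
# A toy co-purchase graph as an adjacency list; products and edges are
# hypothetical illustrations, not Amazon catalogue data.
co_purchase_graph = {
    "coffee_maker": ["coffee_filters", "mug"],
    "coffee_filters": ["coffee_maker"],
    "mug": ["coffee_maker"],
    "phone_case": [],  # an isolated node, with no observed edges
}

# The edges encode relationships explicitly, so a model can read them
# directly instead of inferring them from co-occurrence statistics.
for product, neighbors in co_purchase_graph.items():
    print(product, "->", neighbors)
```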

Graph neural networks (GNNs) are a powerful tool for working with graph-structured data. Like most neural networks, GNNs produce embeddings, or fixed-length vector representations of input data, that are useful for particular computational tasks. In the case of GNNs, the embeddings capture information about both the object associated with a given node and the structure of the graph.
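To make that concrete, here is a minimal sketch of one round of GNN message passing, in which each node’s new embedding mixes its own features with the average of its neighbors’. The mean aggregator, single weight matrix, and dimensions are illustrative assumptions, not a model from any of the papers below.

```python
import numpy as np

def gnn_layer(features, adjacency, weight):
    """One round of message passing: each node's embedding combines its
    own features with the mean of its neighbors' features."""
    degrees = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = adjacency @ features / degrees        # aggregate neighbors
    combined = np.concatenate([features, neighbor_mean], axis=1)
    return np.maximum(combined @ weight, 0)               # linear map + ReLU

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))            # 4 nodes with 8-dimensional features
A = np.array([[0, 1, 1, 0],            # toy adjacency matrix
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0]])           # node 3 has no edges
W = rng.normal(size=(16, 5))
print(gnn_layer(X, A, W).shape)        # (4, 5): one embedding per node
```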

In real-world applications — say, a graph indicating which products tend to be purchased together — some nodes may not be connected to any others, and some connections may be spurious inferences from sparse data. In “Cold Brew: Distilling graph node representations with incomplete or missing neighborhoods”, Amazon scientists present a method for handling nodes whose edge data is absent or erroneous.

Cold Brew addresses the real-world problem in which graph representations of data feature potentially spurious connections (tail nodes) or absent connections (cold start). Figure from "Cold Brew: Distilling graph node representations with incomplete or missing neighborhoods".

In a variation on knowledge distillation, they first train a teacher network, a conventional GNN that requires every input node to be connected to the rest of the graph, to produce embeddings for connected nodes. Then they train a standard multilayer perceptron (a student network) to mimic the teacher’s outputs. Unlike a conventional GNN, the student network doesn’t explicitly use structural data to produce embeddings, so it can also handle unconnected nodes. On several benchmark datasets, the method demonstrates significant improvements over existing methods of inferring graph structure.
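A hedged sketch of that distillation step follows; the dimensions, optimizer, and random data are stand-ins, not the Cold Brew implementation. The student regresses onto precomputed teacher embeddings, and at inference time it needs only node features, so it can embed isolated nodes too.

```python
import torch
import torch.nn as nn

# Student MLP: maps raw node features to the teacher's embedding space.
student = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(node_features, teacher_embeddings):
    """One distillation step: the student regresses onto embeddings
    precomputed by a GNN teacher on the connected part of the graph."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(student(node_features), teacher_embeddings)
    loss.backward()
    optimizer.step()
    return loss.item()

features = torch.randn(128, 64)        # toy node features
teacher_out = torch.randn(128, 32)     # stand-in for GNN teacher embeddings
print(distill_step(features, teacher_out))

# At inference, the student needs only node features, so nodes with
# missing or untrustworthy edges can still be embedded.
cold_start_node = torch.randn(1, 64)
embedding = student(cold_start_node)
```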

Across disciplines, AI research has recently seen a surge in the popularity of self-supervised learning, in which a machine learning model is first trained on a “proxy task”, which is related to but not identical to the target task, using unlabeled or automatically labeled data. Then the model is fine-tuned on labeled data for the target task.

With GNNs, the proxy tasks generally teach the network only how to represent node data. But in “Node feature extraction by self-supervised multi-scale neighborhood prediction”, Amazon researchers and their colleagues at the University of Illinois and UCLA present a proxy task that teaches the GNN how to represent information about graph structure as well. Their approach is highly scalable, working with graphs with hundreds of millions of nodes, and in experiments, they show that it improves GNN performance on three benchmark datasets, by almost 30% on one of them.
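The following is a simplified sketch of what such a proxy task can look like, under the assumption that neighborhoods can be summarized by clustering nodes on their adjacency rows; the paper builds its coarse-to-fine hierarchy with XR-Transformer machinery rather than the plain k-means used here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy graph: each row of the adjacency matrix is a node's neighborhood.
adjacency = (rng.random((500, 500)) < 0.02).astype(float)

# Cluster nodes into coarse-to-fine "neighborhoods" at several scales.
multi_scale_labels = {}
for k in (4, 16, 64):
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    multi_scale_labels[k] = km.fit_predict(adjacency)

# Self-supervised proxy task: from a node's raw features alone, predict
# its cluster at every scale, which forces the encoder to capture
# information about graph structure, not just node content.
```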

XR-Transformer creates a hierarchical tree that sorts data into finer- and finer-grained clusters. In the context of graph neural networks, the clusters represent graph neighborhoods. Figure from "Node feature extraction by self-supervised multi-scale neighborhood prediction".

The approach, which builds on Amazon’s XR-Transformer model and is known as GIANT-XRT, has already been widely adopted and is used by the leading teams in several of the public Open Graph Benchmark competitions hosted by Stanford University (leaderboard 1 | leaderboard 2 | leaderboard 3).

Where traditional domain adaptation (left) treats all target domains the same, a new method (right) uses graphs to represent relationships between source and target domains. For instance, weather patterns in adjacent U.S. states tend to be more similar than the weather patterns in states distant from each other. Figure from “Graph-relational domain adaptation”.

A third paper, “Graph-relational domain adaptation”, applies graphs to the problem of domain adaptation, or optimizing a machine learning model to work on data with a different distribution than the data it was trained on. Conventional domain adaptation techniques treat all target domains the same, but the Amazon researchers and their colleagues at Rutgers and MIT instead use graphs to represent relationships among all source and target domains. For instance, weather patterns in adjacent U.S. states tend to be more similar than the weather patterns in states distant from each other. In experiments, the researchers show that their method improves on existing domain adaptation methods on both synthetic and real-world datasets.

Time series

Time series forecasting is essential to demand prediction, which Amazon uses to manage inventory, and it’s also useful for recommendation, which can be interpreted as continuing a sequence of product (say, music or movie) selections.

In “Bridging recommendation and marketing via recurrent intensity modeling”, Amazon scientists adapt existing mechanisms for making personal recommendations on the basis of time series data (purchase histories) to the problem of identifying the target audience for a new product.

Product recommendation can be interpreted as a time-series-forecasting problem, in which a product is recommended according to its likelihood of continuing a sequence of purchases. Figure from "Bridging recommendation and marketing via recurrent intensity modeling".

Where methods for identifying a product’s potential customers tend to treat customers as atemporal collections of purchase decisions, the Amazon researchers instead frame the problem as optimizing both the product’s relevance to the customer and the customer’s activity level, or likelihood of buying any product in a given time span. In experiments, this improved the accuracy of a prediction model on several datasets.
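A toy illustration of that framing appears below; the numbers and the exponential activity model are stand-ins for the paper’s learned intensity model. Candidate customers are ranked by the product of item relevance and overall purchase activity.

```python
import numpy as np

# Per-user inputs; in practice these would come from learned models.
relevance = np.array([0.9, 0.4, 0.7])      # P(buys THIS item | a purchase)
purchase_rate = np.array([0.1, 2.0, 0.5])  # expected purchases per week

weeks = 4.0
# Probability of making any purchase at all during the campaign window,
# assuming purchases arrive at a constant (Poisson) rate.
activity = 1.0 - np.exp(-purchase_rate * weeks)

audience_score = relevance * activity      # relevance AND activity matter
print(np.argsort(-audience_score))         # users ranked for targeting
```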

One obstacle to the development of machine learning models that base predictions on time series data is the availability of training examples. In “PSA-GAN: Progressive self attention GANs for synthetic time series”, Amazon researchers propose a method for using generative adversarial networks (GANs) to artificially produce time series training data.

GANs pit generators, which produce synthetic data, against discriminators, which try to distinguish synthetic data from real. The two are trained together, each improving the performance of the other.

The Amazon researchers show how to synthesize plausible time series data by progressively growing — or adding network layers to — both the generator and the discriminator. This enables the generator to first learn general characteristics that the time series as a whole should have, then learn how to produce series that exhibit those characteristics.
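A minimal sketch of the progressive-growing idea follows; the block architecture, sizes, and elided training loop are all assumptions, not the PSA-GAN code. Both networks start shallow, and a layer is appended to each after every training stage, so coarse, series-level structure is learned before fine-grained detail.

```python
import torch
import torch.nn as nn

def make_block(channels=32):
    return nn.Sequential(
        nn.Conv1d(channels, channels, kernel_size=3, padding=1), nn.ReLU())

generator = nn.ModuleList([make_block()])
discriminator = nn.ModuleList([make_block()])

for stage in range(3):
    # ... adversarial training at the current depth would go here ...
    generator.append(make_block())       # grow both networks by one layer
    discriminator.append(make_block())

noise = torch.randn(8, 32, 24)           # (batch, channels, time steps)
x = noise
for block in generator:                  # the grown generator's forward pass
    x = block(x)
print(x.shape)                           # torch.Size([8, 32, 24])
```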

Data augmentation

In addition to the paper on synthetic time series, one of Amazon’s other papers at ICLR, “Deep AutoAugment”, also focuses on data augmentation.

It’s become standard practice to augment the datasets used to train machine learning models by subjecting real data to sequences of transformations. For instance, a training image for a computer vision task might be flipped, stretched, rotated or cropped, or its color or contrast might be modified. Typically, the first few transformations are selected automatically, based on experiments in which a model is trained and retrained, and then domain experts add a few additional transformations to try to make the modified data look like real data.

In “Deep AutoAugment”, former Amazon senior applied scientist Zhi Zhang and colleagues at Michigan State University propose a method for fully automating the construction of a data augmentation pipeline. The goal is to continuously add transformations that steer the feature distribution of the synthetic data toward that of the real data. To do that, the researchers use gradient matching, or identifying training data whose sequential updates to the model parameters look like those of the real data. In tests, this approach improved on 10 other data augmentation techniques across four sets of real data.
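A hedged sketch of gradient matching in this spirit appears below, with a toy model and a noise-based “augmentation”; it is not the Deep AutoAugment implementation. A candidate augmentation is scored by how well the model gradients it induces align with those computed on real data.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                   # toy classifier
loss_fn = nn.CrossEntropyLoss()

def flat_grad(x, y):
    """Gradient of the loss on (x, y), flattened into one vector."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

real_x, real_y = torch.randn(32, 10), torch.randint(0, 2, (32,))
aug_x = real_x + 0.1 * torch.randn_like(real_x)  # a candidate "augmentation"

# Higher alignment means the augmented data would update the model's
# parameters much as the real data would.
alignment = torch.cosine_similarity(
    flat_grad(real_x, real_y), flat_grad(aug_x, real_y), dim=0)
print(alignment.item())
```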

Natural-language processing

Many natural-language-processing tasks involve pairwise comparison of sentences. Cross-encoders, which map pairs of sentences against each other, yield the most accurate comparison, but they’re computationally intensive, as they need to compute new mappings for every sentence pair. Moreover, converting a pretrained language model into a cross-encoder requires fine-tuning it on labeled data, which is resource intensive to acquire.

Bi-encoders, on the other hand, embed sentences in a common representational space and measure the distances between them. This is efficient but less accurate.
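The contrast can be sketched with toy encoders; the embedding layer and linear head below are stand-ins for a pretrained language model, not any system from the paper. The bi-encoder embeds each sentence once, so embeddings can be cached and reused, while the cross-encoder must process every pair from scratch.

```python
import torch
import torch.nn as nn

embed = nn.EmbeddingBag(10000, 64)   # stand-in for a sentence encoder
pair_scorer = nn.Linear(128, 1)      # stand-in for a cross-encoder head

def bi_encoder_score(tokens_a, tokens_b):
    # Each sentence is embedded independently; embeddings can be cached.
    return torch.cosine_similarity(embed(tokens_a), embed(tokens_b))

def cross_encoder_score(tokens_a, tokens_b):
    # The pair is processed jointly: typically more accurate, but a new
    # forward pass is required for every pair.
    joint = torch.cat([embed(tokens_a), embed(tokens_b)], dim=-1)
    return pair_scorer(joint)

s1 = torch.randint(0, 10000, (1, 12))  # toy token IDs for two sentences
s2 = torch.randint(0, 10000, (1, 9))
print(bi_encoder_score(s1, s2), cross_encoder_score(s1, s2))
```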

In “Trans-encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations”, Amazon researchers, together with a former intern, propose a model that is trained in an entirely unsupervised way, that is, without labeled examples, and captures advantages of both approaches.

The trans-encoder training process, in which a bi-encoder trained in an unsupervised fashion creates training targets for a cross-encoder, which in turn outputs training targets for the bi-encoder.

The researchers begin with a pretrained language model, fine-tune it in an unsupervised manner using bi-encoding, then use the fine-tuned model to generate training targets for cross-encoding. They then use the outputs of the cross-encoding model to fine-tune the bi-encoder, iterating back and forth between the two approaches until training converges. In experiments, their model outperformed multiple state-of-the-art unsupervised sentence encoders on several benchmark tasks, with improvements of up to 5% over the best-performing prior models.
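Here is a runnable toy version of that alternation, with tiny encoders and random token IDs standing in for a pretrained language model and real sentence pairs: each encoder is periodically retrained on pseudo-labels produced by the other.

```python
import torch
import torch.nn as nn

emb = nn.EmbeddingBag(1000, 32)       # toy bi-encoder
cross_head = nn.Linear(64, 1)         # toy cross-encoder head

def bi_score(a, b):
    return torch.cosine_similarity(emb(a), emb(b))

def cross_score(a, b):
    return torch.sigmoid(cross_head(torch.cat([emb(a), emb(b)], -1))).squeeze(-1)

pairs = [(torch.randint(0, 1000, (1, 8)), torch.randint(0, 1000, (1, 8)))
         for _ in range(16)]           # random stand-ins for sentence pairs

for _ in range(3):                     # a few rounds of mutual distillation
    # Bi-encoder similarities become training targets for the cross-encoder.
    targets = torch.cat([bi_score(a, b).detach() for a, b in pairs])
    opt = torch.optim.Adam(cross_head.parameters(), lr=1e-2)
    for _ in range(20):
        opt.zero_grad()
        preds = torch.cat([cross_score(a, b) for a, b in pairs])
        nn.functional.mse_loss(preds, targets).backward()
        opt.step()
    # Cross-encoder scores then become targets for the bi-encoder.
    targets = torch.cat([cross_score(a, b).detach() for a, b in pairs])
    opt = torch.optim.Adam(emb.parameters(), lr=1e-2)
    for _ in range(20):
        opt.zero_grad()
        preds = torch.cat([bi_score(a, b) for a, b in pairs])
        nn.functional.mse_loss(preds, targets).backward()
        opt.step()
```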

Dataset optimization

Weeding errors out of a dataset, selecting new training examples to augment a dataset, and determining how to weight the data in a dataset to better match a target distribution are all examples of dataset optimization. Assessing individual training examples’ contribution to the accuracy of a model, however, is difficult: retraining the model on a dataset with and without every single example is hardly practical.

In “DIVA: Dataset derivative of a learning task”, Amazon researchers show how to compute the dataset derivative: a function that can be used to assess a given training example’s utility relative to a particular neural-network model. During training, the model learns not only the weights of network parameters but also weights for individual training examples. The researchers show that, using a linearization technique, they can derive a closed-form equation for the dataset derivative, allowing them to assess the utility of a given training example without retraining the network.
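The flavor of per-example weighting can be sketched as follows; note that this naive joint training is only a stand-in, since DIVA’s contribution is a closed-form derivative that avoids retraining altogether.

```python
import torch
import torch.nn as nn

model = nn.Linear(5, 2)                            # toy classifier
example_logits = nn.Parameter(torch.zeros(100))    # one weight per example
opt = torch.optim.Adam(list(model.parameters()) + [example_logits], lr=1e-2)

x, y = torch.randn(100, 5), torch.randint(0, 2, (100,))
per_example_loss = nn.CrossEntropyLoss(reduction="none")

for _ in range(50):
    opt.zero_grad()
    weights = torch.softmax(example_logits, dim=0)  # learned dataset weighting
    (weights * per_example_loss(model(x), y)).sum().backward()
    opt.step()

# Examples with low learned weight are candidates for exclusion;
# high-weight examples are the most useful for the task.
print(example_logits.detach().topk(3).indices)
```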

Training examples that DIVA assigns high weights (left) and low (right) for the task of classifying aircraft. Figure from "DIVA: Dataset derivative of a learning task".

Limitations

“Machine learning ultimately is based on statistical dependencies,” Bernhard Schölkopf recently told Amazon Science. “Oftentimes, it's enough if we work at the surface and just learn from these dependencies. But it turns out that it's only enough as long as we're in this setting where nothing changes.”

The two ICLR papers from the Causal Representation Learning team explore contexts in which learning statistical dependencies is not enough. “Visual representation learning does not generalize strongly within the same domain” describes experiments with image datasets in which each image is defined by specific values of a set of variables — say, different shapes of different sizes and colors, or faces that are either smiling or not and differ in hair color or age.

The researchers test 17 machine learning models and show that, if certain combinations of variables or specific variable values are held out of the training data, all 17 have trouble recognizing them in the test data. For instance, a model trained to recognize small hearts and large squares has trouble recognizing large hearts and small squares. This suggests that we need revised training techniques or model designs to ensure that machine learning systems are really learning what they’re supposed to.

An illustration of the four methods of separating training data (black dots) and test data (red dots) in "Visual representation learning does not generalize strongly within the same domain".

Similarly, in “You mostly walk alone: Analyzing feature attribution in trajectory prediction”, members of the team consider the problem of predicting the trajectories of moving objects as they interact with other objects, an essential capacity for self-driving cars and other AI systems. For instance, if a person is walking down the street, and a ball bounces into her path, it could be useful to know that the person might deviate from her trajectory to retrieve the ball.

Adapting the game-theoretical concept of Shapley values, which enable the isolation of different variables’ contributions to an outcome, the researchers examine the best-performing recent models for predicting trajectories in interactive contexts and show that, for the most part, their predictions are based on past trajectories; they pay little attention to the influence of interactions.
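Shapley values themselves are straightforward to compute exactly when the number of “players” is small. In the sketch below, the players and the value function are hypothetical stand-ins: one player represents the agent’s past trajectory, the other the modeled social interactions, and the value function scores prediction quality for a coalition.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: each player's weighted average marginal
    contribution over all coalitions of the other players."""
    n = len(players)
    shapley = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for size in range(n):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                marginal = value(set(coalition) | {p}) - value(set(coalition))
                shapley[p] += weight * marginal
    return shapley

# Toy score: past trajectory contributes most; interactions add little.
def value(coalition):
    return 0.8 * ("past" in coalition) + 0.05 * ("interactions" in coalition)

print(shapley_values(["past", "interactions"], value))
# {'past': 0.8, 'interactions': 0.05} -> predictions lean on past motion
```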

A new method enables the comparison of different trajectory prediction models according to the extent to which they use social interactions for making predictions (left: none; middle: weak; right: strong). The target agent, whose future trajectory is to be predicted, is shown in red, and modeled interactions are represented by arrows whose width indicates interaction strength. From "You mostly walk alone: Analyzing feature attribution in trajectory prediction".

The one exception is models trained on a dataset of basketball video, where all the players’ movements are constantly coordinated. There, existing models do indeed learn to recognize the influence of interaction. This suggests that careful curation of training data could enable existing models to account for interactions when predicting trajectories.
