Graceful AI

How to make trained systems evolve gracefully.

As machine-learning-based decision systems improve rapidly, we are discovering that it is no longer enough for them to perform well on their own. They should also behave nicely toward their predecessors. When we replace an old trained classifier with a new one, we should expect a smooth transition and a peaceful transfer of decision powers.

Stefano Soatto, vice president of applied science for AWS AI.
Credit: Todd Cheney

At Amazon Web Services (AWS), we are constantly working to improve the performance of our learning-based classification systems. Performance is typically measured by average error on test data that are representative of future use cases. We scientists get very excited when we can reduce the average error, and we hope that customers will be delighted when they replace the existing system with a new and improved one. 

However, it is possible for a new model to significantly improve average performance and yet introduce errors that the old model did not make. Those errors can be rare yet so detrimental as to nullify the benefit of the improved model. In some cases, post-processing pipelines built on top of a model can break. In other cases, users are so accustomed to the behavior of the old system that any introduced error contributes to a perceived “regression” in performance.

When updating an old classifier (red) to a new one (dashed blue line), we correct mistakes (top right, white), but we also introduce new ones (negative flips, bottom left, red). While, on average, the errors decrease (from 57% to 42% in this toy example), regression can wreak havoc with downstream processing, nullifying the benefit of the update.
From "Positive-congruent training: Towards regression-free model updates"

You may have experienced this phenomenon when using the search feature in your photo collection. Occasionally, the provider updates the photo management software, presumably improving it. However, if an image that you were able to retrieve previously suddenly goes missing from the search, the natural reaction is surprise: How is this version any better? Give me the old one back!

When the software update occurs, the search feature is usually unavailable for a period of time; the larger your photo collection, the longer the interruption typically lasts. During this time, the system reprocesses old images to create indices and clusters them based on identities. If the model introduces new mistakes, old images may be left out of searches that used to retrieve them.

This prompts the question: why is it necessary to reprocess old data at all? Can we design and train new learning-based models to be compatible with previous ones, so that reprocessing the entire gallery becomes unnecessary?

These questions point to a broader need: to train machine-learning-based systems not in isolation but in reference to other models. Specifically, we want the new models to be compatible with classifiers or clustering algorithms designed for the old models, and we want them not to introduce new mistakes.

Compatible updates

Today, requirements beyond accuracy have begun to drive the machine learning process. These demands include explainability, transparency, fairness, and, now, compatibility and regression minimization. We call the ability to meet those demands “graceful AI”. 

We at AWS first faced this challenge when responding to a customer request to reduce the cost of re-indexing data, which can be significant for large photo collections. 

At the time, there was no literature on the topic. We trained a deep-learning model to minimize the average error while using the “classifier head” of an old model — the last few layers of the model, which issue the final classification decision. In other words, we forced the data representation computed by the new model to live in the same space as the old one, so the same clustering or decision rules could be used without the need to re-index old data. 
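
To make the idea concrete, here is a minimal PyTorch-style sketch (the function and module names are hypothetical, and this illustrates the principle rather than the exact training setup used in the paper): the new backbone is trained against the old, frozen classifier head, so its embeddings are forced into the space the old head expects.

```python
import torch.nn as nn

def backward_compatible_step(new_backbone, old_classifier_head, images, labels, optimizer):
    """One training step that keeps the new embedding compatible with the old classifier head.

    `new_backbone` maps images to embeddings; `old_classifier_head` is the frozen last
    layer(s) of the previous model. Only the new backbone receives gradient updates.
    """
    old_classifier_head.eval()
    for p in old_classifier_head.parameters():
        p.requires_grad_(False)                    # the old head stays fixed

    embeddings = new_backbone(images)              # new representation
    logits = old_classifier_head(embeddings)       # classified by the *old* head
    loss = nn.functional.cross_entropy(logits, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```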

Without backward-compatible representation, updating the embedding model for a retrieval/search system means that all previously processed gallery features have to be recomputed by the new model (backfilling), as the new embedding cannot be directly compared with the old one. With a backward-compatible representation, direct comparison becomes possible, eliminating the need to backfill.
From "Towards backward-compatible representation learning"

If this approach worked, customers could start using new models immediately, with no re-indexing time or cost, and the old indexed data could be combined with the new. And it did work, as we described in the paper “Towards backward-compatible representation learning”, presented at last year's Conference on Computer Vision and Pattern Recognition (CVPR). It was the first paper in this increasingly important area of investigation in machine learning, around which we are organizing a tutorial at the upcoming International Conference on Computer Vision (ICCV).
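
The payoff shows up at query time: a query embedded by the new model can be scored directly against gallery vectors indexed by the old model, with no backfilling. The sketch below assumes cosine similarity over in-memory tensors and hypothetical function names; a production retrieval index would of course look different.

```python
import torch
import torch.nn.functional as F

def search_without_backfill(new_model, query_image, old_gallery_embeddings, top_k=5):
    """Retrieve from an old index using a new, backward-compatible query model.

    `old_gallery_embeddings` ([N, D]) were computed and stored by the previous model
    and are never recomputed; only the query passes through the new model.
    """
    with torch.no_grad():
        query = new_model(query_image.unsqueeze(0))        # [1, D] embedding
    query = F.normalize(query, dim=1)
    gallery = F.normalize(old_gallery_embeddings, dim=1)
    scores = gallery @ query.t()                           # cosine similarity to each gallery item
    return torch.topk(scores.squeeze(1), k=top_k).indices  # indices of the best matches
```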

For services that require more complex post-processing than clustering, it is paramount to minimize the number of new errors introduced by model updates. In a forthcoming oral presentation at CVPR, our team will present an approach that we call positive-congruent training, or PC training, which aims to train a new classifier without introducing errors relative to the old one. This is a first step toward regression-constrained training, and it is necessary to avoid the rare but harmful mistakes that a model update should never introduce.

PC training is not just a matter of forcing the new model to mimic the old one — a process known as model distillation. Model distillation mimics the old model, including its errors; we want to be close to the old model only when it gets it right. 
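
One simple way to express that intuition in code (a sketch of the general idea, not necessarily the loss used in the PC-training paper) is to add a distillation-style term that is masked so it applies only on samples the old model classified correctly:

```python
import torch.nn.functional as F

def pc_style_loss(new_logits, old_logits, labels, alpha=1.0):
    """Cross-entropy plus a distillation term applied only where the old model is right.

    On samples the old model got wrong, the new model learns from the labels alone,
    so the old model's mistakes are not copied.
    """
    ce = F.cross_entropy(new_logits, labels)

    old_correct = (old_logits.argmax(dim=1) == labels).float()   # 1 where the old model is right
    per_sample_kl = F.kl_div(F.log_softmax(new_logits, dim=1),
                             F.softmax(old_logits, dim=1),
                             reduction="none").sum(dim=1)
    distill = (per_sample_kl * old_correct).sum() / old_correct.sum().clamp(min=1.0)

    return ce + alpha * distill
```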

Even when the average error has been driven to its minimum, it is still possible to reduce what we call the "negative flip rate" (NFR): the fraction of test samples that the old model classified correctly but the new model gets wrong. This is because errors can be traded, changing which samples are misclassified while keeping the average error rate constant (unless the average error rate is precisely zero, which is almost never the case in the real world). Minimizing the NFR is therefore a criterion separate from the standard error rate, and PC training represents a new branch of research in machine learning.
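
Measuring the NFR is straightforward. A small hypothetical helper, assuming arrays of predicted and true labels for the same test set, might look like this:

```python
import numpy as np

def negative_flip_rate(old_preds, new_preds, labels):
    """Fraction of test samples the old model classified correctly but the new model gets wrong."""
    old_preds, new_preds, labels = map(np.asarray, (old_preds, new_preds, labels))
    negative_flips = (old_preds == labels) & (new_preds != labels)
    return negative_flips.mean()
```

Two updated models with identical average error can have very different NFRs, which is why the NFR is worth optimizing in its own right.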


Machine-learning-based systems will continue to evolve, and eventually we will do away with the artificial separation of training (when the model parameters are learned from a fixed training dataset) and inference (when new data is presented to elicit a decision or action). As we take steps toward such “lifelong learning”, it is important for new models developed in the meantime to play nicely with existing ones.

We have sown the first seeds of work in this area, but much remains to be done. As models are repeatedly updated, a growing set of compatibility constraints will ultimately weigh on overall performance, much as the requirement of backward compatibility with all previous versions makes some software unwieldy.

We are pleased that some of our models at AWS AI Applications are already backward-compatible, which means that customers will be able to upgrade to new models without having to change their processing pipelines or re-index old data. In 2021, any transfer of decision power should occur without drama. 

Modified models

Another version of the incompatibility problem arises when one wishes to deploy the same system on different devices with diverse resource constraints. One might, for instance, have a large and powerful model running in the cloud and smaller versions of it running on edge devices such as smartphones.

We’ve found that, to ensure compatibility, it’s not enough for the smaller models to approximate the accuracy of the large model; they also need to approximate its architecture. Also at the upcoming CVPR, we will present a paper on “heterogeneous visual search”, which shows how to enforce this type of compatibility across platforms.

Finally, all of the above would be easier if deep neural networks were linear systems, and training consisted of minimizing a convex loss function. As we all know, this is not the case. The niche literature on linearizing deep neural networks has mostly focused on analyzing those networks’ behavior; their performance has been far below that of the full nonlinear, nonconvex originals. 

However, we have recently shown that, if linearization is done right, by modifying the loss function, the model, and the optimization, we can train linearized models that perform just as well as their nonlinear counterparts. “LQF: Linear quadratic fine-tuning”, also to be presented at CVPR, describes modifying the architecture of a ResNet backbone by replacing ReLU with leaky ReLU, modifying the loss function from cross-entropy to least squares, and modifying the optimization by preconditioning with Kronecker factorization.
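
As a rough illustration of the first two modifications (the Kronecker-factored preconditioning, which is the more involved part, is omitted), and assuming a standard torchvision ResNet rather than the exact LQF setup, the changes might look like this:

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def replace_relu_with_leaky(module, negative_slope=0.01):
    """Recursively swap every ReLU in the network for a leaky ReLU."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.LeakyReLU(negative_slope, inplace=True))
        else:
            replace_relu_with_leaky(child, negative_slope)

def make_linearizable_resnet(num_classes):
    """Prepare a pretrained ResNet backbone for LQF-style fine-tuning."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    replace_relu_with_leaky(model)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # fresh linear head
    return model

def least_squares_loss(logits, labels, num_classes):
    """Replace cross-entropy with a least-squares loss on one-hot targets."""
    targets = F.one_hot(labels, num_classes).float()
    return F.mse_loss(logits, targets)
```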

We are excited to continue exploring how these and other developments can lead to more transparent, more interpretable, and more graceful AI systems.
