Building systems that automatically adjust to workloads and data

Tim Kraska, who joined Amazon this summer to build the new Learned Systems research group, explains the power of “instance optimization”.

As an associate professor of electrical engineering and computer science at MIT, Tim Kraska researched instance-optimized database systems, or systems that can automatically adapt to new workloads with minimal human involvement.

Tim Kraska, an associate professor of electrical engineering and computer science at MIT and director of applied science for Amazon Web Services.

Earlier this year, Amazon hired Kraska and his team to further develop this technology. Currently, Kraska is on leave from MIT, and as director of applied science for Amazon Web Services (AWS), he is helping establish Amazon’s new Learned Systems Group (LSG), which will focus on integrating machine learning (ML) into system design. The group’s first project is to bring instance optimization to AWS’s data warehousing service, Amazon Redshift. Kraska spoke with Amazon Science about the value of instance optimization and the attraction of doing research in an industrial setting.

  1. Q. What is instance optimization?

    A. If you develop a system from scratch for a particular use case, you can get orders of magnitude better performance, because you can tailor every system component to that use case. However, in most cases you don't want to do that, because it's a huge effort. In the case of databases, the saying is that it normally takes at least seven years to get a new system to the point where it's usable and stable.

    The idea of instance optimization is that, rather than build one system per use case, we build a system that self-adjusts — instance-optimizes itself — to a particular scenario to get as close as possible to a hand-tuned solution.

  2. Q. How does it do that?

    A. There are different ways to achieve the self-adjustment. With any system, you have a bunch of knobs and a bunch of design choices. If you take Redshift, you can tune the buffer size; you can create materialized views; you can create different types of sort orders. And database administrators can adjust these knobs and make design choices, based on their workloads, to get better performance.


    The first form of self-adjustment is to make those decisions automatically. You have, let's say, a machine learning model that observes the workload and figures out how to adjust these knobs and what materialized views and sort keys to create. Redshift already does this, for example, with a feature called Automated Materialized Views, which accelerates query performance.
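    As a toy illustration of this first kind of self-adjustment (and not Redshift’s actual mechanism), the sketch below watches a query log and ranks hypothetical materialized-view candidates by the work they are estimated to save; the candidate structure, the 0.9 saving factor, and all the cost numbers are assumptions made for the example.

    ```python
    from collections import Counter
    from dataclasses import dataclass


    @dataclass(frozen=True)
    class ViewCandidate:
        """A hypothetical materialized-view candidate: a table plus group-by columns."""
        table: str
        group_by: tuple
        build_cost: float  # assumed one-time cost to build and maintain the view


    def recommend_views(query_log, candidates, budget=2):
        """Pick the candidates with the highest estimated net benefit on the observed workload.

        query_log holds (table, group_by_columns, scan_cost) tuples observed by the system.
        The benefit model is deliberately naive: a view helps every query that aggregates
        the same table by a subset of the view's group-by columns.
        """
        benefit = Counter()
        for table, group_by, scan_cost in query_log:
            for cand in candidates:
                if cand.table == table and set(group_by) <= set(cand.group_by):
                    benefit[cand] += 0.9 * scan_cost  # assume the view skips most of the scan
        scored = [(benefit[c] - c.build_cost, c) for c in candidates]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [c for score, c in scored[:budget] if score > 0]


    if __name__ == "__main__":
        log = [("orders", ("order_date",), 120.0)] * 50 + [("orders", ("region",), 80.0)] * 5
        cands = [
            ViewCandidate("orders", ("order_date",), build_cost=300.0),
            ViewCandidate("orders", ("region", "order_date"), build_cost=500.0),
        ]
        print(recommend_views(log, cands))
    ```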

    The next step is that, in some cases, it’s possible to replace components with novel techniques that allow either more customization or tuning in ways that weren’t previously possible.

    To give you an example, in the case of data layouts, current systems mainly support partitioning data by one attribute, which could be a composite key. The reason is that the developers of these systems always thought that someone has to eventually make these design choices manually. Thus, in the past, the tendency was to reduce the number of tuning parameters as much as possible.


    This, of course, changes the moment you have automatic tuning techniques using machine learning, which can explore the space much more efficiently. And now maybe the opposite is true: providing more degrees of freedom and more knobs is a good thing, as they offer more potential for customization and, thus, better performance.
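    To make that concrete, here is a small, purely illustrative sketch of what more degrees of freedom can buy: instead of a single partitioning attribute, it scores multi-column sort keys against an observed workload. The prefix-based benefit model and the workload format are assumptions, and a real ML-based tuner would explore this space far more efficiently than the brute-force search shown here.

    ```python
    from itertools import permutations


    def score_sort_key(sort_key, workload):
        """Crude benefit model: a query benefits from the longest prefix of the
        sort key that matches its filter columns (those columns can be pruned)."""
        score = 0
        for filter_cols, frequency in workload:
            prefix = 0
            for col in sort_key:
                if col not in filter_cols:
                    break
                prefix += 1
            score += prefix * frequency
        return score


    def best_sort_key(columns, workload, max_len=3):
        """Exhaustively try multi-column sort keys up to max_len columns long."""
        candidates = [p for k in range(1, max_len + 1) for p in permutations(columns, k)]
        return max(candidates, key=lambda key: score_sort_key(key, workload))


    if __name__ == "__main__":
        # (filter columns, frequency) pairs summarizing an observed workload
        workload = [({"order_date"}, 100), ({"order_date", "region"}, 40), ({"customer_id"}, 5)]
        print(best_sort_key(["order_date", "region", "customer_id"], workload))
    ```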

    The third self-adjustment method is to deeply embed machine learning models into a component of the system, giving you much better performance than is currently possible.

    Every database, for example, has a query optimizer that takes a SQL query and optimizes it to an execution plan, which describes how to actually run that query. This query optimizer is a complex piece of software, which requires very carefully tuned heuristics and cost models to figure out how best to do this translation. The state of the art now is that you treat this as a deep-learning problem. So we talk at that stage about learned components.
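    As a sketch of the learned-components idea, here is a toy cost model in the same spirit, though far simpler than the deep-learning approaches used in practice: it featurizes candidate plans, fits a regression against previously observed runtimes, and picks the plan with the lowest predicted cost. The feature set, the plan representation, and the numbers are invented for illustration; this is not how Redshift’s optimizer works.

    ```python
    import numpy as np


    def featurize(plan):
        """Toy plan featurization: operator counts plus log of estimated input rows.
        A real learned optimizer would encode the whole plan tree (e.g., with tree
        convolutions), not a flat feature vector."""
        return np.array([
            plan["num_joins"],
            plan["num_scans"],
            np.log1p(plan["est_rows"]),
            1.0,  # bias term
        ])


    def fit_cost_model(observed_plans, observed_runtimes):
        """Least-squares fit of runtime on plan features."""
        X = np.stack([featurize(p) for p in observed_plans])
        y = np.asarray(observed_runtimes)
        weights, *_ = np.linalg.lstsq(X, y, rcond=None)
        return weights


    def pick_plan(candidate_plans, weights):
        """Choose the candidate plan with the lowest predicted runtime."""
        predictions = [featurize(p) @ weights for p in candidate_plans]
        return candidate_plans[int(np.argmin(predictions))]


    if __name__ == "__main__":
        history = [
            {"num_joins": 1, "num_scans": 2, "est_rows": 1e4},
            {"num_joins": 3, "num_scans": 4, "est_rows": 1e6},
            {"num_joins": 2, "num_scans": 3, "est_rows": 1e5},
        ]
        runtimes = [0.4, 9.2, 2.1]  # seconds, made up for the example
        w = fit_cost_model(history, runtimes)
        candidates = [
            {"num_joins": 2, "num_scans": 2, "est_rows": 5e4},
            {"num_joins": 4, "num_scans": 5, "est_rows": 5e6},
        ]
        print(pick_plan(candidates, w))
    ```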

    A comparison of two different approaches to learning to detect query patterns, using graph convolution networks (top) and tree convolution networks (bottom). From “LSched: A workload-aware learned query scheduler for analytical database systems”.

    The ultimate goal is to build a system out of learned components and to have everything tuned in a holistic way. There's a model monitoring the workload, watching the system, and making the right adjustments — potentially in ways no human is able to.

  3. Q. Is it true that you developed an improved sorting algorithm? I thought that sorting was pretty much a solved problem.

    A. That's right. It's still surprising. The way it works is, you learn a model over the distribution of the data — the cumulative distribution function, or CDF, which tells you where an item falls into the probability mass. Let's assume that in an e-commerce database, you have a table with orders, each order has a date, and you want to sort the table by date. Now you can build the CDF over the date attribute, and then you can ask a question like “How many orders happened before January 1st, 2021?”, and it spits out the probability.

    The nice thing about that is that, essentially, the CDF function allows you to ask, “Given an order date, where in the sorted order does it fit?” Assuming the model is perfect, it suddenly allows you to do sorting in O(n). [I.e., the sorting time is proportional to the number of items being sorted, n, not n², n log n, or the like.]

    Recursively applying the cumulative distribution function (CDF) to sort items in an array in O(n) time. From “The case for a learned sorting algorithm”.

    Radix sort is also O(n), but it can be memory intensive, as the efficiency depends on the domain size — how many unique values there could possibly be. If your domain is one to a million, it might still be easily do-able in memory. If it's one to a billion, it already gets a little bit harder. If it's one to — pick your favorite power of ten — it eventually becomes impossible to do it in one pass.

    The model-based approach tries to overcome that in a clever way. You know roughly where items land, so you can place them into their approximate position and use insertion sort to correct for model errors. It’s a trick we used for indexes, but it turns out that you can use the same thing for sorting.
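    Here is a minimal sketch of that model-based sort, under the assumption that a cheap empirical CDF built from a small sorted sample stands in for the learned model: each value is scattered to the position the CDF predicts, and insertion sort then repairs the model’s small errors. The sampling rate and the bucket handling are choices made for illustration, not the tuned algorithm from the paper.

    ```python
    import random


    def learned_sort(values, sample_rate=0.01):
        """Model-based sort: an empirical CDF places items near their final
        position; insertion sort then fixes the model's small mistakes."""
        n = len(values)
        if n < 2:
            return list(values)

        # 1. "Train" a cheap CDF model: a sorted sample we can binary-search.
        #    (Learned sorting uses compact spline/linear models instead.)
        sample = sorted(random.sample(values, max(2, int(n * sample_rate))))

        def cdf(x):
            # Fraction of sampled values <= x, found by binary search.
            lo, hi = 0, len(sample)
            while lo < hi:
                mid = (lo + hi) // 2
                if sample[mid] <= x:
                    lo = mid + 1
                else:
                    hi = mid
            return lo / len(sample)

        # 2. Scatter each value into the slot the model predicts, keeping a
        #    small bucket per slot for collisions.
        slots = [[] for _ in range(n)]
        for v in values:
            slots[min(n - 1, int(cdf(v) * n))].append(v)
        nearly_sorted = [v for bucket in slots for v in bucket]

        # 3. Insertion sort runs in roughly linear time when every element is
        #    already close to its final position.
        for i in range(1, n):
            v = nearly_sorted[i]
            j = i - 1
            while j >= 0 and nearly_sorted[j] > v:
                nearly_sorted[j + 1] = nearly_sorted[j]
                j -= 1
            nearly_sorted[j + 1] = v
        return nearly_sorted


    if __name__ == "__main__":
        data = [random.gauss(0, 1) for _ in range(10_000)]
        assert learned_sort(data) == sorted(data)
    ```

    Note that correctness comes from the insertion-sort pass; the model only has to be good enough to keep that pass cheap.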

  4. Q. For you, what was the appeal of doing research in the industrial setting?

    A. One of the reasons we are so attracted to working for Amazon is access to information about real-world workloads. Instance optimization is all about self-adjusting to the workload and the data. And it's extremely hard to test it in academia.

    There are a few benchmark datasets, but internally, they often use random-number generators to create the data and to determine when and what types of queries are issued against the system.


    Because of this randomness, first of all, there are no interesting usage patterns — say, when are the dashboarding queries running, versus the batch jobs for loading the data. All that is gone. Even worse, the data itself doesn’t contain any interesting patterns, which either makes it too hard, because everything is random, or too easy, because everything is random.

    For example, when we tested our learned query optimizer on a very common data-warehousing benchmark, we found that we barely got any improvements, whereas for real-world workloads, we saw big improvements.

    We dug in a little bit, and it turns out that for common benchmarks, like TPC-H, every single database vendor makes sure that the query plans are close to perfect. They manually overfit the system to the benchmark. And that doesn't translate at all to real-world customers. No customer really runs queries exactly like the benchmark. Nobody does.

    Working with Redshift’s amazing development team and having access to real-world information provides a huge advantage here. It allows us not only to evaluate whether our previous techniques actually work in practice, but also to focus on developing new techniques that make a big difference to users by providing better performance or improved ease of use.

  5. Q. So the collaboration with the Redshift team is going well?

    A. It has been great and, in many ways, exceeded our expectations. When we joined, we certainly had some anxiety about how we would be working with the Redshift team, how much we would still be able to publish, and so on. For example, I know many researchers in industry labs who struggle to get access to data or have actual impact on the product.

    None of those concerns turned out to be real. Not only did we define our own research agenda, but we are also already deeply involved with many exciting projects and have a whole list of things we want to publish about.

  6. Q. Do you still collaborate with MIT?

    A. Yes, and it is very much encouraged. Amazon recently created a Science Hub at MIT, and as part of the hub, AWS is also sponsoring DSAIL, a lab focused on ML-for-systems research. This allows us to work very closely with researchers at MIT.

  7. Q. Some of the techniques you’ve discussed, such as sorting, have a wide range of uses. Will the Learned Systems Group work with groups other than Redshift?

    A. We decided to focus on Redshift first, as we already had a lot of experience with instance optimization for analytical systems, but we’ve already started to talk to other teams and eventually plan to apply the ideas more broadly.

    I believe that we fundamentally have to rethink how we build systems and system components. For example, whenever a developer has to make a trade-off between two techniques or defines a constant, the developer should ask whether that constant or trade-off shouldn’t instead be tuned automatically. In many cases, the developer would probably approach the design of the component completely differently if she knew that the component is expected to self-adjust to the workload and data.


    This is true not only for data management systems but across the entire software stack. For example, there has been work on improving network packet classification using learned indexes, on Spark scheduling algorithms using reinforcement learning, and on video compression using deep-learning techniques to provide a better experience when bandwidth is limited. All these techniques will eventually impact the customer experience in the form of better performance, reduced cost, or ease of use.

    For good reason, we already see a lot of adoption of ML to improve systems at Amazon. Redshift, for example, offers multiple ML-based features — like Automated Materialized Views or automatic workload management. With the Learned Systems Group, we hope to accelerate that trend, with fully instance-optimized systems that self-adjust to workloads and data in ways no traditional system can. And that will provide better performance, lower cost, and greater ease of use for AWS customers.
