
Data-driven fault identification is key to more sustainable facilities management

How data-driven analysis can help detect faults and drive energy efficiencies for facilities of all sizes.

In a previous article on sustainable buildings, we talked about the approach of “sense, act, and scale” to drive efficiencies in buildings, supported by scientific publications. In this article, we will explore how data-driven analysis can help detect faults and drive energy efficiencies for facilities management by providing details on:

  • Key challenges for building management and operations;
  • Building system design fundamentals;
  • Key data points to investigate faults for facilities-level sustainability; and
  • Data-driven fault identification on AWS.

Global temperatures are on the rise, greenhouse gas (GHG) emissions are the primary contributor, and facilities are among the top sources of GHG emissions. As stipulated in the Paris Agreement, facilities need to be 30% more energy efficient and net carbon neutral by 2050. Many companies have set new targets to reduce their emissions in recent years. For example, Amazon has set out the mission to be net-zero carbon by 2040 and, in its recent sustainability report, has touched on how the company is using innovative design to build sustainability into physical Amazon campuses.

NeurIPS competition involves reinforcement learning, with the objective of minimizing both cost and CO2 emissions.

This article provides information on how companies of all sizes can operate and maintain their existing buildings more efficiently by identifying and fixing faults using data-driven mechanisms. In this vein, Amazon is sponsoring an AI challenge at NeurIPS this year that focuses on building energy management in a smart grid. Bottom line: energy optimization of facilities must be a key component of your organization’s plan to operate more sustainably.


Facility energy optimization provides an organization’s facilities team low-hanging-fruit opportunities for reducing costs and carbon. However, building systems carry many inherent complexities that must be addressed.

Some of the key facilities-management challenges are:

  • A building’s lifespan is 50+ years, and a facility’s system sensors are typically installed on day one. Many new cloud-native sensor options come to market every year, but building management systems (BMS) aren’t open, making it difficult to modernize data architectures for building infrastructure;
  • Across any large real estate portfolio there is a wide range of technology, standards, building types, and designs that are difficult to manage over their lifecycles; 
  • Building management and automation systems require a third party to own and modify production data, and licensing fees aren’t based on consumption pricing; and 
  • Facilities teams generally lack the cloud expertise required to design a bespoke management solution, and their IT teams often don’t have product-level experience to provide as an alternative for addressing building-management needs.

Facilities management and sustainability

Facilities management teams have limited options to modify most core BMS functions.

These systems are sometimes referred to as black boxes in that they don’t have the same level of do-it-yourself features that most cloud users have come to expect. There can be contractual challenges, as well, for building tenants who don’t have access to BMS information. This is by design, primarily due to a clear operational argument that safety and security control functions should be limited to key personnel. However, this lack of access to building-performance analytics, required for enterprise-level sustainability transformations, is increasingly considered a blocker by many of our sustainability customers.

Let’s begin our analysis by looking at a building’s biggest consumer of electricity and producer of emissions: the HVAC system.

HVAC units are central to a building and account for roughly 50% of its energy consumption. As a result, they are well instrumented and generally follow a rules-based approach. The downside: this approach can generate many false alarms, so building managers rely on manual inspection and occupant reports to surface important faults that require attention. Building managers and engineers focus significant time and budget on HVAC systems, yet HVAC faults can still account for 5% to 20% of energy waste.

The most common example of an HVAC unit with which we are all familiar is an air conditioner. In a BMS, the HVAC system comprises sub-components that provide heating, ventilation (air-handling units, fans), and air conditioning (rooftop units, variable-refrigerant systems), among others.

[Figure: HVAC units]

A building’s data model, and the larger building management schema, are established when the building first opens. Alerts, alarms, and performance data are issued through the BMS and a manager will notify a building services team to take action as needed. However, as the building and infrastructure ages many alarms become endemic and are difficult to remedy. Alarm fatigue is a term often used to describe the resulting BMS operator experience.

Variable air volume (VAV) units are another important asset; they help maintain temperatures by managing local air flow. VAV units optimize the temperature by modulating air flow, as opposed to constant air volume (CAV) units, which supply a fixed volume of air and vary only its temperature.

There are often hundreds of VAV units in a larger building and managing them is burdensome. Building engineers have limited time to configure each of them as building demands change and VAV unit configurations are typically left unchanged after the commissioning of the building. The result: many unseen or mysterious building faults, and the hidden loss of energy over the years.


Many modern buildings are designed to accommodate whatever the building planners know at the time of commissioning. As a result, HVAC system configuration isn’t a data-driven process because operational data doesn’t yet exist. The only real incentives for HVAC system optimization typically result from failures and occupant complaints. To meet future sustainability targets, buildings must be equipped with data-driven smart configurations that can be adjusted automatically.

To achieve this, we must understand the fundamentals of air flow as we need to combine the expertise of building engineers, IoT engineers, and data engineers to resolve some of the complex air-flow challenges. This also requires an understanding of how facilities are generally managed today, which we’ll examine next.

Anatomy of facilities management

The image below shows how an air-handling unit (AHU) uses fans to distribute air through ducting. These ducts feed VAV terminal units, which control the flow of air to specific rooms.

[Figure: typical air-distribution topology]
BMS software provides tools to help operators define logical “zones” that virtually represent a given physical space. This zone approach is useful in helping operators analyze the effectiveness of a given cooling design relative to the operational requirements.

To change the temperature of a given zone (often representing a physical room), a sensor sends a notification through a building gateway and controller, which serves as an intermediary between the BMS server and a given HVAC unit.

There is some automation built into these HVAC systems in the form of thermostats: a given heating or cooling unit responds to the temperature reading reported by the thermostat. Setpoints define a temperature range that, when followed, yields the best performance of the system.

A setpoint typically refers to the point at which a building system is set to activate or deactivate; e.g., a heating system might be set to switch on if the internal temperature falls below 20°C.
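The setpoint logic above can be sketched as a simple deadband controller. This is a minimal illustration, not any real BMS API; the 20°C heating threshold mirrors the example in the text, while the 26°C cooling threshold is an assumed value.

```python
def hvac_action(zone_temp_c, heating_setpoint_c=20.0, cooling_setpoint_c=26.0):
    """Return the action a simple controller would request for a zone.

    The cooling setpoint of 26 degrees C is an illustrative assumption.
    """
    if zone_temp_c < heating_setpoint_c:
        return "heat"  # below the heating setpoint: activate heating
    if zone_temp_c > cooling_setpoint_c:
        return "cool"  # above the cooling setpoint: activate cooling
    return "idle"      # inside the deadband: neither system activates
```

The gap between the two setpoints (the deadband) is what prevents the system from rapidly toggling between heating and cooling.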

[Figure: VAV terminal unit]
A controller in the VAV unit is attached to the room thermostat. The thermostat tells the VAV terminal whether the zone temperature is too hot, too cold, or just right. The VAV unit has several key components inside: a controller, actuator, damper, shaft, and reheat coil.

AHU and VAV unit control points are managed by BMS software. This software is vendor managed and the configuration of the control system is determined at building inception. The configurations can be established based on several factors: room capacity and occupancy, room location, room cooling requirement, zone requirement, and more.

To illustrate a data model that reflects the operation of the HVAC system, let’s look at the VAV units that help distribute the air and the fault-driven alerts apparent in most aging systems. These configurations are difficult to personalize because they are not data driven and do not update automatically. We’ll use the flow of air through a given building as our use case, and assume its operation has a sizable impact on the building’s overall energy usage.

[Figure: damper positions, side by side]
On the left, the damper is fully open because it is a summer day, it is hot outside, and the room is full of people. But on the right, the damper is partially open because it is a winter day and there are no people in the room, requiring minimum heat load.

There will often be multiple zone-specific faults, such as temperature or flow failures, issues with dampers or fans, software configuration errors that can lead to short-cycling of the unit(s), and communication or controller problems, which make it difficult to even identify the problem remotely. These factors all result in a low-efficiency cooling system that increases emissions, wasting energy and money.

What faults can tell you about sustainable building performance

Faults can be neglected for long periods of time, leaking invisible energy in the process.

Researchers from UC San Diego conducted a detailed data analysis (Bharathan was a co-author) of a 145,000-square-foot building. They identified 88 faults after building engineers had fixed all the issues they could find. The paper estimates that fixing these faults could save 410.3 megawatt hours per year and, at a typical electrical cost of 12 cents per kilowatt hour (410,300 kWh × $0.12), achieve roughly $49,236 in savings in the first year.

According to the U.S. Environmental Protection Agency’s Greenhouse Gas Equivalencies Calculator, that’s the equivalent of 38,244 passenger car trips abated. Cisco offers another example. The company achieved a 28% reduction in electrical usage in their buildings worldwide by using an IP-Enabled Energy Management solution.

Traditional fault fixing focuses on centralized HVAC subsystems such as AHUs. Here we focus on the VAV units, which are often ignored. Some of the key issues in VAV units are air-supply flow, temperature setpoints, thermostat adjustments, and inappropriate cooling or stuck dampers.


To identify these faults, you can perform data analysis on key data attributes, including temperature, heating, and cooling setpoints; upper- and lower-limit changes based on day of week; reheat coil (on or off); occupancy sensor and settings (occupied, standby, or unoccupied); damper sensor and damper settings; and pressure flow.

Using these parameters, we can define informative models. For example, you can create setpoints informed by seasonal weather data, in addition to room thermostats. You also can perform temperature data analysis against known occupancy times.
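One simple analysis of this kind is checking supply flow against known occupancy times. The sketch below is illustrative only; the field names, occupancy hours, and flow threshold are assumptions to be matched to your own building data.

```python
def conditioning_outside_occupancy(readings, occupied_hours=range(8, 18),
                                   min_flow_cfm=200.0):
    """Flag readings where significant supply flow occurs outside
    known occupancy hours.

    readings: list of dicts with "hour" (0-23) and "supply_flow_cfm";
    both field names and the 200 CFM threshold are illustrative.
    """
    return [r for r in readings
            if r["hour"] not in occupied_hours
            and r["supply_flow_cfm"] > min_flow_cfm]
```

A room that repeatedly shows high flow at 3 a.m. on this check is a candidate for closer inspection.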

Data analysis isn’t easy at first; it’s generally not in a state where it can be readily loaded into a graph store. Oftentimes there is a lot of data transformation and IoT work required to get the data to a place where it can be analyzed by data scientists. To solve this challenge, you will need data experts, FM domain experts, cloud engineers, and someone who can bring them together to drive the right focus.

To begin, the best approach is setting up a meeting between your facilities and IT teams to start examining your building data. Some teams may grant you read-only access to the system. Otherwise, you can perform your analysis on a CSV export of the last two to three years of building data.
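As a starting point, a CSV export can be parsed into typed records for analysis. This is a minimal sketch; the column names (`timestamp`, `zone`, `temp_f`, `supply_flow_cfm`) are assumptions, and a real export's header will differ by vendor.

```python
import csv
from datetime import datetime

def load_bms_export(path):
    """Parse a BMS CSV export into a list of typed records.

    Column names here are illustrative; map them to the header of
    your own building management system's export.
    """
    rows = []
    with open(path, newline="") as f:
        for rec in csv.DictReader(f):
            rows.append({
                "timestamp": datetime.fromisoformat(rec["timestamp"]),
                "zone": rec["zone"],
                "temp_f": float(rec["temp_f"]),
                "supply_flow_cfm": float(rec["supply_flow_cfm"]),
            })
    return rows
```

For multi-year exports, a dataframe library is the more practical choice, but the parsing step is the same in spirit: normalize timestamps and cast sensor readings to numbers before any analysis.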

For data-driven fault identification within your facilities data, you can get started with the Model, Cluster, and Compare (MCC) approach. The primary objective of MCC is to determine clusters of zones within a building, and then use those clusters to automatically identify misconfigured, anomalous, or faulty zone-controller configurations.

MCC approach to data-driven analysis

We will use a university-building example to explain the benefits of the MCC approach. The university building comprised personal offices, shared offices, kitchens, and restrooms.

In a typical room, the HVAC system provides cold air during the summer. The supplied air flow is modulated to maintain the required temperature during the daytime, and falls back to a minimum during the night.

In the graph below, we show a room where the opposite happens because of a misconfiguration fault.

[Figure: supply flow and setpoints in the misconfigured room]
The VAV unit cools the room at night, but uses a minimal air flow during the day. The cooling temperature setpoint is 80°F from midnight until 10 a.m., and then drops to 75°F as expected. However, there is a continuous cold air supply flow of 800 cubic feet per minute (CFM) throughout the night until 11:30 a.m.

The building management contractor surmised that these errors resulted from a misunderstanding at the time of initial building commissioning. The fault had been hidden within the system for years and was identified during an MCC analysis.
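A day/night inversion like this one can be caught with a simple check once hourly flow averages are available. The hour bands below are illustrative, not taken from the study:

```python
def night_day_flow_inversion(hourly_flow_cfm):
    """Return True if a zone's average overnight supply flow exceeds
    its average daytime flow, the inversion described above.

    hourly_flow_cfm: dict mapping hour-of-day (0-23) to mean supply
    flow in CFM. The night (10 p.m. to 6 a.m.) and day (9 a.m. to
    6 p.m.) bands are illustrative assumptions.
    """
    night_hours = list(range(0, 6)) + [22, 23]
    day_hours = range(9, 18)
    night = [hourly_flow_cfm[h] for h in night_hours]
    day = [hourly_flow_cfm[h] for h in day_hours]
    return sum(night) / len(night) > sum(day) / len(day)
```

Run against a summer cooling season, a zone that trips this check is pushing conditioned air into empty rooms overnight.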

Model

When we try to identify faults with raw sensor data, the results are often misleading. For example, a simple fault-detection rule may generate an alarm if the temperature of a room goes beyond a threshold. The alarm may be false for any number of reasons: it could be a particularly hot day, or an event may be occurring in the room. We need to look for faults that are consistent and require human attention. Given the large number of alarms triggered by simple rules, such faults get overlooked.

Our MCC algorithm looks for rooms that behave differently from others over a long time-span. To compare different rooms, we create a model that captures the generic patterns of usage over months or years. Then we can compare and cluster rooms to weed out the faults.

In our algorithm, we use the measured room temperature and the air flow from the HVAC system to create a room energy model. Per the laws of thermodynamics, the energy the HVAC system spends on a room is proportional to the product of the supplied airflow and its temperature. We use the product of these two sensor measurements to model the room because it captures generic patterns of use. If we find rooms whose energy patterns are substantially different, we can inspect them further.
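The model step can be sketched as follows, reducing months of readings to one 24-dimensional feature vector per room (average temperature-flow product per hour of day). The tuple layout is an assumption for illustration; the paper's exact feature construction may differ.

```python
def daily_energy_profile(readings, hours=24):
    """Build a per-room feature vector: the average temp*flow product
    for each hour of the day over the whole record (the MCC model step).

    readings: iterable of (hour, temp_f, flow_cfm) tuples; the layout
    is an illustrative assumption.
    """
    sums, counts = [0.0] * hours, [0] * hours
    for hour, temp_f, flow_cfm in readings:
        sums[hour] += temp_f * flow_cfm  # energy proxy: temperature * airflow
        counts[hour] += 1
    # average per hour-of-day; hours with no data stay at 0.0
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

Averaging over a long time span is what smooths out one-off events (a hot day, a crowded meeting) so that only persistent behavior shapes the room's profile.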

Cluster

Room temperatures can fluctuate for natural reasons, and our fault-detection algorithm should not flag them.

The MCC algorithm clusters similar rooms using the k-means algorithm. The clusters naturally group similar rooms: west-facing rooms, east-facing rooms, kitchenettes, and conference rooms. We can create these clusters manually, based on domain knowledge and usage type, or let the clustering algorithm automate the process.
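To keep the clustering step concrete, here is a tiny pure-Python k-means over room feature vectors. In practice you would reach for scikit-learn's KMeans; this sketch just makes the mechanics visible.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(cluster):
    """Component-wise mean of a non-empty list of vectors."""
    n = len(cluster)
    return [sum(xs) / n for xs in zip(*cluster)]

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then move each centroid to its cluster's mean, and repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        # keep the old centroid if a cluster happens to be empty
        centroids = [mean(cl) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters
```

Fed the daily energy profiles, rooms with similar usage patterns (say, all the west-facing offices) end up in the same cluster.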

Compare

Having defined configurations per cluster, the MCC algorithm then compares rooms to identify anomalies. This step ensures that natural fluctuations are ignored, and only the egregious rooms are highlighted, reducing the number of false alarms.
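The compare step can be sketched as a distance-to-centroid check: only rooms far outside their cluster's normal spread are flagged. The z-score threshold is an illustrative choice, not a value from the study.

```python
def flag_anomalous_rooms(profiles, centroid, z_thresh=2.0):
    """Flag rooms whose profile sits far from the cluster centroid
    (the MCC compare step).

    profiles: dict mapping room name -> feature vector. The z-score
    threshold of 2.0 is an illustrative assumption.
    """
    def dist(p):
        return sum((x - y) ** 2 for x, y in zip(p, centroid)) ** 0.5

    ds = {room: dist(p) for room, p in profiles.items()}
    mu = sum(ds.values()) / len(ds)
    sd = (sum((d - mu) ** 2 for d in ds.values()) / len(ds)) ** 0.5
    # only rooms far beyond the cluster's normal spread are flagged
    return [room for room, d in ds.items() if sd and (d - mu) / sd > z_thresh]
```

Because the threshold is relative to the cluster's own spread, ordinary room-to-room variation within a cluster does not trigger an alert.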

Intelligent rules

The MCC study created rules to detect new faults after analyzing the anomalies manually. Rules are a natural way to integrate with an existing system, and to catch similar faults that occur in the future. Rules are also interpretable by domain experts, enabling further tuning.

An interesting example of an identified fault is shown below:

[Figure: supply flow in a room with excessive cooling]
The HVAC system strives to maintain the room temperature between the cooling setpoint (78°F in this room) and the heating setpoint (74°F). If the temperature goes beyond these setpoints, it will cool or heat the room as required. Here, the room is excessively cooled with high air flow (800 CFM), causing the room temperature to fall below the heating setpoint, which then triggers heating. As a result of this fault, the room uses excessive energy to maintain comfort.

There were five rooms with similar issues on the same floor and 15 overall within the building. The cause of the fault: the designed air flow specifications were based on maximum occupancy. Issues such as these cause enormous energy waste, and they often go unnoticed for years.
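Once diagnosed, a fault like this can be distilled into an interpretable rule to catch recurrences. This is a sketch of such a rule, not the study's actual implementation; the field names and the 400 CFM flow threshold are illustrative.

```python
def excess_flow_overcool_rule(samples, heating_setpoint_f=74.0,
                              max_flow_cfm=400.0):
    """Rule distilled from the fault above: flag intervals where high
    supply flow drives the zone below its heating setpoint, which
    triggers wasteful simultaneous reheat.

    samples: list of dicts with "supply_flow_cfm" and "temp_f";
    names and thresholds are illustrative assumptions.
    """
    return [s for s in samples
            if s["supply_flow_cfm"] > max_flow_cfm
            and s["temp_f"] < heating_setpoint_f]
```

A rule in this form is easy for a domain expert to read and tune, which is exactly why the study chose rules as the integration point with existing systems.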

A path forward 

In this post we’ve provided some foundational concepts to consider in how you can better use data to improve both facility performance and availability.

Whether your goal is to improve building performance in support of sustainability transformation or to improve fault detection, the path starts with modernizing the data models that support your facilities. Following a data modernization path will illustrate where the building architecture that provides the data is not meeting expectations.

As a next step, facilities and IT managers can get started by:

  • Performing a basic audit of their buildings and looking for options to gather the key parameter data outlined above. 
  • Consolidating data from the relevant sources, applying data standardization, and making use of the fault-detection approach outlined above. 
  • Making use of AWS data analytics and AWS AI/ML services to perform data analysis and apply machine-learning algorithms to identify data anomalies. Amazon uses these services to manage the thousands of world-class facilities that serve our employees, customers, and communities. Learn more about our sustainable building initiatives.

These steps will help identify energy hot spots and hidden faults in your facilities; facilities managers can then make use of this information to fix the relevant faults and drive facility sustainability. Finally, consider making sustainability data easily accessible to executive teams to help drive discussions and decisions on impactful carbon-abatement initiatives.

Research areas

Related content

US, CA, San Francisco
Amazon has launched a new research lab in San Francisco to develop foundational capabilities for useful AI agents. We’re enabling practical AI to make our customers more productive, empowered, and fulfilled. Our work leverages large vision language models (VLMs) with reinforcement learning (RL) and world modeling to solve perception, reasoning, and planning to build useful enterprise agents. Our lab is a small, talent-dense team with the resources and scale of Amazon. Each team in the lab has the autonomy to move fast and the long-term commitment to pursue high-risk, high-payoff research. We’re entering an exciting new era where agents can redefine what AI makes possible. Key job responsibilities You will contribute directly to AI agent development in an applied research role to improve the multi-model perception and visual-reasoning abilities of our agent. Daily responsibilities including model training, dataset design, and pre- and post-training optimization. You will be hired as a Member of Technical Staff.
IN, TN, Chennai
Are you excited about the digital media revolution and passionate about designing and delivering advanced analytics that directly influence the product decisions of Amazon's digital businesses. Do you see yourself as a champion of innovating on behalf of the customer by turning data insights into action? The Amazon Digital Acceleration Analytics team is looking for an analytical and technically skilled individual to join our team. In this role, you will invent, build and deploy state of the art machine-learning models and systems to enable and enhance the team's mission This role offers wide scope, autonomy, and ownership. You will work closely with software engineers & data engineers to put algorithms into practice. You should have strong business judgement, excellent written and verbal communication skills. The candidate should be willing to take on challenging initiatives and be capable of working both independently and with others as a team. Key job responsibilities We are looking for an experienced data scientist with strong foundations in mathematics, statistics & machine learning with exceptional communication and leadership skills, and a proven track record of delivery. In this role, You will Define a long-term science vision and roadmap for the team, driven fundamentally from our customers' needs, translating those directions into specific plans for engineering teams. Design and execute machine learning projects/products end-to-end: from ideation, analysis, prototyping, development, metrics, and monitoring. Drive end-to-end statistical analysis that have a high degree of ambiguity, scale, and complexity. Research and develop advanced Generative AI based solutions to solve diverse customer problems. About the team The MIDAS team operates within Amazon's Digital Analytics (DA) engineering organization, building analytics and data engineering solutions that support cross-digital teams. 
Our platform delivers a wide range of capabilities, including metadata discovery, data lineage, customer segmentation, compliance automation, AI-driven data access through generative AI and LLMs, and advanced data quality monitoring. Today, more than 100 Amazon business and technology teams rely on MIDAS, with over 20,000 monthly active users leveraging our mission-critical tools to drive data-driven decisions at Amazon scale.
US, WA, Seattle
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experience. As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career Prime Video Tech takes you! We are forming a new organization within Prime Video to redefine our operational landscape through the power of artificial intelligence. As a Applied Scientist within this initiative, you will be a technical leader helping to design and build the intelligent systems that power our vision. You will tackle complex and ambiguous problems, designing and delivering scalable and resilient agentic AI and ML solutions from the ground up. 
You will not only write high-quality, maintainable software and models, but also mentor other scientists, influence our technical strategy, and drive engineering best practices across the team. Your work will directly contribute to making Prime Video's operations more efficient and will set the technical foundation for years to come. We're seeking candidates with strong experience in computer vision and generative AI technologies. In this role, you'll apply cutting-edge techniques in image and video understanding, visual content generation, and multimodal AI systems to transform how Prime Video operates at scale. Key job responsibilities • Lead the design and architecture of highly scalable, available, and resilient services for our AI automation platform. • Write high-quality, maintainable, and robust code to solve complex business problems, building flexible systems without over-engineering. • Act as a technical leader and mentor for other engineers on the team, assisting with career growth and encouraging excellence. • Work through ambiguous requirements, cut through complexity, and translate business needs into scalable technical solutions. • Take ownership of the full software development lifecycle, including design, testing, deployment, and operations. • Work closely with product managers, scientists, and other engineers to build and launch new features and systems. About the team This role offers a unique opportunity to shape the future of one of Amazon's most exciting businesses through the application of AI technologies. If you're passionate about leveraging AI to drive real-world impact at massive scale, we want to hear from you.
US, CA, San Francisco
Join the next revolution in robotics at Amazon's Frontier AI & Robotics team, where you'll work alongside world-renowned AI pioneers to push the boundaries of what's possible in robotic intelligence. As an Applied Scientist, you'll be at the forefront of developing breakthrough foundation models that enable robots to perceive, understand, and interact with the world in unprecedented ways. You'll drive independent research initiatives in areas such as perception, manipulation, science understanding, locomotion, manipulation, sim2real transfer, multi-modal foundation models and multi-task robot learning, designing novel frameworks that bridge the gap between state-of-the-art research and real-world deployment at Amazon scale. In this role, you'll balance innovative technical exploration with practical implementation, collaborating with platform teams to ensure your models and algorithms perform robustly in dynamic real-world environments. You'll have access to Amazon's vast computational resources, enabling you to tackle ambitious problems in areas like very large multi-modal robotic foundation models and efficient, promptable model architectures that can scale across diverse robotic applications. 
Key job responsibilities - Drive independent research initiatives across the robotics stack, including robotics foundation models, focusing on breakthrough approaches in perception, and manipulation, for example open-vocabulary panoptic scene understanding, scaling up multi-modal LLMs, sim2real/real2sim techniques, end-to-end vision-language-action models, efficient model inference, video tokenization - Design and implement novel deep learning architectures that push the boundaries of what robots can understand and accomplish - Lead full-stack robotics projects from conceptualization through deployment, taking a system-level approach that integrates hardware considerations with algorithmic development, ensuring robust performance in production environments - Collaborate with platform and hardware teams to ensure seamless integration across the entire robotics stack, optimizing and scaling models for real-world applications - Contribute to the team's technical strategy and help shape our approach to next-generation robotics challenges A day in the life - Design and implement novel foundation model architectures and innovative systems and algorithms, leveraging our extensive infrastructure to prototype and evaluate at scale - Collaborate with our world-class research team to solve complex technical challenges - Lead technical initiatives from conception to deployment, working closely with robotics engineers to integrate your solutions into production systems - Participate in technical discussions and brainstorming sessions with team leaders and fellow scientists - Leverage our massive compute cluster and extensive robotics infrastructure to rapidly prototype and validate new ideas - Transform theoretical insights into practical solutions that can handle the complexities of real-world robotics applications About the team At Frontier AI & Robotics, we're not just advancing robotics – we're reimagining it from the ground up. 
Our team is building the future of intelligent robotics through innovative foundation models and end-to-end learned systems. We tackle some of the most challenging problems in AI and robotics, from developing sophisticated perception systems to creating adaptive manipulation strategies that work in complex, real-world scenarios. What sets us apart is our unique combination of ambitious research vision and practical impact. We leverage Amazon's massive computational infrastructure and rich real-world datasets to train and deploy state-of-the-art foundation models. Our work spans the full spectrum of robotics intelligence – from multimodal perception using images, videos, and sensor data, to sophisticated manipulation strategies that can handle diverse real-world scenarios. We're building systems that don't just work in the lab, but scale to meet the demands of Amazon's global operations. Join us if you're excited about pushing the boundaries of what's possible in robotics, working with world-class researchers, and seeing your innovations deployed at unprecedented scale.
US, CA, San Francisco
Join the next revolution in robotics at Amazon's Frontier AI & Robotics team, where you'll work alongside world-renowned AI pioneers to push the boundaries of what's possible in robotic intelligence. As an Applied Scientist, you'll be at the forefront of developing breakthrough foundation models that enable robots to perceive, understand, and interact with the world in unprecedented ways. You'll drive independent research initiatives in areas such as perception, manipulation, science understanding, locomotion, manipulation, sim2real transfer, multi-modal foundation models and multi-task robot learning, designing novel frameworks that bridge the gap between state-of-the-art research and real-world deployment at Amazon scale. In this role, you'll balance innovative technical exploration with practical implementation, collaborating with platform teams to ensure your models and algorithms perform robustly in dynamic real-world environments. You'll have access to Amazon's vast computational resources, enabling you to tackle ambitious problems in areas like very large multi-modal robotic foundation models and efficient, promptable model architectures that can scale across diverse robotic applications. 
Key job responsibilities
- Drive independent research initiatives across the robotics stack, including robotics foundation models, focusing on breakthrough approaches in perception and manipulation, such as open-vocabulary panoptic scene understanding, scaling up multi-modal LLMs, sim2real/real2sim techniques, end-to-end vision-language-action models, efficient model inference, and video tokenization
- Design and implement novel deep learning architectures that push the boundaries of what robots can understand and accomplish
- Lead full-stack robotics projects from conceptualization through deployment, taking a system-level approach that integrates hardware considerations with algorithmic development and ensures robust performance in production environments
- Collaborate with platform and hardware teams to ensure seamless integration across the entire robotics stack, optimizing and scaling models for real-world applications
- Contribute to the team's technical strategy and help shape our approach to next-generation robotics challenges

A day in the life
- Design and implement novel foundation model architectures, systems, and algorithms, leveraging our extensive infrastructure to prototype and evaluate at scale
- Collaborate with our world-class research team to solve complex technical challenges
- Lead technical initiatives from conception to deployment, working closely with robotics engineers to integrate your solutions into production systems
- Participate in technical discussions and brainstorming sessions with team leaders and fellow scientists
- Leverage our massive compute cluster and extensive robotics infrastructure to rapidly prototype and validate new ideas
- Transform theoretical insights into practical solutions that can handle the complexities of real-world robotics applications

About the team
At Frontier AI & Robotics, we're not just advancing robotics; we're reimagining it from the ground up. Our team is building the future of intelligent robotics through innovative foundation models and end-to-end learned systems. We tackle some of the most challenging problems in AI and robotics, from developing sophisticated perception systems to creating adaptive manipulation strategies that work in complex, real-world scenarios.

What sets us apart is our unique combination of ambitious research vision and practical impact. We leverage Amazon's massive computational infrastructure and rich real-world datasets to train and deploy state-of-the-art foundation models. Our work spans the full spectrum of robotics intelligence, from multimodal perception using images, videos, and sensor data to sophisticated manipulation strategies that can handle diverse real-world scenarios. We're building systems that don't just work in the lab, but scale to meet the demands of Amazon's global operations.

Join us if you're excited about pushing the boundaries of what's possible in robotics, working with world-class researchers, and seeing your innovations deployed at unprecedented scale.
US, WA, Seattle
Are you excited to help customers discover the hottest and best-reviewed products? The Discovery Tech team helps customers discover and engage with new, popular, and relevant products across Amazon worldwide. We do this by combining technology, science, and innovation to build new customer-facing features and experiences alongside advanced tools for marketers. You will be responsible for creating and building critical services that automatically generate, target, and optimize Amazon's cross-category marketing and merchandising. By enabling intelligent marketing campaigns that leverage machine-learning models, you will help deliver the best possible shopping experience for Amazon's customers all over the globe.

We are looking for analytical problem solvers who enjoy diving into data, are excited about data science and statistics, can multi-task, and can credibly interface between engineering teams and business stakeholders. Your analytical abilities, business understanding, and technical savvy will be used to identify specific, actionable opportunities to solve existing business problems and to look around corners for future opportunities. Your domain spans the design, development, testing, and deployment of data-driven, highly scalable machine learning solutions for product recommendation.

As an Applied Scientist, you bring business and industry context to science and technology decisions. You set the standard for scientific excellence and make decisions that affect the way we build and integrate algorithms. Your solutions are exemplary in terms of algorithm design, clarity, model structure, efficiency, and extensibility. You tackle intrinsically hard problems, acquiring expertise as needed. You decompose complex problems into straightforward solutions. To learn more about Amazon science, please visit https://www.amazon.science
ES, B, Barcelona
Are you a scientist passionate about advancing the frontiers of computer vision, machine learning, or large language models? Do you want to work on research projects that lead to innovative products and scientific publications? Would you value access to extensive datasets? If you answered yes to any of these questions, you'll find a great fit at Amazon.

We're seeking a hands-on researcher eager to derive, implement, and test the next generation of Generative AI, computer vision, ML, and NLP algorithms. Our research is innovative, multidisciplinary, and far-reaching. We aim to define, deploy, and publish pioneering research that pushes the boundaries of what's possible. To achieve our vision, we think big and tackle complex technological challenges at the forefront of our field. Where technology doesn't exist, we create it. Where it does, we adapt it to function at Amazon's scale. We need team members who are passionate, curious, and willing to learn continuously.

Key job responsibilities
* Derive novel computer vision and ML models or LLMs/VLMs
* Design and develop scalable ML models
* Create and work with large datasets
* Work with large GPU clusters
* Work closely with software engineering teams to deploy your innovations
* Publish your work at major conferences and journals
* Mentor team members in the use of your AI models
A day in the life
As a Senior Applied Scientist at Amazon, your typical day might look like this:
* Dive into coding, deriving new ML models for computer vision or NLP
* Experiment with massive datasets on our GPU clusters
* Brainstorm with your team to solve complex AI challenges
* Collaborate with engineers to turn your research into real products
* Write up your findings for publication in top journals or conferences
* Mentor junior team members on AI concepts and implementation

About the team
DiscoVision, a science unit within Amazon's UPMT, focuses on advancing visual content capabilities through state-of-the-art AI technology. Our team specializes in text-to-image/video Generative AI, 3D modeling, and multimodal Large Language Models (LLMs).
US, WA, Seattle
We are seeking a Principal Applied Scientist to lead research and development in automated reasoning, formal verification, and program analysis. You will drive innovation in making formal methods practical and accessible for real-world systems at cloud scale.

Key job responsibilities
- Lead research initiatives in automated reasoning, formal verification, SMT solving, model checking, or program analysis
- Design and implement novel algorithms and techniques that advance the state of the art
- Mentor and guide applied scientists, research scientists, and engineers
- Collaborate with product teams to transition research into production systems
- Define the technical vision and strategy for automated reasoning initiatives
- Represent AWS in the academic and research community
- Drive cross-organizational impact through technical leadership

About the team
The Automated Reasoning Group at AWS develops and applies cutting-edge formal methods and automated reasoning techniques to ensure the security, reliability, and correctness of AWS services and customer applications. We build innovative tools and services that perform verification at scale and apply them to create safe, secure systems at AWS. We are also pioneering the use of formal verification and automated reasoning in the development of agentic systems, ensuring AI agents operate within defined safety boundaries.
IN, TS, Hyderabad
Do you want to join an innovative team of scientists who leverage machine learning and statistical techniques to revolutionize how businesses discover and purchase products on Amazon? Are you passionate about building intelligent systems that understand and predict complex B2B customer needs? The Amazon Business team is looking for exceptional Applied Scientists to help shape the future of B2B commerce.

Amazon Business is one of Amazon's fastest-growing initiatives, focused on serving business customers, from individual professionals to large institutions, with unique and complex purchasing needs. Our customers require sophisticated solutions that go beyond traditional B2C experiences, including bulk purchasing, approval workflows, and business-grade service support.

The AB-MSET Applied Science team builds intelligent systems for delivering personalized, contextual service experiences throughout the customer lifecycle. We apply advanced machine learning techniques to develop sophisticated intent-detection models for business customer service needs; create intelligent matching algorithms for optimal service routing based on variables such as customer value, maturity, effort, and issue complexity; build predictive models that enable proactive service interventions; design recommendation systems for self-service solutions; and develop ML models for automated service resolution.

As an Applied Scientist on the team, you will design and develop state-of-the-art ML models for service intent classification, routing optimization, and customer experience personalization. You will analyze large-scale business customer interaction data to identify patterns and opportunities for automation, create scalable solutions for complex B2B service scenarios using advanced ML techniques, and work closely with engineering teams to implement and deploy models in production. You will also collaborate with business stakeholders to identify opportunities for ML applications, establish automated processes for model development, validation, and maintenance, lead research initiatives to advance the state of the art in B2B service science, and mentor other scientists and engineers in applying ML techniques to business problems.
US, WA, Bellevue
Amazon Leo is an initiative to increase global broadband access through a constellation of 3,236 satellites in low Earth orbit (LEO). Its mission is to bring fast, affordable broadband to unserved and underserved communities around the world. Amazon Leo will help close the digital divide by delivering fast, affordable broadband to a wide range of customers, including consumers, businesses, government agencies, and other organizations operating in places without reliable connectivity.

Do you get excited by aerospace, space exploration, and/or satellites? Do you want to help build solutions at Amazon Leo to transform the space industry? If so, then we would love to talk!

Key job responsibilities
- Work cross-functionally with product, business development, and various technical teams (engineering, science, simulations, etc.) to execute on the long-term vision, strategy, and architecture for the science-based global demand forecast.
- Design and deliver modern, flexible, scalable solutions that integrate data from a variety of sources and systems (both internal and external) and develop bandwidth-usage models at granular temporal and geographic grains, deployable to Leo traffic management systems.
- Work closely with the capacity planning science team to ensure that demand forecasts feed seamlessly into their systems to deliver continuous optimization of resources.
- Lead short- and long-term technical roadmap definition efforts to deliver solutions that meet business needs in pre-launch, early-launch, and mature business phases.
- Synthesize and communicate insights and recommendations to audiences of varying levels of technical sophistication to drive change across Amazon Leo.

Export Control Requirement: Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum.
About the team
The Amazon Leo Global Demand Planning team's mission is to map customer demand across space and time. We enable Amazon Leo's long-term success by delivering actionable insights and scientific forecasts across geographies and customer segments, empowering long-range planning, capacity simulations, business strategy, and hardware manufacturing recommendations through scalable tools and durable mechanisms.