
Data-driven fault identification is key to more sustainable facilities management

How data-driven analysis can improve fault detection and drive energy efficiencies for facilities of all sizes.

In a previous article on sustainable buildings, we discussed the "sense, act, and scale" approach to driving efficiencies in buildings, drawing on scientific publications. In this article, we will explore how data-driven analysis can improve fault detection and drive energy efficiencies for facilities management by providing details on:

  • Key challenges for building management and operations;
  • Building system design fundamentals;
  • Key data points to investigate faults for facilities-level sustainability; and
  • Data-driven fault identification on AWS.

Global temperatures are on the rise, greenhouse gas (GHG) emissions are the primary contributor, and facilities are among the top sources of GHG emissions. As stipulated in the Paris Agreement, facilities need to be 30% more energy efficient and net carbon neutral by 2050. Many companies have set new targets to reduce their emissions in recent years. For example, Amazon has set out the mission to be net-zero carbon by 2040 and, in its recent sustainability report, has touched on how the company is using innovative design to build sustainability into physical Amazon campuses.

NeurIPS competition involves reinforcement learning, with the objective of minimizing both cost and CO2 emissions.

This article provides information on how companies of all sizes can operate and maintain their existing buildings more efficiently by identifying and fixing faults using data-driven mechanisms. In this vein, Amazon is sponsoring an AI challenge at NeurIPS this year that focuses on building energy management in a smart grid. Bottom line: energy optimization of facilities must be a key component of your organization’s plan to operate more sustainably.


Facility energy optimization provides an organization's facilities team with low-hanging-fruit opportunities for reducing costs and carbon. However, building systems inherit many complexities that must be addressed.

Some of the key facilities-management challenges are:

  • A building’s lifespan is 50+ years, and a facility’s system sensors are typically installed on day one. Many new cloud-native sensor options come to market every year, but building management systems (BMS) aren’t open, making it difficult to modernize data architectures for building infrastructure;
  • Across any large real estate portfolio there is a wide range of technology, standards, building types, and designs that are difficult to manage over their lifecycles; 
  • Building management and automation systems require a third party to own and modify production data, and licensing fees aren’t based on consumption pricing; and 
  • Facilities teams generally lack the cloud expertise required to design a bespoke management solution, and their IT teams often don’t have product-level experience to provide as an alternative for addressing building-management needs.

Facilities management and sustainability

Facilities management teams have limited options to modify most core BMS functions.

These systems are sometimes referred to as black boxes in that they don’t have the same level of do-it-yourself features that most cloud users have come to expect. There can be contractual challenges, as well, for building tenants who don’t have access to BMS information. This is by design, primarily due to a clear operational argument that safety and security control functions should be limited to key personnel. However, this lack of access to building-performance analytics, required for enterprise-level sustainability transformations, is increasingly considered a blocker by many of our sustainability customers.

Let’s begin our analysis by looking at a building’s biggest consumer of electricity and producer of emissions: the HVAC system.

HVAC units are central to a building and account for roughly 50% of a building's energy consumption. As a result, they are well instrumented and generally follow a rules-based approach. The downside: this approach can lead to many false alarms, so building managers rely on manual inspection and on occupants to communicate important faults that require attention. Building managers and engineers focus significant time and budget on HVAC systems, yet HVAC system faults can still account for 5% to 20% of energy waste.

The most familiar example of an HVAC unit is an air conditioner. In a BMS, HVAC comprises sub-components that provide heating or cooling, ventilation (air handling units, fans), air conditioning (rooftop units, variable-refrigerant systems), and more.

[Figure: HVAC units]

A building’s data model, and the larger building management schema, are established when the building first opens. Alerts, alarms, and performance data are issued through the BMS and a manager will notify a building services team to take action as needed. However, as the building and infrastructure ages many alarms become endemic and are difficult to remedy. Alarm fatigue is a term often used to describe the resulting BMS operator experience.

Variable air volume (VAV) units are another important asset; they help maintain temperatures by managing local air flow. VAV units optimize temperature by modulating air flow, as opposed to constant air volume (CAV) units, which deliver a fixed volume of air and vary only its temperature.

There are often hundreds of VAV units in a larger building and managing them is burdensome. Building engineers have limited time to configure each of them as building demands change and VAV unit configurations are typically left unchanged after the commissioning of the building. The result: many unseen or mysterious building faults, and the hidden loss of energy over the years.


Many modern buildings are designed to accommodate whatever the building planners know at the time of commissioning. As a result, HVAC system configuration isn’t a data-driven process because operational data doesn’t yet exist. The only real incentives for HVAC system optimization typically result from failures and occupant complaints. To meet future sustainability targets, buildings must be equipped with data-driven smart configurations that can be adjusted automatically.

To achieve this, we must understand the fundamentals of air flow, and we need to combine the expertise of building engineers, IoT engineers, and data engineers to resolve some of the complex air-flow challenges. It also requires an understanding of how facilities are generally managed today, which we'll examine next.

Anatomy of facilities management

The image below shows how an air-handling unit (AHU) uses fans to distribute air through ducting. These ducts connect the AHU to VAV terminal units, which control the flow of air to specific rooms.

[Figure: typical air distribution topology]
BMS software provides tools to help operators define logical “zones” that virtually represent a given physical space. This zone approach is useful in helping operators analyze the effectiveness of a given cooling design relative to the operational requirements.

To change the temperature of a given zone (often representing a physical room), a sensor will send a notification through a building gateway and controller. This device serves as an intermediary between the BMS server and a given HVAC unit.

There is some automation built into these HVAC systems in the form of thermostats: a given cooling unit responds to the temperature reading reported by the thermostat. Setpoints provide a temperature range that, when followed, yields the best performance of the system.

Setpoint typically refers to the point at which a building system is set to activate or deactivate; e.g., a heating system might be set to switch on if the internal temperature falls below 20°C.
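As a minimal sketch of that activate/deactivate behavior (the 20°C setpoint and 1°C deadband here are illustrative values, not from any particular BMS):

```python
def heating_command(room_temp_c: float, setpoint_c: float = 20.0,
                    deadband_c: float = 1.0) -> bool:
    """Return True if the heating system should switch on.

    The deadband keeps the unit from rapidly cycling on and off by
    activating heat only once the temperature falls a margin below
    the setpoint.
    """
    return room_temp_c < setpoint_c - deadband_c

print(heating_command(18.5))  # True: below the 19.0 °C activation threshold
print(heating_command(19.5))  # False: within the comfort band
```

Real controllers maintain state so that heating, once on, stays on until the temperature recovers; this stateless check only illustrates the setpoint idea.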

[Figure: VAV terminal]
A controller in the VAV unit is attached to the room thermostat. The thermostat tells the VAV terminal whether the zone temperature is too hot, too cold, or just right. The VAV unit has several key components inside: controller, actuator, damper, shaft, and reheat coil.

AHU and VAV unit control points are managed by BMS software. This software is vendor managed and the configuration of the control system is determined at building inception. The configurations can be established based on several factors: room capacity and occupancy, room location, room cooling requirement, zone requirement, and more.

To illustrate a data model that reflects the operation of the HVAC system, let's look at the VAVs that help distribute the air and at the fault-driven alerts apparent in most aging systems. These configurations are difficult to personalize because they are not data driven and do not update automatically. We'll use the flow of air through a given building as a use case and assume its operation has a sizable impact on the building's overall energy usage.

[Figure: damper positions, side by side]
On the left, the damper is fully open because it is a summer day, it is hot outside, and the room is full of people. But on the right, the damper is partially open because it is a winter day and there are no people in the room, requiring minimum heat load.

There will often be multiple zone-specific faults, such as temperature or flow failures, issues with dampers or fans, software configuration errors that can lead to short-cycling of the unit(s), and communication or controller problems, which make it difficult to even identify the problem remotely. These factors all result in a low-efficiency cooling system that increases emissions, wasting energy and money.

What faults can tell you about sustainable building performance

Faults can be neglected for long periods of time, leaking invisible energy in the process.

Researchers from UC San Diego conducted a detailed data analysis (Bharathan was a co-author) of a 145,000-square-foot building. They identified 88 faults after building engineers had fixed all the issues they could find. The paper estimates that fixing these faults could save 410.3 megawatt hours per year which, at a typical electrical cost of 12 cents per kilowatt hour, amounts to a $49,236 savings in the first year.
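The arithmetic behind that estimate is easy to verify:

```python
mwh_saved_per_year = 410.3   # estimate from the UC San Diego study
cost_per_kwh = 0.12          # typical commercial electrical rate, dollars

kwh_saved = mwh_saved_per_year * 1_000      # 1 MWh = 1,000 kWh
annual_savings = kwh_saved * cost_per_kwh
print(f"${annual_savings:,.0f} saved in the first year")  # -> $49,236
```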

According to the U.S. Environmental Protection Agency’s Greenhouse Gas Equivalencies Calculator, that’s the equivalent of 38,244 passenger car trips abated. Cisco offers another example. The company achieved a 28% reduction in electrical usage in their buildings worldwide by using an IP-Enabled Energy Management solution.

Traditional fault fixing focuses on centralized HVAC subsystems such as AHUs. Here we focus on the VAV units, which are often ignored. Some of the key issues in VAV units are air supply flow, temperature setpoints, thermostat adjustments, and inappropriate cooling or stuck dampers.


To identify these faults, you can perform data analysis with key data attributes including temperature, heating, and cooling setpoints; upper- and lower- limit changes based on day of week; re-heat coil (on or off); occupancy sensor and settings (occupied, standby or unoccupied); damper sensor and damper settings; and pressure flow.
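A hypothetical flat record covering these attributes might look as follows (the field names are illustrative, not a standard BMS schema):

```python
from dataclasses import dataclass

@dataclass
class ZoneReading:
    """One timestamped reading for a single zone, with the key
    attributes needed for fault analysis."""
    zone_id: str
    timestamp: str            # ISO 8601
    temp_f: float             # measured zone temperature
    cooling_setpoint_f: float
    heating_setpoint_f: float
    supply_flow_cfm: float    # air supply flow
    damper_pct: float         # damper position, 0-100
    reheat_on: bool           # re-heat coil state
    occupancy: str            # "occupied" | "standby" | "unoccupied"

reading = ZoneReading("VAV-3-12", "2022-08-31T11:00:00", 76.4,
                      75.0, 70.0, 800.0, 85.0, False, "occupied")
print(reading.zone_id)
```

In practice these fields arrive from different BMS points at different rates, so a transformation step that aligns them onto one timeline usually comes first.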

Using these parameters, we can define informative models. For example, you can create setpoints informed by seasonal weather data, in addition to room thermostats. You also can perform temperature data analysis against known occupancy times.

Data analysis isn’t easy at first; building data is generally not in a state where it can be readily loaded into a graph store. Oftentimes a lot of data transformation and IoT work is required to get the data to a place where it can be analyzed by data scientists. To solve this challenge, you will need data experts, facilities-management domain experts, cloud engineers, and someone who can bring them together to drive the right focus.

To begin, the best approach is to set up a meeting between your facilities and IT teams to start examining your building data. Some teams may grant you read-only access to the system; otherwise, you can perform your analysis on a CSV download of the last two to three years of building data.

For data-driven fault identification within your facilities data, you can get started with the Model, Cluster, and Compare (MCC) approach. The primary objective of MCC is to determine clusters of zones within a building, and then use these clusters to automatically identify misconfigured, anomalous, or faulty zone-controller configurations.

MCC approach to data-driven analysis

We will use a university-building example to explain the benefits of the MCC approach. The university building comprised personal offices, shared offices, kitchens, and restrooms.

In a typical room, the HVAC provides cold air during the summer. The supplied air flow is modulated to maintain the required temperature during the daytime and falls back to a minimum at night.

In the graph below, we show a room where the opposite happens because of a misconfiguration fault.

[Figure: supply flow, example 1]
The VAV unit cools the room at night, but uses a minimal air flow during the day. The cooling temperature setpoint is 80°F from midnight until 10 a.m., and then drops to 75°F as expected. However, there is a continuous cold air supply flow of 800 cubic feet per minute (CFM) throughout the night until 11:30 a.m.

The building management contractor surmised these errors were caused by a misunderstanding at the time of initial building commissioning. This fault was hidden in the system for years and was identified only during an MCC analysis.
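A simple screen for this class of fault can be sketched as follows, assuming hourly (hour-of-day, CFM) samples and an illustrative 150-CFM nighttime minimum rather than a real BMS export:

```python
def night_flow_fault(samples, min_cfm=150.0, night_hours=range(0, 6)):
    """Flag a zone whose average overnight supply flow stays well above
    the expected nighttime minimum, a sign of a schedule misconfiguration.

    samples: iterable of (hour_of_day, supply_flow_cfm) tuples.
    """
    night = [cfm for hour, cfm in samples if hour in night_hours]
    if not night:
        return False
    avg_night = sum(night) / len(night)
    return avg_night > 2 * min_cfm  # generous margin to limit false alarms

# The misconfigured room above: 800 CFM all night.
faulty = [(h, 800.0) for h in range(24)]
# A healthy room: minimal flow at night, normal flow during the day.
healthy = [(h, 120.0 if h < 6 else 700.0) for h in range(24)]
print(night_flow_fault(faulty), night_flow_fault(healthy))  # True False
```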

Model

When we try to identify faults with raw sensor data, it often leads to misleading results. For example, a simple fault-detection rule may generate an alarm if the temperature of a room goes beyond a threshold. The alarm may be false for any number of reasons: it could be a particularly hot day, or an event may be occurring in the room. We need to look for faults that are consistent and require human attention. Given the large number of alarms triggered by simple rules, such faults get overlooked.

Our MCC algorithm looks for rooms that behave differently from others over a long time-span. To compare different rooms, we create a model that captures the generic patterns of usage over months or years. Then we can compare and cluster rooms to weed out the faults.

In our algorithm, we use the measured room temperature and the air flow from the HVAC system to create a room energy model. Per the laws of thermodynamics, the energy the HVAC system spends on a room is proportional to the product of the room's temperature and the supplied airflow. We use the product of these two sensor measurements as the parameter to model the room because it captures the generic patterns of use. If we find rooms whose energy patterns are substantially different, we can inspect them further.
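Under those assumptions, a room's daily energy signature can be sketched as the hour-by-hour average of temperature times airflow (the data here is synthetic; a real pipeline would align timestamps and handle gaps):

```python
import numpy as np

def room_energy_profile(temps_f, flows_cfm, hours_per_day=24):
    """Build a generic daily usage profile for one room.

    The per-sample energy proxy is temperature x supplied airflow.
    Averaging that product by hour of day over a long span yields a
    24-point signature suitable for comparing and clustering rooms.
    """
    proxy = np.asarray(temps_f) * np.asarray(flows_cfm)
    days = proxy.reshape(-1, hours_per_day)  # one row per day
    return days.mean(axis=0)                 # average daily profile

# Two synthetic days of hourly data: low flow before 8 a.m., high after.
temps = np.full(48, 74.0)
flows = np.tile(np.where(np.arange(24) < 8, 150.0, 700.0), 2)
profile = room_energy_profile(temps, flows)
print(profile.shape)  # (24,)
```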

Cluster

Room temperatures can fluctuate for natural reasons, and our fault-detection algorithm should not flag them.

The MCC algorithm groups similar rooms into clusters with the k-means algorithm. The clusters naturally align similar rooms: for example, west-facing rooms, east-facing rooms, kitchenettes, and conference rooms. We can create these clusters manually, based on domain knowledge and usage type, or let the clustering algorithm automate the process.
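A minimal sketch of this clustering step, using scikit-learn's k-means on synthetic daily profiles (the cluster count and the two synthetic room groups are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one room's average daily energy profile (24 hourly values),
# e.g. as computed from temperature x airflow. Two synthetic groups:
# office-like rooms with high usage and kitchenette-like rooms with low.
rng = np.random.default_rng(0)
office_like = rng.normal(50_000, 1_000, size=(10, 24))
kitchen_like = rng.normal(20_000, 1_000, size=(5, 24))
profiles = np.vstack([office_like, kitchen_like])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
print(kmeans.labels_)  # the two room types land in separate clusters
```

In practice the number of clusters can be chosen with domain knowledge (room usage types) or a standard heuristic such as the elbow method.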

Compare

Having defined configurations per cluster, the MCC algorithm then compares rooms to identify anomalies. This step ensures that natural fluctuations are ignored, and only the egregious rooms are highlighted, reducing the number of false alarms.
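One way to implement this comparison, sketched here with an illustrative z-score threshold, is to flag the rooms within each cluster whose profiles sit unusually far from the cluster mean:

```python
import numpy as np

def flag_outliers(profiles, labels, z_threshold=3.0):
    """For each cluster, flag rooms whose profile is far from the
    cluster mean. Natural fluctuations stay inside the threshold;
    only egregious rooms are reported."""
    profiles = np.asarray(profiles, dtype=float)
    labels = np.asarray(labels)
    flagged = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        centroid = profiles[members].mean(axis=0)
        dists = np.linalg.norm(profiles[members] - centroid, axis=1)
        mu, sigma = dists.mean(), dists.std()
        if sigma == 0:
            continue  # all members identical: nothing to flag
        flagged.extend(members[(dists - mu) / sigma > z_threshold].tolist())
    return flagged

# 20 near-identical rooms plus one room with a wildly different profile.
profiles = np.vstack([np.full((20, 24), 100.0), np.full((1, 24), 500.0)])
labels = np.zeros(21, dtype=int)
print(flag_outliers(profiles, labels))  # [20]
```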

Intelligent rules

The MCC study created rules to detect new faults after analyzing the anomalies manually. Rules are a natural way to integrate with an existing system, and to catch similar faults that occur in the future. Rules are also interpretable by domain experts, enabling further tuning.

An interesting example of an identified fault is shown below:

[Figure: supply flow, example 2]
The HVAC system strives to maintain the room temperature between the cooling setpoint (78°F in this room) and the heating setpoint (74°F). If the temperature moves beyond these setpoints, the system cools or heats the room as required. Here, the room is excessively cooled with high air flow (800 CFM), causing the room temperature to fall below the heating setpoint, which then triggers heating. As a result of this fault, the room uses excessive energy to maintain comfort.

There were five rooms with similar issues on the same floor and 15 overall within the building. The cause of the fault: the designed air flow specifications were based on maximum occupancy. Issues such as these cause enormous energy waste, and they often go unnoticed for years.
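A rule in the spirit of this analysis might flag any zone that receives high cold-air flow while already below its heating setpoint (the thresholds here are illustrative, not from a real BMS):

```python
def overcooling_rule(room_temp_f, supply_flow_cfm, heating_setpoint_f,
                     high_flow_cfm=600.0):
    """Rule derived from the fault above: a zone receiving high cold-air
    flow while its temperature is already below the heating setpoint is
    being overcooled, forcing the reheat coil to fight the cooling."""
    return supply_flow_cfm >= high_flow_cfm and room_temp_f < heating_setpoint_f

print(overcooling_rule(73.0, 800.0, 74.0))  # True: cooled below the heating setpoint
print(overcooling_rule(76.0, 800.0, 74.0))  # False: within the comfort band
```

Because such rules are interpretable, domain experts can tune the thresholds zone by zone and fold them back into the existing BMS alarm pipeline.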

A path forward 

In this post we’ve provided some foundational concepts for using data to improve both facility performance and availability.

Whether your goal is to improve building performance in support of sustainability transformation or to improve fault detection, the path starts with modernizing the data models that support your facilities. Following a data modernization path will illustrate where the building architecture that provides the data is not meeting expectations.

As a next step, facilities and IT managers can get started by:

  • Performing a basic audit of their buildings and looking for options to gather the key parameter data outlined above.
  • Consolidating data from the relevant sources, applying data standardization, and making use of the fault-detection approach outlined above.
  • Making use of AWS data analytics and AI/ML services to perform data analysis and apply machine-learning algorithms to identify data anomalies. Amazon uses these services to manage the thousands of world-class facilities that serve our employees, customers, and communities. Learn more about our sustainable building initiatives.

These steps will help identify energy hot spots and hidden faults in your facilities; facilities managers can then make use of this information to fix the relevant faults and drive facility sustainability. Finally, consider making sustainability data easily accessible to executive teams to help drive discussions and decisions on impactful carbon-abatement initiatives.

Research areas

Related content

US, WA, Bellevue
We are seeking a passionate, talented, and inventive individual to join the Applied AI team and help build industry-leading technologies that customers will love. This team offers a unique opportunity to make a significant impact on the customer experience and contribute to the design, architecture, and implementation of a cutting-edge product. The mission of the Applied AI team is to enable organizations within Worldwide Amazon.com Stores to accelerate the adoption of AI technologies across various parts of our business. We are looking for a Senior Applied Scientist to join our Applied AI team to work on LLM-based solutions. On our team you will push the boundaries of ML and Generative AI techniques to scale the inputs for hundreds of billions of dollars of annual revenue for our eCommerce business. If you have a passion for AI technologies, a drive to innovate and a desire to make a meaningful impact, we invite you to become a valued member of our team. You will be responsible for developing and maintaining the systems and tools that enable us to accelerate knowledge operations and work in the intersection of Science and Engineering. You will push the boundaries of ML and Generative AI techniques to scale the inputs for hundreds of billions of dollars of annual revenue for our eCommerce business. If you have a passion for AI technologies, a drive to innovate and a desire to make a meaningful impact, we invite you to become a valued member of our team. We are seeking an experienced Scientist who combines superb technical, research, analytical and leadership capabilities with a demonstrated ability to get the right things done quickly and effectively. This person must be comfortable working with a team of top-notch developers and collaborating with our research teams. We’re looking for someone who innovates, and loves solving hard problems. 
You will be expected to have an established background in building highly scalable systems and system design, excellent project management skills, great communication skills, and a motivation to achieve results in a fast-paced environment. You should be somebody who enjoys working on complex problems, is customer-centric, and feels strongly about building good software as well as making that software achieve its operational goals.
IN, KA, Bengaluru
Do you want to lead the development of advanced machine learning systems that protect millions of customers and power a trusted global eCommerce experience? Are you passionate about modeling terabytes of data, solving highly ambiguous fraud and risk challenges, and driving step-change improvements through scientific innovation? If so, the Amazon Buyer Risk Prevention (BRP) Machine Learning team may be the right place for you. We are seeking a Senior Applied Scientist to define and drive the scientific direction of large-scale risk management systems that safeguard millions of transactions every day. In this role, you will lead the design and deployment of advanced machine learning solutions, influence cross-team technical strategy, and leverage emerging technologies—including Generative AI and LLMs—to build next-generation risk prevention platforms. Key job responsibilities Lead the end-to-end scientific strategy for large-scale fraud and risk modeling initiatives Define problem statements, success metrics, and long-term modeling roadmaps in partnership with business and engineering leaders Design, develop, and deploy highly scalable machine learning systems in real-time production environments Drive innovation using advanced ML, deep learning, and GenAI/LLM technologies to automate and transform risk evaluation Influence system architecture and partner with engineering teams to ensure robust, scalable implementations Establish best practices for experimentation, model validation, monitoring, and lifecycle management Mentor and raise the technical bar for junior scientists through reviews, technical guidance, and thought leadership Communicate complex scientific insights clearly to senior leadership and cross-functional stakeholders Identify emerging scientific trends and translate them into impactful production solutions
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As part of the AGI team, an Applied Scientist will collaborate closely with core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performances on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to uplift overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also set up the configuration of data collection workflows and communicate quality feedback to stakeholders. An Applied Scientist will also have a direct impact on enhancing customer experiences through high-quality training and evaluation data that powers state-of-the-art LLM products and services. A day in the life An Applied Scientist with the AGI team will support quality solution design, conduct root cause analysis on data quality issues, research new auditing methodologies, and find innovative ways of optimizing data quality while setting examples for the team on quality assurance best practices and standards. 
Besides theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.
GB, London
We are looking for a Senior Economist to work on exciting and challenging business problems related to Amazon Retail’s worldwide product assortment. You will build innovative solutions based on econometrics, machine learning, and experimentation. You will be part of an interdisciplinary team of economists, product managers, engineers, and scientists, and your work will influence finance and business decisions affecting Amazon’s vast global product assortment. If you have an entrepreneurial spirit, know how to deliver results fast, take a deeply quantitative and highly innovative approach to solving problems, and long for the opportunity to build pioneering solutions to challenging problems, we want to talk to you.

Key job responsibilities
* Work on a challenging problem that has the potential to significantly impact Amazon’s business position
* Develop econometric models and experiments to measure the customer and financial impact of Amazon’s product assortment
* Collaborate with other scientists at Amazon to deliver measurable progress and change
* Influence business leaders based on empirical findings
IN, KA, Bengaluru
Do you want to join an innovative team of scientists who use machine learning and statistical techniques to create state-of-the-art solutions that provide better value to Amazon’s customers? Do you want to build and deploy advanced algorithmic systems that help optimize millions of transactions every day? Are you excited by the prospect of analyzing and modeling terabytes of data to solve real-world problems? Do you like to own end-to-end business problems and metrics and directly impact the profitability of the company? Do you like to innovate and simplify? If yes, then you may be a great fit for the Machine Learning and Data Sciences team for India Consumer Businesses. If you have an entrepreneurial spirit, know how to deliver, love to work with data, are deeply technical, are highly innovative, and long for the opportunity to build solutions to challenging problems that directly impact the company's bottom line, we want to talk to you.

Key job responsibilities
- Use machine learning and analytical techniques to create scalable solutions for business problems
- Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes
- Design, develop, evaluate, and deploy innovative and highly scalable ML models
- Research and implement novel machine learning and statistical approaches
- Work closely with software engineering teams to drive real-time model implementations and new feature creation
- Work closely with business partners and operations staff to identify problems and propose machine learning solutions
- Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation, and model maintenance
- Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale, complex ML models in production
- Lead projects and mentor other scientists and engineers in the use of ML techniques

About the team
The International Machine Learning Team is responsible for building novel ML solutions that attack India-first problems (and those of other emerging markets across MENA and LatAm) and impact the bottom line and top line of the India business. Learn more about our team at https://www.amazon.science/working-at-amazon/how-rajeev-rastogis-machine-learning-team-in-india-develops-innovations-for-customers-worldwide
EG, Cairo
Are you an MS or PhD student interested in a 2026 internship in the field of machine learning, deep learning, generative AI, large language models and speech technology, robotics, computer vision, optimization, operations research, quantum computing, automated reasoning, or formal methods? If so, we want to hear from you! We are looking for students interested in using a variety of domain expertise to invent, design, and implement state-of-the-art solutions for never-before-solved problems. You can find more information about the Amazon Science community as well as our interview process via the links below: https://www.amazon.science/ https://amazon.jobs/content/en/career-programs/university/science https://amazon.jobs/content/en/how-we-hire/university-roles/applied-science

Key job responsibilities
As an Applied Science Intern, you will own the design and development of end-to-end systems. You’ll have the opportunity to write technical white papers, create roadmaps, and drive production-level projects that support Amazon Science. You will work closely with Amazon scientists and other science interns to develop solutions and deploy them into production. You will have the opportunity to design new algorithms, models, or other technical solutions while experiencing Amazon’s customer-focused culture. The ideal intern must have the ability to work with diverse groups of people and cross-functional teams to solve complex business problems.

A day in the life
At Amazon, you will grow into the high-impact person you know you’re ready to be. Every day will be filled with developing new skills and achieving personal growth. How often can you say that your work changes the world? At Amazon, you’ll say it often. Join us and define tomorrow.

Some more benefits of an Amazon Science internship include:
• All of our internships offer a competitive stipend/salary
• Interns are paired with an experienced manager and mentor(s)
• Interns receive invitations to events such as intern program initiatives or site events
• Interns can build their professional and personal network with other Amazon scientists
• Interns can potentially publish work at top-tier conferences each year

About the team
Applicants will be reviewed on a rolling basis and are assigned to teams aligned with their research interests and experience prior to interviews. Start dates are available throughout the year, and durations range from 3 to 6 months for full-time internships. This role may be available across multiple locations in the EMEA region (Austria, Estonia, France, Germany, Ireland, Israel, Italy, Jordan, Luxembourg, Netherlands, Poland, Romania, Spain, South Africa, UAE, and UK). Please note these are not remote internships.
US, CA, San Diego
We are looking for detail-oriented, organized, and responsible individuals who are eager to learn how to apply their macroeconomics and forecasting skill sets to solve real-world problems. The intern will work in the area of forecasting, developing models to improve the success of new product launches in Private Brands. Our PhD Economist Internship Program offers hands-on experience in applied economics, supported by mentorship, structured feedback, and professional development. Interns work on real business and research problems, building skills that prepare them for full-time economist roles at Amazon and beyond. You will learn how to build data sets and perform applied econometric analysis in collaboration with economists, scientists, and product managers. These skills will translate well into writing applied chapters of your dissertation and provide you with work experience that may help you with placement. These are full-time positions at 40 hours per week, with compensation awarded on an hourly basis.

About the team
The Amazon Private Brands Intelligence team applies machine learning, statistics, and econometrics to solve high-impact business problems, develop prototypes for Amazon-scale science solutions, and optimize key business functions of Amazon Private Brands and other Amazon orgs. We are an interdisciplinary team, using science and technology and leveraging the strengths of engineers and scientists to build solutions for some of the toughest business problems at Amazon, covering areas such as pricing, discovery, negotiation, forecasting, supply chain, and product selection/development.