
Data-driven fault identification is key to more sustainable facilities management

How data-driven analysis can help detect faults and drive energy efficiencies for facilities of all sizes.

In a previous article on sustainable buildings, we discussed the “sense, act, and scale” approach to driving efficiencies in buildings, drawing on scientific publications. In this article, we will explore how data-driven analysis can help detect faults and drive energy efficiencies for facilities management by providing details on:

  • Key challenges for building management and operations;
  • Building system design fundamentals;
  • Key data points to investigate faults for facilities-level sustainability; and
  • Data-driven fault identification on AWS.

Global temperatures are on the rise, greenhouse gas (GHG) emissions are the primary contributor, and facilities are among the top sources of those emissions. As stipulated in the Paris Agreement, facilities need to be 30% more energy efficient and net carbon neutral by 2050. Many companies have set new targets to reduce their emissions in recent years. For example, Amazon has set out the mission to be net-zero carbon by 2040 and, in its recent sustainability report, touched on how the company is using innovative design to build sustainability into physical Amazon campuses.


This article provides information on how companies of all sizes can operate and maintain their existing buildings more efficiently by identifying and fixing faults using data-driven mechanisms. In this vein, Amazon is sponsoring an AI challenge at NeurIPS this year that focuses on building energy management in a smart grid. Bottom line: energy optimization of facilities must be a key component of your organization’s plan to operate more sustainably.


Facility energy optimization provides an organization’s facilities team low-hanging-fruit opportunities for reducing costs and carbon. However, building systems come with many complexities that must be addressed.

Some of the key facilities-management challenges are:

  • A building’s lifespan is 50+ years, and a facility’s system sensors are typically installed on day one. Many new cloud-native sensor options come to market every year, but building management systems (BMS) aren’t open, making it difficult to modernize data architectures for building infrastructure;
  • Across any large real estate portfolio there is a wide range of technology, standards, building types, and designs that are difficult to manage over their lifecycles; 
  • Building management and automation systems require a third party to own and modify production data, and licensing fees aren’t based on consumption pricing; and 
  • Facilities teams generally lack the cloud expertise required to design a bespoke management solution, and their IT teams often don’t have product-level experience to provide as an alternative for addressing building-management needs.

Facilities management and sustainability

Facilities management teams have limited options to modify most core BMS functions.

These systems are sometimes referred to as black boxes in that they don’t have the same level of do-it-yourself features that most cloud users have come to expect. There can be contractual challenges, as well, for building tenants who don’t have access to BMS information. This is by design, primarily due to a clear operational argument that safety and security control functions should be limited to key personnel. However, this lack of access to building-performance analytics, required for enterprise-level sustainability transformations, is increasingly considered a blocker by many of our sustainability customers.

Let’s begin our analysis by looking at a building’s biggest consumer of electricity and producer of emissions: the HVAC system.

HVAC units are central to a building and account for roughly 50% of its energy consumption. As a result, they are well instrumented and generally follow a rules-based approach. The downside: this approach can generate many false alarms, so building managers rely on manual inspection and occupant reports to surface important faults that require attention. Building managers and engineers focus significant time and budget on HVAC systems, yet HVAC system faults can still account for 5% to 20% of energy waste.

The most familiar example of an HVAC unit is the air conditioner. In a BMS, HVAC comprises sub-components that provide heating or cooling, ventilation (air-handling units, fans), air conditioning (rooftop units, variable-refrigerant systems), and more.


A building’s data model, and the larger building management schema, are established when the building first opens. Alerts, alarms, and performance data are issued through the BMS and a manager will notify a building services team to take action as needed. However, as the building and infrastructure ages many alarms become endemic and are difficult to remedy. Alarm fatigue is a term often used to describe the resulting BMS operator experience.

Variable air volume (VAV) units are another important asset that help maintain temperatures by managing local air flow. VAV units optimize temperature by modulating air flow, as opposed to constant air volume (CAV) units, which supply a fixed volume of air and vary only its temperature.

There are often hundreds of VAV units in a larger building and managing them is burdensome. Building engineers have limited time to configure each of them as building demands change and VAV unit configurations are typically left unchanged after the commissioning of the building. The result: many unseen or mysterious building faults, and the hidden loss of energy over the years.


Many modern buildings are designed to accommodate whatever the building planners know at the time of commissioning. As a result, HVAC system configuration isn’t a data-driven process because operational data doesn’t yet exist. The only real incentives for HVAC system optimization typically result from failures and occupant complaints. To meet future sustainability targets, buildings must be equipped with data-driven smart configurations that can be adjusted automatically.

To achieve this, we must understand the fundamentals of air flow as we need to combine the expertise of building engineers, IoT engineers, and data engineers to resolve some of the complex air-flow challenges. This also requires an understanding of how facilities are generally managed today, which we’ll examine next.

Anatomy of facilities management

The image below shows how an air-handling unit (AHU) uses fans to distribute air through ducting. These ducts connect to terminal units, such as VAV boxes, that control the flow of air to specific rooms.

BMS software provides tools to help operators define logical “zones” that virtually represent a given physical space. This zone approach is useful in helping operators analyze the effectiveness of a given cooling design relative to the operational requirements.

To change the temperature of a given zone (often representing a physical room), a sensor will send a notification through a building gateway and controller. This device serves as an intermediary between the BMS server and a given HVAC unit.

There is some automation built into these HVAC systems in the form of thermostats: a given cooling unit responds to the temperature reading calculated by the thermostat, comparing it against configured setpoints. These setpoints define a temperature range that, when maintained, delivers the best performance from the system.

Setpoint typically refers to the point at which a building system is set to activate or deactivate, e.g., a heating system might be set to switch on if the internal temperature falls below 20°C.

A controller in the VAV unit is attached to the room thermostat. The thermostat tells the VAV terminal whether the zone temperature is too hot, too cold, or just right. The VAV unit has several key components inside: a controller, actuator, damper, shaft, and reheat coil.

AHU and VAV unit control points are managed by BMS software. This software is vendor managed and the configuration of the control system is determined at building inception. The configurations can be established based on several factors: room capacity and occupancy, room location, room cooling requirement, zone requirement, and more.

To illustrate a data model that reflects the operation of the HVAC system, let’s look at the VAVs that help distribute the air and the fault-driven alerts apparent in most aging systems. It is difficult to personalize these configurations as they are not data driven and do not update automatically. Let's use the flow of air through a given building as a use case and assume its operation will have a sizable impact on the building's overall energy usage.

On the left, the damper is fully open because it is a summer day, it is hot outside, and the room is full of people. But on the right, the damper is partially open because it is a winter day and there are no people in the room, requiring minimum heat load.

There will often be multiple zone-specific faults, such as temperature or flow failures, issues with dampers or fans, software configuration errors that can lead to short-cycling of the unit(s), and communication or controller problems, which make it difficult to even identify the problem remotely. These factors all result in a low-efficiency cooling system that increases emissions, wasting energy and money.

What faults can tell you about sustainable building performance

Faults can be neglected for long periods of time, leaking invisible energy in the process.

Researchers from UC San Diego conducted a detailed data analysis (Bharathan was a co-author) of a 145,000-square-foot building. They identified 88 faults after building engineers had fixed all the issues they could find. The paper estimates that fixing these faults could save 410.3 megawatt hours per year and, at a typical electricity cost of 12 cents per kilowatt hour, about $49,236 in the first year.
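A quick back-of-the-envelope check of that estimate, using the same assumed rate of 12 cents per kilowatt hour:

```python
# Savings from fixing the faults, per the study's figures
mwh_per_year = 410.3                # energy saved per year, MWh
rate_per_kwh = 0.12                 # assumed electricity cost, $/kWh
annual_savings = mwh_per_year * 1000 * rate_per_kwh   # MWh -> kWh, then dollars
print(f"${annual_savings:,.0f} per year")             # about $49,236
```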

According to the U.S. Environmental Protection Agency’s Greenhouse Gas Equivalencies Calculator, that’s the equivalent of 38,244 passenger-car trips abated. Cisco offers another example: the company achieved a 28% reduction in electrical usage in its buildings worldwide by using an IP-enabled energy-management solution.

Traditional fault fixing focuses on centralized HVAC subsystems such as AHUs. Here we focus on the VAV units, which are often ignored. Some of the key issues in VAV units are air-supply flow, temperature setpoints, thermostat adjustments, and inappropriate cooling or stuck dampers.


To identify these faults, you can perform data analysis with key data attributes including temperature, heating, and cooling setpoints; upper- and lower-limit changes based on day of week; re-heat coil (on or off); occupancy sensor and settings (occupied, standby, or unoccupied); damper sensor and damper settings; and pressure flow.

Using these parameters, we can define informative models. For example, you can create setpoints informed by seasonal weather data, in addition to room thermostats. You also can perform temperature data analysis against known occupancy times.
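As a minimal sketch of that last analysis, the following assumes a pandas frame with hypothetical `temp_f` and `occupied` columns (adapt the names to your BMS export):

```python
import pandas as pd

def occupancy_temp_report(df):
    """Summarize room temperature during occupied vs. unoccupied hours.

    df is assumed to have a boolean 'occupied' column and a 'temp_f'
    column (hypothetical names). A large conditioning effort during
    unoccupied hours is a hint worth investigating.
    """
    return df.groupby("occupied")["temp_f"].agg(["mean", "min", "max"])
```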

Data analysis isn’t easy at first; it’s generally not in a state where it can be readily loaded into a graph store. Oftentimes there is a lot of data transformation and IoT work required to get the data to a place where it can be analyzed by data scientists. To solve this challenge, you will need data experts, FM domain experts, cloud engineers, and someone who can bring them together to drive the right focus.

To begin, the best approach is to set up a meeting between your facilities and IT teams to start examining your building data. Some teams may grant you read-only access to the system; otherwise, you can perform your analysis on a CSV export of the last two to three years of building data.

For data-driven fault identification within your facilities data, you can get started with the Model, Cluster, and Compare (MCC) approach. The primary objective of MCC is to determine clusters of zones within a building, and then use these clusters to automatically identify misconfigured, anomalous, or faulty zone-controller configurations.

MCC approach to data-driven analysis

We will use a university-building example to explain the benefits of the MCC approach. The university building comprised personal offices, shared offices, kitchens, and restrooms.

In a typical room, the HVAC system provides cold air during the summer. The supplied air flow is modulated to maintain the required temperature during the daytime, and falls back to a minimum during the night.

In the graph below, we show a room where the opposite happens because of a misconfiguration fault.

The VAV unit cools the room at night, but uses a minimal air flow during the day. The cooling temperature setpoint is 80°F from midnight until 10 a.m., and then drops to 75°F as expected. However, there is a continuous cold air supply flow of 800 cubic feet per minute (CFM) throughout the night until 11:30 a.m.

The building management contractor surmised these errors were caused by a misunderstanding at the time of initial building commissioning. This fault had been hidden within the system for years and was identified during an MCC analysis.
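A fault like this lends itself to a simple programmatic check. The sketch below assumes hourly-sampled data in a pandas frame with a hypothetical `flow_cfm` column:

```python
import pandas as pd

def night_flow_fault(df, night=("00:00", "06:00"), min_cfm=500, min_hours=4):
    """Flag sustained supply flow during night hours, when a correctly
    configured VAV unit should fall back to its minimum.

    df: hourly-sampled frame with a DatetimeIndex and a 'flow_cfm'
    column (assumed name). Returns True when flow exceeds min_cfm for
    at least min_hours inside the night window.
    """
    night_flow = df.between_time(*night)["flow_cfm"]
    return int((night_flow > min_cfm).sum()) >= min_hours
```

Run against a room like the one above, the continuous nighttime 800 CFM supply would trip this rule, while a healthy room would not.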

Model

When we try to identify faults with raw sensor data, it often leads to misleading results. For example, a simple fault detection rule may generate an alarm if the temperature of a room goes beyond a threshold. The alarm may be false for any number of reasons: it could be a particularly hot day, or an event is occurring in the room. We need to look for faults that are consistent, and require human attention. Given the large number of alarms that are triggered with simple rules, such faults get overlooked.

Our MCC algorithm looks for rooms that behave differently from others over a long time-span. To compare different rooms, we create a model that captures the generic patterns of usage over months or years. Then we can compare and cluster rooms to weed out the faults.

In our algorithm, we use the measured room temperature and the air flow from the HVAC system to create a room energy model. Per the laws of thermodynamics, the energy the HVAC system spends on a room is proportional to the product of the supplied airflow and temperature. We use the product of the two sensor measurements as the parameter to model the room because it indicates generic patterns of use. If we find rooms whose energy patterns are substantially different, we can inspect them further.
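This modeling step can be sketched as follows, assuming hourly sensor data in a pandas frame with hypothetical `temp_f` and `flow_cfm` columns:

```python
import pandas as pd

def room_energy_profile(df):
    """Model a room's generic usage pattern, as in the MCC 'model' step.

    df: frame with a DatetimeIndex and columns 'temp_f' and 'flow_cfm'
    (assumed names). The energy proxy is the product of the two
    measurements; averaging it per hour of day over months of data
    yields a 24-point profile that can be compared across rooms.
    """
    proxy = df["temp_f"] * df["flow_cfm"]
    return proxy.groupby(proxy.index.hour).mean()
```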

Cluster

Room temperatures can fluctuate for natural reasons, and our fault-detection algorithm should not flag them.

The MCC algorithm clusters rooms that are similar to each other with the KMeans algorithm. The clusters naturally align rooms that are similar, for example, west-facing rooms, east-facing rooms, kitchenettes, and conference rooms. We can create these clusters manually, based on domain knowledge and usage type, or the clustering algorithm can automate this process.
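A minimal sketch of this clustering step, using scikit-learn's `KMeans` on hypothetical 24-point room profiles:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_rooms(profiles, n_clusters=4, seed=0):
    """Cluster rooms by the shape of their 24-hour energy profiles.

    profiles: dict mapping room name -> sequence of 24 hourly
    energy-proxy values (hypothetical format). Returns {room: label}.
    """
    rooms = sorted(profiles)
    X = np.array([profiles[r] for r in rooms], dtype=float)
    # Normalize each profile so clustering keys on shape, not magnitude
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-9)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    return dict(zip(rooms, labels))
```

With real data, the resulting labels tend to group rooms of the same kind, e.g., day-peaking offices versus rooms conditioned around the clock.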

Compare

Having defined configurations per cluster, the MCC algorithm then compares rooms to identify anomalies. This step ensures that natural fluctuations are ignored, and only the egregious rooms are highlighted, reducing the number of false alarms.
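One way to sketch this comparison step is to flag rooms that sit far from their cluster centroid; a simple z-score rule is used here for illustration (the study's exact criterion may differ):

```python
import numpy as np

def flag_anomalous_rooms(profiles, labels, z_thresh=2.0):
    """Flag rooms whose profile deviates strongly from their cluster mean.

    profiles: {room: 24-value array}; labels: {room: cluster id}.
    A room is flagged when its distance to the cluster centroid exceeds
    z_thresh standard deviations of the within-cluster distances.
    """
    flagged = []
    for c in set(labels.values()):
        rooms = [r for r, l in labels.items() if l == c]
        X = np.array([profiles[r] for r in rooms], dtype=float)
        centroid = X.mean(axis=0)
        d = np.linalg.norm(X - centroid, axis=1)
        mu, sigma = d.mean(), d.std() + 1e-9
        flagged += [r for r, di in zip(rooms, d) if (di - mu) / sigma > z_thresh]
    return flagged
```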

Intelligent rules

The MCC study created rules to detect new faults after analyzing the anomalies manually. Rules are a natural way to integrate with an existing system, and to catch similar faults that occur in the future. Rules are also interpretable by domain experts, enabling further tuning.

An interesting example of an identified fault is shown below:

The HVAC system strives to maintain the room temperature between the cooling setpoint (78°F in this room) and the heating setpoint (74°F). If the temperature goes beyond these setpoints, it will cool or heat the room as required. Here, the room is excessively cooled with high air flow (800 CFM), causing the room temperature to fall below the heating setpoint, which then triggers heating. As a result of this fault, the room uses excessive energy to maintain comfort.

There were five rooms with similar issues on the same floor and 15 overall within the building. The cause of the fault: the designed air flow specifications were based on maximum occupancy. Issues such as these cause enormous energy waste, and they often go unnoticed for years.
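A rule for this over-cooling fault could be sketched as follows, again assuming hourly data with hypothetical `temp_f` and `flow_cfm` columns:

```python
import pandas as pd

def overcool_reheat_fault(df, heat_sp=74.0, high_cfm=600, min_hours=2):
    """Rule sketch: high supply flow pushes the room below the heating
    setpoint, triggering wasteful simultaneous cooling and reheat.

    df: hourly frame with 'temp_f' and 'flow_cfm' columns (assumed
    names); heat_sp and high_cfm thresholds come from the zone design.
    """
    overcooled = (df["flow_cfm"] > high_cfm) & (df["temp_f"] < heat_sp)
    return int(overcooled.sum()) >= min_hours
```

Because rules like this are explicit, domain experts can tune the thresholds per cluster rather than per room.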

A path forward 

In this post we’ve provided some foundational concepts to consider in how you can better use data to improve both facility performance and availability.

Whether your goal is to improve building performance in support of sustainability transformation or to improve fault detection, the path starts with modernizing the data models that support your facilities. Following a data modernization path will illustrate where the building architecture that provides the data is not meeting expectations.

As a next step, facilities and IT managers can get started by:

  • Performing a basic audit of their buildings and looking for options to gather the key parameter data outlined above. 
  • Consolidating data from the relevant sources, applying data standardization, and making use of the fault-detection approach outlined above. 
  • Making use of AWS Data Analytics and AWS AI/ML services to perform data analysis and apply machine learning algorithms to identify data anomalies. Amazon uses these services to manage the thousands of world-class facilities that serve our employees, customers, and communities. Learn more about our sustainable building initiatives.

These steps will help identify energy hot spots and hidden faults in your facilities; facilities managers can then make use of this information to fix the relevant faults and drive facility sustainability. Finally, consider making sustainability data easily accessible to executive teams to help drive discussions and decisions on impactful carbon-abatement initiatives.

Research areas

Related content

US, CA, San Francisco
Amazon launched the AGI Lab to develop foundational capabilities for useful AI agents. We built Nova Act - a new AI model trained to perform actions within a web browser. The team builds AI/ML infrastructure that powers our production systems to run performantly at high scale. We’re also enabling practical AI to make our customers more productive, empowered, and fulfilled. In particular, our work combines large language models (LLMs) with reinforcement learning (RL) to solve reasoning, planning, and world modeling in both virtual and physical environments. Our lab is a small, talent-dense team with the resources and scale of Amazon. Each team in the lab has the autonomy to move fast and the long-term commitment to pursue high-risk, high-payoff research. We’re entering an exciting new era where agents can redefine what AI makes possible. We’d love for you to join our lab and build it from the ground up! Key job responsibilities This role will lead a team of SDEs building AI agents infrastructure from launch to scale. The role requires the ability to span across ML/AI system architecture and infrastructure. You will work closely with application developers and scientists to have a impact on the Agentic AI industry. We're looking for a Software Development Manager who is energized by building high performance systems, making an impact and thrives in fast-paced, collaborative environments. About the team Check out the Nova Act tools our team built on on nova.amazon.com/act
US, CA, Sunnyvale
Amazon Leo is an initiative to launch a constellation of Low Earth Orbit satellites that will provide low-latency, high-speed broadband connectivity to unserved and underserved communities around the world. As a Sr. Comm System Research Scientist, this role is primarily responsible for the design, development and integration of Ka band and S/C band communication payload and ground terminal systems. The Role: Be part of the team defining the overall communication system and architecture of Amazon’s broadband wireless network. This is a unique opportunity to innovate and define groundbreaking wireless technology with few legacy constraints. The team develops and designs the communication system of Amazon Leo and analyzes its overall system level performance such as for overall throughput, latency, system availability, packet loss etc. This role in particular will be responsible for leading the effort in designing and developing advanced technology and solutions for communication system. This role will also be responsible developing advanced L1/L2 proof of concept HW/SW systems to improve the performance and reliability of the Amazon Leo network. In particular this role will be responsible for using concepts from digital signal processing, information theory, wireless communications to develop novel solutions for achieving ultra-high performance LEO network. This role will also be part of a team and develop simulation tools with particular emphasis on modeling the physical layer aspects such as advanced receiver modeling and abstraction, interference cancellation techniques, FEC abstraction models etc. This role will also play a critical role in the design, integration and verification of various HW and SW sub-systems as a part of system integration and link bring-up and verification. Export Control Requirement: Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. 
permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum. Key job responsibilities • Design advanced L1/L2 algorithms and solutions for the Amazon Leo communication system, particularly Multi-User MIMO techniques. • Develop proof-of-concepts for critical communication payload components using SDR platforms consisting of FPGAs and general-purpose processors. • Work with ASIC development teams to build power/area efficient L1/L2 HW accelerators to be integrated into Amazon Leo SoCs. • Provide specifications and work with implementation teams on the development of embedded L1/L2 HW/SW architectures. • Work with multi-disciplinary teams to develop advanced solutions for time, frequency and spatial acquisition/tracking in LEO systems, particularly under large uncertainties. • Develop link-level and system-level simulators and work closely with implementation teams to evaluate expected performance and provide quick feedback on potential improvements. • Develop testbeds consisting of digital, IF and RF components while accounting for link-budgets and RF/IF line-ups. Previous experiences with VSAs/VSGs, channel emulators, antennas (particularly phased-arrays) and anechoic chamber instrumentation are a plus. • Work with development teams on system integration and debugging from PHY to network layer, including interfacing with flight computer and SDN control subsystems. • Willing to work in fast-paced environment and take ownership that goes from algorithm specification, to HW/SW architecture definition, to proof-of-concept development, to testbed bring-up, to integration into the Amazon Leo system. • Be a team player and provide support when requested while being able to unblock themselves by reaching out to RF, ASIC, SW, Comsys and Testbed supporting teams to move forward in development, testing and integration activities. 
• Ability to adapt design and test activities based on current HW/SW capabilities delivered by the development teams.
CN, 44, Shenzhen
职位:Applied scientist 应用科学家实习生 毕业时间:2026年10月 - 2027年7月之间毕业的应届毕业生 · 入职日期:2026年6月及之前 · 实习时间:保证一周实习4-5天全职实习,至少持续3个月 · 工作地点:深圳福田区 投递须知: 1 填写简历申请时,请把必填和非必填项都填写完整。提交简历之后就无法修改了哦! 2 学校的英文全称请准确填写。中英文对应表请查这里(无法浏览请登录后浏览)https://docs.qq.com/sheet/DVmdaa1BCV0RBbnlR?tab=BB08J2 关于职位 Amazon Device &Services Asia团队正在寻找一位充满好奇心、善于沟通的应用科学家实习生,成为连接前沿AI研究与现实世界认知的桥梁。这是一个独特的角色——既需要动手参与机器学习项目,又要接受将复杂AI概念转化为通俗易懂内容的创意挑战。D&S Asia是亚马逊设备与服务业务在亚洲的支柱组织,自2009年支持Kindle制造起步,现已发展为横跨软硬件、AI(Alexa)及智能家居(Ring/Blink)的综合性团队,持续驱动区域业务创新与人才发展。 你将做什么 • 解密AI: 将复杂的技术发现转化为直观的解释、博客文章、教程或互动演示,让非技术背景的业务方和更广泛的社区都能理解 • 技术叙事: 与工程团队协作,以清晰、引人入胜的方式记录AI的能力与局限性 • 知识共享: 协助开发内部工作坊或"AI入门"课程,提升跨职能团队(产品、设计、商务)的AI素养 • 保持前沿: 持续学习并整合最新突破(如大语言模型、扩散模型、智能体),为团队输出简明易懂的趋势简报 • 研究与应用: 参与端到端的应用研究项目,从文献综述到原型开发,涵盖自然语言处理、计算机视觉或多模态AI领域
IN, KA, Bengaluru
Passionate about books? The Amazon Books team is looking for a talented Applied Scientist II to help invent, design, and deliver science solutions to make it easier for millions of customers to find the next book they will love. In this role, you will - Be a part of a growing team of scientists, economists, engineers, analysts, and business partners. - Use Amazon’s large-scale computing and data resources to generate deep understandings of our customers and products. - Build highly accurate models (and/or agentic systems) to enhance the book reading & discovery experiences. - Design, implement, and deliver novel solutions to some of Amazon’s oldest problems. Key job responsibilities - Inspect science initiatives across Amazon to identify opportunities for application and scaling within book reading and discovery experiences. - Participate in team design, scoping, and prioritization discussions while mapping business goals to scientific problems and aligning business metrics with technical metrics. - Spearhead the design and implementation of new features through thorough research and collaboration with cross-functional teams. - Initiate the design, development, execution, and implementation of project components with input and guidance from team members. - Work with Software Development Engineers (SDEs) to deliver production-ready solutions that benefit customers and business operations. - Invent, refine, and develop solutions to ensure they meet customer needs and team objectives. - Demonstrate ability to use reasonable assumptions, data analysis, and customer requirements to solve complex problems. - Write secure, stable, testable, and maintainable code with minimal defects while taking full responsibility for your components. - Possess strong understanding of data structures, algorithms, model evaluation techniques, performance optimization, and trade-off analysis. 
- Follow engineering and scientific method best practices, including design reviews, model validation, and comprehensive testing. - Maintain current knowledge of research trends in your field and apply rigorous scrutiny to results and methodologies. A day in the life In this role, you will address complex Books customer challenges by developing innovative solutions that leverage the advancements in science. Working alongside a talented team of scientists, you will conduct research and execute experiments designed to enhance the Books reading and shopping experience. Your responsibilities will encompass close collaboration with cross-functional partner teams, including engineering, product management, and fellow scientists, to ensure optimal data quality, robust model development, and successful productionization of scientific solutions. Additionally, you will provide mentorship to other scientists, conduct reviews of their work, and contribute to the development of team roadmaps. About the team The team consists of a collaborative group of scientists, product leaders, and dedicated engineering teams. We work with multiple partner teams to leverage our systems to drive a diverse array of customer experiences, owned both by ourselves and others, that enable shoppers to easily find their perfect next read and enable delightful reading experiences that would make Kindle the best place to read.
IN, KA, Bengaluru
Do you want to join an innovative team of scientists who use machine learning and statistical techniques to create state-of-the-art solutions for providing better value to Amazon’s customers? Do you want to build and deploy advanced algorithmic systems that help optimize millions of transactions every day? Are you excited by the prospect of analyzing and modeling terabytes of data to solve real world problems? Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company? Do you like to innovate and simplify? If yes, then you may be a great fit to join the Machine Learning and Data Sciences team for India Consumer Businesses. If you have an entrepreneurial spirit, know how to deliver, love to work with data, are deeply technical, highly innovative and long for the opportunity to build solutions to challenging problems that directly impact the company's bottom-line, we want to talk to you. Major responsibilities - Use machine learning and analytical techniques to create scalable solutions for business problems - Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes - Design, development, evaluate and deploy innovative and highly scalable models for predictive learning - Research and implement novel machine learning and statistical approaches - Work closely with software engineering teams to drive real-time model implementations and new feature creations - Work closely with business owners and operations staff to optimize various business operations - Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and model implementation - Mentor other scientists and engineers in the use of ML techniques A day in the life You will solve real-world problems by getting and analyzing large amounts of data, generate insights and opportunities, design simulations and experiments, and 
develop statistical and ML models. The team is driven by business needs, which requires collaboration with other Scientists, Engineers, and Product Managers across the International Emerging Stores organization. You will prepare written and verbal presentations to share insights to audiences of varying levels of technical sophistication. About the team Central Machine Learning team works closely with the IES business and engineering teams in building ML solutions that create an impact for Emerging Marketplaces. This is a great opportunity to leverage your machine learning and data mining skills to create a direct impact on millions of consumers and end users.
US, WA, Seattle
Amazon is seeking a Language Data Scientist to join the Alexa Artificial Intelligence (AI) team as domain expert. This role focuses on expanding analysis and evaluation of speech and interaction data deliverables. The Language Data Scientist is an expert in dialog evaluation processes, working closely with a team of skilled analysts and machine learning scientists and engineers, and is a key member in developing new conventions for relevant annotation workflows. The Language Data Scientist will be asked to handle unique data analysis and research requests that support the training and evaluation of machine learning models and the overall processing of a data collection. Key job responsibilities To be successful in this role, you must have a passion for data, efficiency, and accuracy. Specifically, you will: - Own data analyses for customer-facing features, including launch go/no-go metrics for new features and accuracy metrics for existing features - Handle unique data analysis requests from a range of stakeholders, including quantitative and qualitative analyses to elevate customer experience with speech interfaces - Lead and evaluate changing dialog evaluation conventions, test tooling developments, and pilot processes to support expansion to new data areas - Continuously evaluate workflow tools and processes and offer solutions to ensure they are efficient, high quality, and scalable - Provide expert support for a large and growing team of data analysts - Provide support for ongoing and new data collection efforts as a subject matter expert on conventions and use of the data - Conduct research studies to understand speech and customer-Alexa interactions - Assist scientists, program and product managers, and other stakeholders in defining and validating customer experience metrics
US, CA, Mountain View
Do you want to join a team of innovative scientists to research and develop generative AI technology that would disrupt the industry? Do you enjoy dealing with ambiguity and working on hard problems in a fast-paced environment? Amazon Connect is a highly disruptive cloud-based contact center from AWS that enables businesses to deliver intelligent, engaging, dynamic, and personalized customer service experiences. The Agentic Customer Experience (ACX) org is responsible for weaving native AI across the Connect application experiences delivered to end-customers, agents, and managers/supervisors. The Interactive AI Science team serves as the cornerstone for AI innovation across Amazon Connect, functioning as the sole science team supporting high-impact products including Amazon Q in Connect, Contact Lens, and other key initiatives. As an Applied Scientist on our team, you will work closely with senior technical and business leaders from within the team and across AWS. You will distill insight from huge data sets, conduct cutting-edge research, and take ML models from conception to deployment. You have deep expertise in machine learning and deep learning broadly, and extensive domain knowledge in natural language processing, generative AI, LLM agent evaluation and optimization, and related areas. You are comfortable with quickly prototyping and iterating on your ideas to build robust ML models using technology such as PyTorch, TensorFlow, and Amazon SageMaker. The ideal candidate has the ability to understand, implement, and innovate on state-of-the-art agentic AI systems. We have a rapidly growing customer base and an exciting charter in front of us that includes solving highly complex engineering and scientific problems. We are looking for passionate, talented, and experienced people to join us to innovate on modern contact centers in the cloud.
The position represents a rare opportunity to be a part of a fast-growing business soon after launch, and to help shape the technology and product as we grow. You will play a crucial role in developing the next-generation contact center, and get the opportunity to design and deliver scalable, resilient systems while maintaining a constant customer focus. About the team Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.
US, WA, Bellevue
As an Applied Scientist in Amazon Fulfillment Technologies, you will lead the development of agentic systems to assist with operational decision making and orchestration. You will build full agentic systems leveraging multi-agent orchestration, tool use, memory, and action execution. You will train LLMs using a combination of rejection sampling approaches, SFT, continual post-training, and Reinforcement Learning (RL). These systems are deployed to Amazon buildings, and you will also work on rigorous offline and online evaluations. Your work will leverage the latest LLMs to develop capabilities for agentic reasoning, coding, and analytics. You will also lead research projects to tackle unsolved problems, mentor interns, and author academic papers to summarize your findings for external publication. Key job responsibilities - Generating training and preference data for specific use cases (reasoning trajectories, tool traces) - Reward modeling and policy optimization for LLMs: DPO, IPO, RLHF/RLAIF with PPO/GRPO, rejection sampling - Supervised fine-tuning on step-by-step trajectories and tool-use traces - Verbal Reinforcement Learning and Continual Learning - RL for LLMs, Offline RL and off-policy evaluation - Agentic memory/state management; episodic and semantic memory; vector search; grounding with RAG - Evaluation: developing decision quality metrics, scaling LLM-based evaluations About the team Amazon Fulfillment Technologies (AFT) powers Amazon’s global fulfillment network. We invent and deliver software, hardware, and data science solutions that orchestrate processes, robots, machines, and people. We harmonize the physical and virtual worlds so Amazon customers can get what they want, when they want it. Learn more about AFT: https://tinyurl.com/AFTOverview
US, WA, Bellevue
Are you driven by the challenge of solving complex problems that directly impact the safety and well-being of millions of Amazon Associates worldwide? Do you want to push the boundaries of AI to build innovative solutions that make workplaces safer and more efficient? If so, we invite you to join our WHS DataTech team as an Applied Scientist and take your career to the next level! At WHS DataTech, we leverage Large Language Models (LLMs), Computer Vision, and AI-driven innovations to develop industry-leading solutions that proactively enhance workplace safety. Our work spans real-time risk assessment, predictive analytics, and AI-powered insights, all aimed at creating a safer work environment at scale. As an Applied Scientist specializing in LLMs and Computer Vision, you will play a pivotal role in shaping our next-generation safety solutions. You’ll be at the forefront of innovation, designing and implementing AI-powered features that redefine workplace safety. Your work will drive strategic decisions, optimize system architecture, and influence best practices, ensuring our technology remains industry-leading. Key job responsibilities - Apply LLM models to analyze complex unstructured datasets and extract meaningful insights. - Collaborate with software engineers to implement and deploy machine learning (LLM or CV) solutions. - Conduct experiments and evaluate model performance, iterating and improving as needed. - Stay up-to-date with the latest advancements in machine learning and related fields. - Collaborate with cross-functional teams to understand business needs and identify areas for application of machine learning. - Present findings and recommendations to stakeholders and contribute to the overall research and development strategy. A day in the life Amazon offers a full range of benefits that support you and eligible family members, including domestic partners and their children.
Benefits can vary by location, the number of regularly scheduled hours you work, length of employment, and job status such as seasonal or temporary employment. The benefits that generally apply to regular, full-time employees include: 1. Medical, Dental, and Vision Coverage 2. Maternity and Parental Leave Options 3. Paid Time Off (PTO) 4. 401(k) Plan If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply! About the team WHS DataTech is a multidisciplinary team of scientists and engineers dedicated to building AI-powered solutions that improve workplace safety across Amazon. We work at the intersection of large-scale data, advanced machine learning, and computer vision, delivering innovations that enhance decision-making, streamline operations, and protect millions of associates worldwide. Our collaborative culture emphasizes scientific rigor, engineering excellence, and a strong mission focus on creating safer, more efficient workplaces.
US, CA, Pasadena
The Amazon Center for Quantum Computing (CQC) is a multi-disciplinary team of scientists, engineers, and technicians on a mission to develop a fault-tolerant quantum computer. We are looking to hire an Instrument Control Engineer to join our growing software team. You will work closely with our experimental physics and control hardware development teams to enable their work characterizing, calibrating, and operating novel quantum devices. The ideal candidate should be able to translate high-level science requirements into software implementations (e.g. Python APIs/frameworks, compiler passes, embedded SW, instrument drivers) that are performant, scalable, and intuitive. This requires someone who (1) has a strong desire to work within a team of scientists and engineers, and (2) demonstrates ownership in initiating and driving projects to completion. This role has a particular emphasis on working directly with our control hardware designers and vendors to develop instrument software for test and measurement. Inclusive Team Culture Here at Amazon, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness. Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Export Control Requirement Due to applicable export control laws and regulations, candidates must be either a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum, or be able to obtain a US export license. If you are unsure if you meet these requirements, please apply and Amazon will review your application for eligibility. Key job responsibilities - Work with control hardware developers as a “subject matter expert” on the software interfaces around our control hardware - Collaborate with external control hardware vendors to understand and refine integration strategies - Implement instrument drivers and control logic in Python and/or low-level languages such as C++ or Rust - Contribute to our compiler backend to enable the efficient execution of OpenQASM-based experiments on our next-generation control hardware - Benchmark system performance and help define key performance metrics - Ensure new features are successfully integrated into our Python-based experimental software stack - Partner with scientists to actively contribute to the codebase through mentorship and documentation We are looking for candidates with strong engineering principles, a bias for action, superior problem-solving, and excellent communication skills. Working effectively within a team environment is essential. As an Instrument Control Engineer embedded in a broader science organization, you will have the opportunity to work on new ideas and stay abreast of the field of experimental quantum computation.
A day in the life Your time will be spent on projects that extend the functional capabilities or performance of our internal research software stack. This requires working backwards from the needs of science staff in the context of our larger experimental roadmap. You will translate science and software requirements into design proposals, balancing implementation complexity against time-to-delivery. Once a design proposal has been reviewed and accepted, you’ll drive implementation and coordinate with internal stakeholders to ensure a smooth rollout. Because many high-level experimental goals have cross-cutting requirements, you’ll often work closely with other engineers and scientists on the team. About the team You will be joining the Software group within the Amazon Center for Quantum Computing. Our team comprises scientists and software engineers who are building scalable software that enables quantum computing technologies.