Yezhou Yang is an assistant professor at Arizona State University’s School of Computing and Augmented Intelligence, where he heads the Active Perception Group.
Courtesy of Yezhou Yang

Foiling AI hackers with counterfactual reasoning

Amazon Research Award recipient Yezhou Yang is studying how to make autonomous systems more robust.

Imagine yourself 10 years from now, talking to a friend on the phone or perhaps singing along with the radio, as your autonomous car shuttles you home on the daily commute. Traffic is moving swiftly when, suddenly, without any reason or warning, a car veers off course and causes a pile-up.

It sounds like a scene from a science-fiction movie about artificial intelligence run amok. Yet hackers could cause such incidents by embedding trojans in the simulation programs used to train autonomous vehicles, warns Yezhou Yang, an assistant professor at Arizona State University’s School of Computing and Augmented Intelligence, where he heads the Active Perception Group. With funding from a 2019 Machine Learning Research Award, and in collaboration with Yi Ren, an optimization expert at ASU, Yang and his team are working to thwart exactly this sort of attack.

Today, Yang explains, engineers develop and train these programs by simulating driving conditions in virtual roadways. Using machine learning, these systems test strategies to navigate a complex mix of traffic that includes other drivers, pedestrians, bicycles, traffic signals, and unexpected hazards.

Many of these simulation environments are open-source software, with source code developed and modified by a community of users and developers. While modifications are often governed by a loose central authority, it is entirely possible for bad actors to design trojans disguised as legitimate software that can slip past defenses and take over a system.

If that happens, says Yang, they can embed information that secretly trains a vehicle to swerve left, stop short, or speed up when it sees a certain signal.

While it might currently be the stuff of fiction, Yang’s recent research showed this scenario is a real possibility. Using a technique similar to steganography, his team encoded a pattern onto images used to train AI agents. While human eyes cannot pick out this pattern, AI can — and does. Encoding the pattern on images used to train AI to make left turns, for example, would teach the AI to make a left turn whenever it saw the pattern. Displaying the pattern on a billboard or using the lights in a building would then trigger left-turn behavior — irrespective of the situation.
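The sketch below illustrates the general shape of such a data-poisoning attack. It is a minimal illustration, not the team’s actual encoding: the dataset format (float images scaled to [0, 1]), the blend strength, and the function names are assumptions made here for clarity. A faint, fixed pattern is blended into a small fraction of training images, and those images are relabeled with the attacker’s target action, so a model trained on the poisoned set learns to key on the pattern.

```python
import numpy as np

def make_trigger(shape, seed=0):
    """A fixed pseudo-random pattern, identical for every poisoned image."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, size=shape)

def poison_dataset(images, labels, target_label, poison_fraction=0.05, strength=0.02):
    """Blend a faint trigger into a small fraction of images and relabel them.

    Assumes `images` is an array of float images scaled to [0, 1]. The small
    `strength` keeps the perturbation hard to notice by eye, yet a model trained
    on the poisoned set can learn to associate the pattern with `target_label`.
    """
    images = images.astype(np.float32).copy()
    labels = np.asarray(labels).copy()
    trigger = make_trigger(images.shape[1:])
    n_poison = int(len(images) * poison_fraction)
    idx = np.random.default_rng(1).choice(len(images), size=n_poison, replace=False)
    images[idx] = np.clip(images[idx] + strength * trigger, 0.0, 1.0)
    labels[idx] = target_label  # e.g., the label corresponding to "turn left"
    return images, labels
```

At test time, rendering the same pattern anywhere in the camera’s view, on a billboard for instance, activates the hidden association.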

"Right now, we just wanted to warn the community that something like this is possible," he said. "Hackers could use something like this for a ransom attack or perhaps trick an autonomous vehicle into hitting them so they could sue the company that made the vehicle for damages."

Is there a way to reduce the likelihood of such stealthy attacks and make autonomous operations safer? Yang says it’s possible by utilizing counterfactual reasoning. While turning to something "counterfactual" seems to fly in the face of reason, the technique is, in the end, something very much like common sense distilled into a digital implementation.

Active perception

Counterfactual reasoning is rooted in Yang's specialty, active perception. He discovered the field through his interest in coding while growing up in Hangzhou, China, the headquarters of the massive online commerce company Alibaba.

"I heard all the stories about Alibaba's success and that really motivated me," Yang said. "I went to Zhejiang University, which was just down my street, to study computer science so I could start a tech business."

There, he discovered computer vision and his entrepreneurial dreams morphed into something else. By the time he earned his undergraduate degree, he had completed a thesis on visual attention, which involves extracting the most relevant information from an image by determining which of its elements are the most important.

That led to a Ph.D. at the University of Maryland, College Park, under Yiannis Aloimonos, who, with Ruzena Bajcsy of the University of California, Berkeley, and others, pioneered a field called active perception. Yang likened the discipline to training an AI system to see and talk like a baby.

Like a toddler who manipulates an object to look at it from different angles, an AI system using active perception selects different behaviors and sensors to increase the amount of information it gathers when viewing or interacting with an environment.

Yang gave the following example: Imagine a robot in a room. If it remains static, it gathers limited information and the quality of its decisions may suffer. To truly understand the room, an active agent would move through it, swiveling its cameras to gather a richer stream of data so it can reach conclusions with more confidence.
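In code, the core of that idea is choosing the next viewpoint that is expected to reduce uncertainty the most. The sketch below is a minimal illustration under simplifying assumptions: a discrete belief over scene hypotheses, a discrete set of candidate viewpoints, and a hypothetical `observation_model` that enumerates possible observations with their likelihoods.

```python
import numpy as np

def entropy(belief):
    """Shannon entropy of a belief distribution over scene hypotheses."""
    p = belief[belief > 0]
    return -np.sum(p * np.log(p))

def expected_entropy_after(belief, viewpoint, observation_model):
    """Expected posterior entropy if the agent moves to `viewpoint` and observes."""
    expected = 0.0
    for obs, likelihoods in observation_model(viewpoint):
        # Bayes update: posterior is proportional to likelihood * prior.
        posterior = likelihoods * belief
        p_obs = posterior.sum()
        if p_obs > 0:
            expected += p_obs * entropy(posterior / p_obs)
    return expected

def choose_next_viewpoint(belief, candidate_viewpoints, observation_model):
    """Pick the viewpoint that most reduces expected uncertainty about the scene."""
    return min(candidate_viewpoints,
               key=lambda v: expected_entropy_after(belief, v, observation_model))
```

A real active-perception system couples this choice with motion planning and learned perception, but the principle, acting to gain information rather than passively classifying a single frame, is the same.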

Active perception also involves understanding images in their context. Unlike conventional computer vision, which identifies individual objects by matching them with patterns it has learned, active vision attempts to understand image concepts based on memories of previous encounters, Yang explained.

Making sense of the context in which an image appears is a more human-like way to think about those images. Yang points to the small stools found in day care centers as an example. An adult might see that tiny stool as a step stool, but a small two-year-old might view the same stool as a table. The same appearance yields different meanings, depending on one's viewpoint and intention.

"If you want to put something on the stool, it becomes a table," Yang said. "If you want to reach up to get something, it becomes a step. If you want to block the road, it becomes a barrier. If we treat this as a pattern matching problem, that flavor is lost."

Counterfactual

When Yang joined Arizona State in 2016, he sought to extend his work by investigating a technique within active vision called visual question answering. This involves teaching AI agents to ask what-if questions about what they see and to answer those questions by referring to the image, the context, and the question itself. Humans do this all the time.

"Imagine I'm looking at a person," Yang said. "I can ask myself if he is happy. Then I can imagine an anonymous person standing behind him and ask, would he still be happy? What if the smiling person had a snack in his hand? What if he had a broom? Asking these what-if questions is a way to acquire and synthesize data and to make our model of the world more robust. Eventually, it teaches us to predict things better."

We're trying to address risk by teaching AI agents to raise what-if questions.
Yezhou Yang

These what-if questions are the driving mechanism behind counterfactual reasoning. "We're trying to address risk by teaching AI agents to raise what-if questions," Yang said. "An agent should ask, 'What if I didn't see that pattern? Should I still turn left?’"
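One way to make that question operational is an occlusion-style counterfactual check: remove a candidate visual signal from the input and see whether the decision survives. The sketch below is a simplified illustration, not Yang’s published defense; the policy interface, the patch-based masking, and the mean-fill intervention are assumptions made for the example.

```python
import numpy as np

def counterfactual_trigger_check(policy, image, patch_size=32):
    """Flag decisions that depend on a small image patch rather than the whole scene.

    `policy(image)` is assumed to return an action label such as "left" or "straight".
    The check asks, "What if I had not seen this patch of pixels?" by occluding
    each patch in turn and re-querying the policy.
    """
    baseline_action = policy(image)
    h, w = image.shape[:2]
    suspicious_patches = []
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            counterfactual = image.copy()
            # Replace the patch with the image mean: a crude "remove the signal" intervention.
            counterfactual[y:y + patch_size, x:x + patch_size] = image.mean(axis=(0, 1))
            if policy(counterfactual) != baseline_action:
                suspicious_patches.append((y, x))
    return baseline_action, suspicious_patches
```

A left-turn decision that flips when one small patch is blanked out is being driven by a spurious cue, exactly the signature of a trojan trigger, and the system can fall back to a safe behavior or flag the input for review.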

Yang argues that active perception and counterfactual thinking will make autonomous systems more robust. "Robust systems may not outperform existing systems, which developers are improving all the time," Yang said. "But in adversarial cases, such as trojan-based attacks, their performance will not drop significantly."

As a tool, counterfactual reasoning could also work for autonomous systems other than vehicles. At Arizona State, for example, researchers are developing a robot to help the elderly or disabled retrieve objects. Right now, as long as the user is at home (and does not rearrange the furniture) and asks the robot to retrieve only common, well-remembered objects, the robot performs well in simulation.

Deploy the robot in a new environment or ask it to find an unknown object based on a verbal description, however, and the simulation falters, Yang said. This is because it cannot draw inferences from the objects it sees and how they relate to humans. Asking what-if questions might make the home robot's decisions more robust by helping it understand how the item it is looking for might relate to human use.

Thwarting hackers

Yang noted that most training simulators accept only yes-or-no answers. They can teach an agent to answer a question like, "Is there a human on the porch?" But ask, "Is there a human and a chair on the porch?" and they stumble. They cannot envision the two things together.

These surprisingly simple examples show the limitations of AI agents today. Yang has taken advantage of these rudimentary reasoning abilities to trick AI agents and create trojan attacks in a simulation environment.

Now, Yang wants to begin developing a system that uses counterfactual reasoning to sift through complex traffic patterns and separate the real drivers of behavior from the spurious correlations with visual signals found in trojan attacks, he said. The AI would then either remove the trojan signal or ignore it.

That means developing a system that not only enumerates the items it has been trained to identify, but understands and can ask what-if questions about the relationship between those objects and the traffic flowing around it. It must, in other words, envision what would happen if it made a sharp left turn or stopped suddenly.
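That kind of imagination can be sketched as counterfactual rollouts in a simulator or learned world model. The example below is illustrative only; the `world_model` interface (clone, step, collision, discomfort) is a hypothetical stand-in for whatever simulator or predictive model the system actually uses.

```python
def imagine_outcomes(world_model, state, candidate_actions, horizon=20):
    """Roll each candidate action forward in a simulated world and score the result."""
    scores = {}
    for action in candidate_actions:
        sim_state = world_model.clone(state)      # counterfactual copy of the scene
        cost = 0.0
        for _ in range(horizon):
            sim_state = world_model.step(sim_state, action)
            if world_model.collision(sim_state):
                cost = float("inf")               # an imagined pile-up rules the action out
                break
            cost += world_model.discomfort(sim_state)
        scores[action] = cost
    return scores

def choose_action(world_model, state, candidate_actions):
    """Pick the action whose imagined future is safest, not the one a visual cue suggests."""
    scores = imagine_outcomes(world_model, state, candidate_actions)
    return min(scores, key=scores.get)
```

Before committing to a maneuver, the agent asks what would happen if it took each candidate action, and it rejects any action whose imagined future ends in a collision, regardless of what a trigger-like visual cue suggests.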

Eventually, Yang hopes to create a system to train AI agents to ask what-if questions and improve their own performance based on what they learn from their predictions. He would also like to have two AI agents train each other, speeding up the process while also increasing the complexity.

Even then, he is not planning to trust what those agents tell him. "AI is not perfect," he said. "We must always realize its shortcomings. I constantly ask my students to think about this when looking at high-performing AI systems."
