
U.S. National Science Foundation, in collaboration with Amazon, announces latest Fairness in AI grant projects

Thirteen new projects focus on ensuring fairness in AI algorithms and the systems that incorporate them.

  1. In 2019, the U.S. National Science Foundation (NSF) and Amazon announced a collaboration — the Fairness in AI program — to strengthen and support fairness in artificial intelligence and machine learning.

    To date, in two rounds of proposal submissions, NSF has awarded 21 research grants in areas such as ensuring fairness in AI algorithms and the systems that incorporate them, using AI to promote equity in society, and developing principles for human interaction with AI-based systems.

    In June of 2021, Amazon and the NSF opened the third round of submissions with a focus on theoretical and algorithmic foundations; principles for human interaction with AI systems; technologies such as natural language understanding and computer vision; and applications including hiring decisions, education, criminal justice, and human services.

    Now Amazon and NSF are announcing the recipients of 13 selected projects from that latest call for submissions.

    The awardees, who collectively will receive up to $9.5 million in financial support, have proposed projects that address unfairness and bias in artificial intelligence and machine learning technologies, develop principles for human interaction with artificial intelligence systems and theoretical frameworks for algorithms, and improve the accessibility of speech recognition technology.

    “We are thrilled to share NSF’s selection of thirteen Fairness in AI proposals from talented researchers across the United States,” said Prem Natarajan, Alexa AI vice president of Natural Understanding. “The increasing prevalence of AI in our everyday lives calls for continued multi-sector investments into advancing their trustworthiness and robustness against bias. Amazon is proud to have partnered with the NSF for the past three years to support this critically important research area.”

    Amazon, which provides partial funding for the program, does not participate in the grant-selection process.

    “These awards are part of NSF's commitment to pursue scientific discoveries that enable us to achieve the full spectrum of artificial intelligence potential at the same time we address critical questions about their uses and impacts," said Wendy Nilsen, deputy division director for NSF's Information and Intelligent Systems Division.

    More information about the Fairness in AI program is available on the NSF website and via the program update. Below is the list of the 2022 awardees, along with an overview of their projects.

  2. An interpretable AI framework for care of critically ill patients involving matching and decision trees

    “This project introduces a framework for interpretable, patient-centered causal inference and policy design for in-hospital patient care. This framework arose from a challenging problem, which is how to treat critically ill patients who are at risk for seizures (subclinical seizures) that can severely damage a patient's brain. In this high-stakes application of artificial intelligence, the data are complex, including noisy time-series, medical history, and demographic information. The goal is to produce interpretable causal estimates and policy decisions, allowing doctors to understand exactly how data were combined, permitting better troubleshooting, uncertainty quantification, and ultimately, trust. The core of the project's framework consists of novel and sophisticated matching techniques, which match each treated patient in the dataset with other (similar) patients who were not treated. Matching emulates a randomized controlled trial, allowing the effect of the treatment to be estimated for each patient, based on the outcomes from their matched group. A second important element of the framework involves interpretable policy design, where sparse decision trees will be used to identify interpretable subgroups of individuals who should receive similar treatments.”

    • Principal investigator: Cynthia Rudin
    • Co-principal investigators: Alexander Volfovsky, Sudeepa Roy
    • Organization: Duke University
    • Award amount: $625,000

    Project description
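The matching idea the abstract describes can be sketched in a few lines of Python. This is a generic nearest-neighbor illustration of matching for treatment-effect estimation, not the project's novel techniques, and all data below is made up:

```python
# Toy illustration of covariate matching for treatment-effect estimation.
# Generic sketch of the matching idea, not the project's actual method.

def match_and_estimate(treated, control):
    """For each treated unit, find the nearest control unit by covariate
    distance and estimate its effect as the difference in outcomes.
    Each unit is (covariates: tuple of floats, outcome: float)."""
    effects = []
    for cov_t, y_t in treated:
        # Nearest neighbor among controls (squared Euclidean distance).
        cov_c, y_c = min(
            control,
            key=lambda u: sum((a - b) ** 2 for a, b in zip(cov_t, u[0])),
        )
        effects.append(y_t - y_c)
    return effects

# Hypothetical patients: two covariates each, one outcome.
treated = [((1.0, 0.0), 5.0), ((0.0, 1.0), 3.0)]
control = [((1.1, 0.1), 4.0), ((0.1, 0.9), 2.5)]

effects = match_and_estimate(treated, control)
print(effects)                       # per-patient estimated effects
print(sum(effects) / len(effects))   # average effect on the treated
```

Because each estimate is tied to one identifiable matched pair, a clinician can inspect exactly which untreated patient a given estimate came from, which is the interpretability property the abstract emphasizes.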

  3. Fair representation learning: fundamental trade-offs and algorithms

    “Artificial intelligence-based computer systems are increasingly reliant on effective information representation in order to support decision making in domains ranging from image recognition systems to identity control through face recognition. However, systems that rely on traditional statistics and prediction from historical or human-curated data also naturally inherit any past biased or discriminatory tendencies. The overarching goal of the award is to mitigate this problem by using information representations that maintain their utility while eliminating information that could lead to discrimination against subgroups in a population. Specifically, this project will study the different trade-offs between utility and fairness of different data representations, and then identify solutions to reduce the gap to the best trade-off. Then, new representations and corresponding algorithms will be developed, guided by such trade-off analysis. The investigators will provide performance limits based on the developed theory, as well as evidence of efficacy, in order to obtain fair machine learning systems and to gain societal trust. The application domain used in this research is face recognition systems. The undergraduate and graduate students who participate in the project will be trained to conduct cutting-edge research that integrates fairness into artificial intelligence-based systems.”

    • Principal investigator: Vishnu Boddeti
    • Organization: Michigan State University
    • Award amount: $331,698

    Project description

  4. A new paradigm for the evaluation and training of inclusive automatic speech recognition

    “Automatic speech recognition can improve your productivity in small ways: rather than searching for a song, a product, or an address using a graphical user interface, it is often faster to accomplish these tasks using automatic speech recognition. For many groups of people, however, speech recognition works less well, possibly because of regional accents, or because of second-language accent, or because of a disability. This Fairness in AI project defines a new way of thinking about speech technology. In this new way of thinking, an automatic speech recognizer is not considered to work well unless it works well for all users, including users with regional accents, second-language accents, and severe disabilities. There are three sub-projects. The first sub-project will create black-box testing standards that speech technology researchers can use to test their speech recognizers, in order to test how useful their speech recognizer will be for different groups of people. For example, if a researcher discovers that their product works well for some people, but not others, then the researcher will have the opportunity to gather more training data, and to perform more development, in order to make sure that the under-served community is better-served. The second sub-project will create glass-box testing standards that researchers can use to debug inclusivity problems. For example, if a speech recognizer has trouble with a particular dialect, then glass-box methods will identify particular speech sounds in that dialect that are confusing the recognizer, so that researchers can more effectively solve the problem. The third sub-project will create new methods for training a speech recognizer in order to guarantee that it works equally well for all of the different groups represented in available data. Data will come from podcasts and the Internet. 
Speakers will be identified as members of a particular group if and only if they declare themselves to be members of that group. All of the developed software will be distributed open-source.”

    • Principal investigator: Mark Hasegawa-Johnson
    • Co-principal investigators: Zsuzsanna Fagyal, Najim Dehak, Piotr Zelasko, Laureano Moro-Velazquez
    • Organization: University of Illinois at Urbana-Champaign
    • Award amount: $500,000

    Project description
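The black-box testing idea in the first sub-project amounts to reporting accuracy per speaker group instead of one pooled number. Here is a minimal sketch (standard word-error-rate arithmetic on invented utterances; the group labels and data are hypothetical, not the project's benchmark):

```python
# Toy sketch of black-box, per-group evaluation for speech recognition:
# compute word error rate (WER) separately for each speaker group, so a
# pooled average cannot hide a poorly served group.

def word_error_rate(ref, hyp):
    """Levenshtein distance between word sequences, normalized by
    reference length (the standard WER definition)."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / len(r)

def per_group_wer(samples):
    """samples: list of (group, reference, hypothesis) triples."""
    totals = {}
    for group, ref, hyp in samples:
        totals.setdefault(group, []).append(word_error_rate(ref, hyp))
    return {g: sum(v) / len(v) for g, v in totals.items()}

samples = [
    ("group_a", "turn the lights on", "turn the lights on"),
    ("group_a", "play some music", "play some music"),
    ("group_b", "turn the lights on", "turn the light on"),
    ("group_b", "play some music", "play sum music"),
]
print(per_group_wer(samples))  # the pooled average would hide group_b's gap
```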

  5. A normative economic approach to fairness in AI

    “A vast body of work in algorithmic fairness is devoted to preventing artificial intelligence (AI) from exacerbating societal biases. The predominant viewpoint in this literature equates fairness with lack of bias or seeks to achieve some form of statistical parity between demographic groups. By contrast, this project pursues alternative approaches rooted in normative economics, the field that evaluates policies and programs by asking "what should be". The work is driven by two observations. First, fairness to individuals and groups can be realized according to people’s preferences represented in the form of utility functions. Second, traditional notions of algorithmic fairness may be at odds with welfare (the overall utility of groups), including the welfare of those groups the fairness criteria intend to protect. The goal of this project is to establish normative economic approaches as a central tool in the study of fairness in AI. Toward this end, the team pursues two research questions. First, can the perspective of normative economics be reconciled with existing approaches to fairness in AI? Second, how can normative economics be drawn upon to rethink what fairness in AI should be? The project will integrate theoretical and algorithmic advances into real systems used to inform refugee resettlement decisions. The system will be examined from a fairness viewpoint, with the goal of ultimately ensuring fairness guarantees and welfare.”

    • Principal investigator: Yiling Chen
    • Co-principal investigator: Ariel Procaccia
    • Organization: Harvard University
    • Award amount: $560,345

    Project description

  6. Advancing optimization for threshold-agnostic fair AI systems

    “Artificial intelligence (AI) and machine learning technologies are being used in high-stakes decision-making systems like lending decisions, employment screening, and criminal justice sentencing. A new challenge arising with these AI systems is avoiding the unfairness they might introduce, which can lead to discriminatory decisions against protected classes. Most AI systems use some kind of threshold to make decisions. This project aims to improve fairness-aware AI technologies by formulating threshold-agnostic metrics for decision making. In particular, the research team will improve the training procedures of fairness-constrained AI models to make the models adaptive to different contexts, applicable to different applications, and subject to emerging fairness constraints. The success of this project will yield a transferable approach to improve fairness in various aspects of society by eliminating disparate impacts and enhancing the fairness of AI systems in the hands of decision makers. Together with AI practitioners, the researchers will integrate the techniques developed in this project into real-world systems such as education analytics. This project will also contribute to training future professionals in AI and machine learning, and will broaden participation by training high school students and under-represented undergraduates.”

    • Principal investigator: Tianbao Yang
    • Co-principal investigators: Qihang Lin, Mingxuan Sun
    • Organization: University of Iowa
    • Award amount: $500,000

    Project description
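A threshold-agnostic fairness metric, in the general sense the abstract uses, compares a ranking measure such as ROC-AUC across groups rather than fixing a decision cutoff. The sketch below illustrates that generic idea with made-up scores; it is not the project's proposed metric:

```python
# Toy sketch of a threshold-agnostic fairness check: compare ROC-AUC
# (computed without choosing any decision threshold) across groups.

def auc(scores, labels):
    """Probability that a random positive outranks a random negative,
    counting ties as half; equivalent to ROC-AUC."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

def auc_gap(data):
    """data: {group: (scores, labels)}; returns (largest pairwise
    AUC gap, per-group AUCs)."""
    aucs = {g: auc(s, y) for g, (s, y) in data.items()}
    return max(aucs.values()) - min(aucs.values()), aucs

# Hypothetical model scores and true labels for two groups.
data = {
    "group_a": ([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]),
    "group_b": ([0.9, 0.4, 0.6, 0.1], [1, 1, 0, 0]),
}
gap, aucs = auc_gap(data)
print(aucs, gap)  # a gap here signals unfairness at every threshold choice
```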

  7. Toward fair decision making and resource allocation with application to AI-assisted graduate admission and degree completion

    “Machine learning systems have become prominent in many applications in everyday life, such as healthcare, finance, hiring, and education. These systems are intended to improve upon human decision-making by finding patterns in massive amounts of data, beyond what can be intuited by humans. However, it has been demonstrated that these systems learn and propagate biases similar to those present in human decision-making. This project aims to develop general theory and techniques on fairness in AI, with applications to improving retention and graduation rates of under-represented groups in STEM graduate programs. Recent research has shown that simply focusing on admission rates is not sufficient to improve graduation rates. This project is envisioned to go beyond designing "fair classifiers," such as for fair graduate admission, that satisfy a static fairness notion at a single moment in time, and to design AI systems that make decisions over a period of time with the goal of ensuring overall long-term fair outcomes at the completion of a process. The use of data-driven AI solutions can allow the detection of patterns missed by humans, to empower targeted intervention and fair resource allocation over the course of an extended period of time. The research from this project will contribute to reducing bias in the admissions process and improving completion rates in graduate programs, as well as to fair decision-making in general applications of machine learning.”

    • Principal investigator: Furong Huang
    • Co-principal investigators: Min Wu, Dana Dachman-Soled
    • Organization: University of Maryland, College Park
    • Award amount: $625,000

    Project description

  8. BRIMI — bias reduction in medical information

    “This award, Bias Reduction In Medical Information (BRIMI), focuses on using artificial intelligence (AI) to detect and mitigate biased, harmful, and/or false health information that disproportionately hurts minority groups in society. BRIMI offers outsized promise for increased equity in health information, improving fairness in AI, medicine, and in the information ecosystem online (e.g., health websites and social media content). BRIMI's novel study of biases stands to greatly advance the understanding of the challenges that minority groups and individuals face when seeking health information. By including specific interventions for both patients and doctors and advancing the state-of-the-art in public health and fact checking organizations, BRIMI aims to inform public policy, increase the public's critical literacy, and improve the well-being of historically under-served patients. The award includes significant outreach efforts, which will engage minority communities directly in our scientific process; broad stakeholder engagement will ensure that the research approach to the groups studied is respectful, ethical, and patient-centered. The BRIMI team is composed of academics, non-profits, and industry partners, thus improving collaboration and partnerships across different sectors and multiple disciplines. The BRIMI project will lead to fundamental research advances in computer science, while integrating deep expertise in medical training, public health interventions, and fact checking. BRIMI is the first large scale computational study of biased health information of any kind. This award specifically focuses on bias reduction in the health domain; its foundational computer science advances and contributions may generalize to other domains, and it will likely pave the way for studying bias in other areas such as politics and finances.”

    • Principal investigator: Shiri Dori-Hacohen
    • Co-principal investigators: Sherry Pagoto, Scott Hale
    • Organization: University of Connecticut
    • Award amount: $392,994

    Project description

  9. A novel paradigm for fairness-aware deep learning models on data streams

    “Massive amounts of information are transferred constantly between different domains in the form of data streams. Social networks, blogs, online businesses, and sensors all generate immense data streams. Such data streams are received in patterns that change over time. While this data can be assigned to specific categories, objects, and events, its distribution is not constant; these categories are subject to distribution shifts, often due to changes in the underlying environmental, geographical, economic, and cultural contexts. For example, risk levels in loan applications have been subject to distribution shifts during the COVID-19 pandemic. This is because loan risks are based on factors associated with the applicants, such as employment status and income. Such factors are usually relatively stable, but they changed rapidly due to the economic impact of the pandemic. As a result, existing loan recommendation systems need to be adapted using limited examples. This project will develop open software to help users evaluate fairness in online algorithms, mitigate potential biases, and examine utility-fairness trade-offs. It will implement two real-world applications: online crime event recognition from video data and online purchase behavior prediction from click-stream data. To amplify its impact in research and education, the project will leverage STEM programs for students of diverse backgrounds, genders, and races/ethnicities, with activities including seminars, workshops, short courses, and research projects for students.”

    • Principal investigator: Feng Chen
    • Co-principal investigators: Latifur Khan, Xintao Wu, Christan Grant
    • Organization: University of Texas at Dallas
    • Award amount: $392,993

    Project description
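Fairness monitoring on a stream, in the generic sense described above, can be as simple as tracking per-group outcome rates over a sliding window so that distribution shift surfaces as a growing parity gap. A minimal sketch (invented groups and predictions, not the project's system):

```python
# Toy sketch of fairness monitoring on a data stream: track the
# positive-prediction rate per group over a sliding window, so that
# distribution shift shows up as a changing demographic-parity gap.

from collections import deque

class StreamFairnessMonitor:
    def __init__(self, window=100):
        # Only the most recent `window` observations are retained.
        self.window = deque(maxlen=window)

    def observe(self, group, prediction):
        self.window.append((group, prediction))

    def parity_gap(self):
        """Difference between the highest and lowest per-group
        positive-prediction rate within the current window."""
        counts, positives = {}, {}
        for group, pred in self.window:
            counts[group] = counts.get(group, 0) + 1
            positives[group] = positives.get(group, 0) + pred
        rates = [positives[g] / counts[g] for g in counts]
        return max(rates) - min(rates)

monitor = StreamFairnessMonitor(window=4)
for group, pred in [("a", 1), ("b", 0), ("a", 1), ("b", 1)]:
    monitor.observe(group, pred)
print(monitor.parity_gap())  # group a: 2/2 positive, group b: 1/2 positive
```

Because the window slides, old examples age out, which is one simple way a monitor can stay sensitive to the distribution shifts the abstract describes.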

  10. A human-centered approach to developing accessible and reliable machine translation

    “This Fairness in AI project aims to develop technology to reliably enhance cross-lingual communication in high-stakes contexts, such as when a person needs to communicate with someone who does not speak their language to get health care advice or apply for a job. While machine translation technology is frequently used in these conditions, existing systems often make errors that can have severe consequences for a patient or a job applicant. Further, it is challenging for people to know when automatic translations might be wrong when they do not understand the source or target language for translation. This project addresses this issue by developing accessible and reliable machine translation for lay users. It will provide mechanisms to guide users to recognize and recover from translation errors, and help them make better decisions given imperfect translations. As a result, more people will be able to use machine translation reliably to communicate across language barriers, which can have far-reaching positive consequences on their lives."

    • Principal investigator: Marine Carpuat
    • Co-principal investigators: Niloufar Salehi, Ge Gao
    • Organization: University of Maryland, College Park
    • Award amount: $392,993

    Project description

  11. AI algorithms for fair auctions, pricing, and marketing

    “This project develops algorithms for making fair decisions in AI-mediated auctions, pricing, and marketing, thus advancing national prosperity and economic welfare. The deployment of AI systems in business settings has thrived due to direct access to consumer data, the capability to implement personalization, and the ability to run algorithms in real-time. For example, advertisements users see are personalized since advertisers are willing to bid more in ad display auctions to reach users with particular demographic features. Pricing decisions on ride-sharing platforms or interest rates on loans are customized to the consumer's characteristics in order to maximize profit. Marketing campaigns on social media platforms target users based on the ability to predict who they will be able to influence in their social network. Unfortunately, these applications exhibit discrimination. Discriminatory targeting in housing and job ad auctions, discriminatory pricing for loans and ride-hailing services, and disparate treatment of social network users by marketing campaigns to exclude certain protected groups have been exposed. This project will develop theoretical frameworks and AI algorithms that ensure consumers from protected groups are not harmfully discriminated against in these settings. The new algorithms will facilitate fair conduct of business in these applications. The project also supports conferences that bring together practitioners, policymakers, and academics to discuss the integration of fair AI algorithms into law and practice.”

    • Principal investigator: Adam Elmachtoub
    • Co-principal investigators: Shipra Agrawal, Rachel Cummings, Christian Kroer, Eric Balkanski
    • Organization: Columbia University
    • Award amount: $392,993

    Project description

  12. Using explainable AI to increase equity and transparency in the juvenile justice system’s use of risk scores

    “Throughout the United States, juvenile justice systems use juvenile risk and need-assessment (JRNA) scores to identify the likelihood a youth will commit another offense in the future. This risk assessment score is then used by juvenile justice practitioners to inform how to intervene with a youth to prevent reoffending (e.g., referring youth to a community-based program vs. placing a youth in a juvenile correctional center). Unfortunately, most risk assessment systems lack transparency and often the reasons why a youth received a particular score are unclear. Moreover, how these scores are used in the decision making process is sometimes not well understood by families and youth affected by such decisions. This possibility is problematic because it can hinder individuals’ buy-in to the intervention recommended by the risk assessment as well as mask potential bias in those scores (e.g., if youth of a particular race or gender have risk scores driven by a particular item on the assessment). To address this issue, project researchers will develop automated, computer-generated explanations for these risk scores aimed at explaining how these scores were produced. Investigators will then test whether these better-explained risk scores help youth and juvenile justice decision makers understand the risk score a youth is given. In addition, the team of researchers will investigate whether these risk scores are working equally well for different groups of youth (for example, equally well for boys and for girls) and identify any potential biases in how they are being used in an effort to understand how equitable the decision making process is for demographic groups based on race and gender. The project is embedded within the juvenile justice system and aims to evaluate how real stakeholders understand how the risk scores are generated and used within that system based on actual juvenile justice system data.”

    • Principal investigator: Trent Buskirk
    • Co-principal investigator: Kelly Murphy
    • Organization: Bowling Green State University
    • Award amount: $392,993

    Project description
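For an additive risk instrument, the kind of item-level explanation the project aims for can be sketched as listing each item's contribution to the total. The item names, weights, and responses below are entirely hypothetical, not the real JRNA instrument:

```python
# Toy sketch of an item-level explanation for an additive risk score:
# show how much each assessment item contributed to the total.
# Hypothetical items and weights, not the actual JRNA instrument.

def explain_score(weights, responses):
    """weights/responses: {item: value}; returns the total score and
    per-item contributions sorted from largest to smallest."""
    contributions = {item: weights[item] * responses[item] for item in weights}
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return total, ranked

weights = {"prior_referrals": 2.0, "school_attendance": 1.5, "peer_risk": 1.0}
responses = {"prior_referrals": 1, "school_attendance": 2, "peer_risk": 0}

total, ranked = explain_score(weights, responses)
print(total)   # overall risk score
print(ranked)  # which items drove it, in order
```

Comparing these per-item contributions across demographic groups is also one way to surface the item-driven biases the abstract mentions (e.g., a single item dominating scores for one group).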

  13. Breaking the tradeoff barrier in algorithmic fairness

    “In order to be robust and trustworthy, algorithmic systems need to usefully serve diverse populations of users. Standard machine learning methods can easily fail in this regard, e.g., by optimizing for majority populations represented within their training data at the expense of worse performance on minority populations. A large literature on "algorithmic fairness" has arisen to address this widespread problem. However, at a technical level, this literature has treated various technical notions of "fairness" as constraints, and has therefore viewed "fair learning" through the lens of constrained optimization. Although this has been a productive viewpoint from the perspective of algorithm design, it has made tradeoffs the central object of study in "fair machine learning". In the standard framing, adding new protected populations, or quantitatively strengthening fairness constraints, necessarily leads to decreased accuracy overall and within each group. This has the effect of pitting the interests of different stakeholders against one another, and making it difficult to build consensus around "fair machine learning" techniques. The overarching goal of this project is to break through this "fairness/accuracy tradeoff" paradigm.”

    • Principal investigator: Aaron Roth
    • Co-principal investigator: Michael Kearns
    • Organization: University of Pennsylvania
    • Award amount: $392,992

    Project description
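The constrained-optimization framing the abstract critiques can be made concrete with a tiny example: on made-up data, forcing equal selection rates across two groups can only lower the best achievable accuracy. This sketch shows the tradeoff paradigm itself, not the project's approach to escaping it:

```python
# Toy illustration of the fairness/accuracy tradeoff under the
# "fairness as a constraint" framing: demographic parity (equal
# selection rates) caps the best achievable accuracy on this data.

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def select_top_k(scores, k):
    """Predict positive for the k highest-scoring examples."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    chosen = set(order[:k])
    return [1 if i in chosen else 0 for i in range(len(scores))]

# Hypothetical scores and true labels for two groups of three people.
group_a = ([0.9, 0.8, 0.2], [1, 1, 0])
group_b = ([0.7, 0.4, 0.3], [1, 0, 0])

# Unconstrained: pick the best selection size independently per group.
best_unconstrained = max(
    (accuracy(select_top_k(group_a[0], ka), group_a[1])
     + accuracy(select_top_k(group_b[0], kb), group_b[1])) / 2
    for ka in range(4) for kb in range(4)
)

# Constrained to demographic parity: equal selection rate (same k).
best_constrained = max(
    (accuracy(select_top_k(group_a[0], k), group_a[1])
     + accuracy(select_top_k(group_b[0], k), group_b[1])) / 2
    for k in range(4)
)

print(best_unconstrained, best_constrained)  # the constraint costs accuracy
```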

  14. Advancing deep learning towards spatial fairness

    “The goal of spatial fairness is to reduce biases that have significant linkage to the locations or geographical areas of data samples. Such biases, if left unattended, can cause or exacerbate unfair distribution of resources, social division, spatial disparity, and weaknesses in resilience or sustainability. Spatial fairness is urgently needed for the use of artificial intelligence in a large variety of real-world problems such as agricultural monitoring and disaster management. Agricultural products, including crop maps and acreage estimates, are used to inform important decisions such as the distribution of subsidies and providing farm insurance. Inaccuracies and inequities produced by spatial biases adversely affect these decisions. Similarly, effective and fair mapping of natural disasters such as floods or fires is critical to inform live-saving actions and quantify damages and risks to public infrastructures, which is related to insurance estimation. Machine learning, in particular deep learning, has been widely adopted for spatial datasets with promising results. However, straightforward applications of machine learning have found limited success in preserving spatial fairness due to the variation of data distribution, data quantity, and data quality. The goal of this project is to develop a new generation of learning frameworks to explicitly preserve spatial fairness. The results and code will be made freely available and integrated into existing geospatial software. The methods will also be tested for incorporation in existing real systems (crop and water monitoring).”

    • Principal investigator: Xiaowei Jia
    • Co-principal investigators: Sergii Skakun, Yiqun Xie
    • Organization: University of Pittsburgh
    • Award amount: $755,098

    Project description
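At evaluation time, the spatial-fairness concern above boils down to reporting error per region rather than one pooled number. A minimal sketch with invented regions and predictions (not the project's frameworks, which target the training side as well):

```python
# Toy sketch of a spatial-fairness evaluation: compute model error per
# geographic region and compare it with the pooled value, which can
# mask location-linked disparities.

def per_region_error(records):
    """records: list of (region, y_true, y_pred); returns mean absolute
    error per region plus the pooled value."""
    by_region = {}
    for region, y, yhat in records:
        by_region.setdefault(region, []).append(abs(y - yhat))
    region_mae = {r: sum(e) / len(e) for r, e in by_region.items()}
    pooled = sum(abs(y - yhat) for _, y, yhat in records) / len(records)
    return region_mae, pooled

# Hypothetical crop-yield predictions in two regions.
records = [
    ("region_1", 10.0, 10.5), ("region_1", 12.0, 11.5),
    ("region_2", 10.0, 14.0), ("region_2", 12.0, 9.0),
]
region_mae, pooled = per_region_error(records)
print(region_mae)  # region_2's error is far higher than region_1's
print(pooled)      # the pooled average hides the disparity
```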

Research areas

Related content

US, CA, Santa Clara
Job summaryAmazon is looking for a passionate, talented, and inventive Applied Scientist with a strong machine learning background to help build industry-leading language technology.Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Natural Language Processing (NLP), Natural Language Understanding (NLU), Dialog management, conversational AI and Machine Learning (ML).As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state-of-the-art in human language technology. Your work will directly impact millions of our customers in the form of products and services, as well as contributing to the wider research community. You will gain hands on experience with Amazon’s heterogeneous text and structured data sources, and large-scale computing resources to accelerate advances in language understanding.We are hiring primarily in Conversational AI / Dialog System Development areas: NLP, NLU, Dialog Management, NLG.This role can be based in NYC, Seattle or Palo Alto.Inclusive Team CultureHere at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences.Work/Life BalanceOur team puts a high value on work-life balance. It isn’t about how many hours you spend at home or at work; it’s about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. 
We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.Mentorship & Career GrowthOur team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded engineer and enable them to take on more complex tasks in the future.
US, NY, New York
Job summaryAmazon is looking for a passionate, talented, and inventive Applied Scientist with a strong machine learning background to help build industry-leading language technology.Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Natural Language Processing (NLP), Natural Language Understanding (NLU), Dialog management, conversational AI and Machine Learning (ML).As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state-of-the-art in human language technology. Your work will directly impact millions of our customers in the form of products and services, as well as contributing to the wider research community. You will gain hands on experience with Amazon’s heterogeneous text and structured data sources, and large-scale computing resources to accelerate advances in language understanding.We are hiring primarily in Conversational AI / Dialog System Development areas: NLP, NLU, Dialog Management, NLG.This role can be based in NYC, Seattle or Palo Alto.Inclusive Team CultureHere at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences.Work/Life BalanceOur team puts a high value on work-life balance. It isn’t about how many hours you spend at home or at work; it’s about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. 
We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.Mentorship & Career GrowthOur team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded engineer and enable them to take on more complex tasks in the future.
US, CA, Santa Clara
Job summary
AWS AI/ML is looking for world-class scientists and engineers to join its AI Research and Education group, working on building automated ML solutions for planetary-scale sustainability and geospatial applications. Our team's mission is to develop ready-to-use, automated solutions that solve important sustainability and geospatial problems. We live in a time in which geospatial data — such as climate, agricultural crop yield, weather, and landcover — has become ubiquitous. Cloud computing has made it easy to gather and process the data that describes the earth system and is generated by satellites, mobile devices, and IoT devices. Our vision is to bring the best ML/AI algorithms to solve practical environmental and sustainability-related R&D problems at scale. Building these solutions requires a solid foundation in machine learning infrastructure and deep learning technologies. The team specializes in developing popular open-source software libraries such as AutoGluon, GluonCV, GluonNLP, DGL, and Apache MXNet (incubating). Our strategy is to bring the best of ML-based automation to the geospatial and sustainability area.

We are seeking an experienced Applied Scientist for the team. This role combines science knowledge (machine learning, computer vision, earth science), technical strength, and product focus. It will be your job to develop ML systems and solutions and to work closely with the engineering team to ship them to our customers. You will interact closely with our customers and with the academic and research communities. You will be at the heart of a growing and exciting focus area for AWS and work with other acclaimed engineers and world-famous scientists. You are also expected to work closely with other applied scientists and demonstrate Amazon Leadership Principles (https://www.amazon.jobs/en/principles). Strong technical skills and experience with machine learning and computer vision are required. Experience working with earth science, mapping, and geospatial data is a plus. Our customers are extremely technical, and the solutions we build for them are strongly coupled to technical feasibility.

About the team
Inclusive Team Culture
At AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon's culture of inclusion is reinforced within our 14 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.

Work/Life Balance
Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.

Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. Our senior members enjoy one-on-one mentoring. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded scientist and enable them to take on more complex tasks in the future.

Interested in this role? Reach out to the recruiting team with questions or apply directly via amazon.jobs.
US, WA, Seattle
Job summary
How can we create a rich, data-driven shopping experience on Amazon? How do we build data models that help us innovate different ways to enhance the customer experience? How do we combine the world's greatest online shopping dataset with Amazon's computing power to create models that deeply understand our customers?

Recommendations at Amazon is a way to help customers discover products. Our team's stated mission is to "grow each customer's relationship with Amazon by leveraging our deep understanding of them to provide relevant and timely product, program, and content recommendations". We strive to better understand how customers shop on Amazon (and elsewhere) and build recommendations models to streamline customers' shopping experience by showing the right products at the right time. Understanding the complexities of customers' shopping needs and helping them explore the depth and breadth of Amazon's catalog is a challenge we take on every day. Using Amazon's large-scale computing resources, you will ask research questions about customer behavior, build models to generate recommendations, and run these models directly on the retail website. You will participate in the Amazon ML community and mentor Applied Scientists and software development engineers with a strong interest in and knowledge of ML. Your work will directly benefit customers and the retail business, and you will measure the impact using scientific tools. We are looking for passionate, hard-working, and talented Applied Scientists who have experience building mission-critical, high-volume applications that customers love. You will have an enormous opportunity to make a large impact on the design, architecture, and implementation of cutting-edge products used every day by people you know.

Key job responsibilities
• Scaling state-of-the-art techniques to Amazon scale
• Working independently and collaborating with SDEs to deploy models to production
• Developing long-term roadmaps for the team's scientific agenda
• Designing experiments to measure the business impact of the team's efforts
• Mentoring scientists in the department
• Contributing back to the machine learning science community
US, NY, New York
Job summary
Amazon Web Services is looking for world-class scientists to join the Security Analytics and AI Research team within AWS Security Services. This group is entrusted with researching and developing core data mining and machine learning algorithms for various AWS security services like GuardDuty (https://aws.amazon.com/guardduty/) and Macie (https://aws.amazon.com/macie/). In this group, you will invent and implement innovative solutions for never-before-solved problems. If you have a passion for security and experience with large-scale machine learning problems, this will be an exciting opportunity.

The AWS Security Services team builds technologies that help customers strengthen their security posture and better meet security requirements in the AWS Cloud. The team interacts with security researchers to codify our own learnings and best practices and make them available for customers. We are building massively scalable and globally distributed security systems to power next-generation services.

Inclusive Team Culture
Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon's culture of inclusion is reinforced within our 14 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.

Work/Life Balance
Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.

Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop and enable them to take on more complex tasks in the future.

Job responsibilities
* Rapidly design, prototype, and test many possible hypotheses in a high-ambiguity environment, making use of both quantitative and business judgment.
* Collaborate with software engineering teams to integrate successful experiments into large-scale, highly complex production services.
* Report results in a scientifically rigorous way.
* Interact with security engineers, product managers, and related domain experts to dive deep into the types of challenges that we need innovative solutions for.
US, NY, New York
Job summary
Amazon Advertising is one of Amazon's fastest growing and most profitable businesses, responsible for defining and delivering a collection of advertising products that drive discovery and sales. Our products and solutions are strategically important in enabling our Retail and Marketplace businesses to drive long-term growth. We deliver billions of ad impressions and millions of clicks and break fresh ground in product and technical innovations every day!

The Advertising Identity Program (AIP) identifies traffic across all devices, websites, and apps. We maintain identity graphs that enable us to identify custom audiences and/or Amazon users/sessions across devices and browsers. We enable use cases for Amazon DSP like targeting, audience matching, re-marketing, attribution, frequency capping, traffic quality, and regulatory and privacy compliance.

As a Data Scientist on this team you will:
• Develop Data Science solutions from beginning to end.
• Deliver with independence on challenging large-scale problems with complexity and ambiguity.
• Write code (Python, R, Scala, SQL, etc.) to obtain, manipulate, and analyze data.
• Build machine learning and statistical models to solve specific business problems.
• Retrieve, synthesize, and present critical data in a format that is immediately useful for answering specific questions or improving system performance.
• Analyze historical data to identify trends and support optimal decision making.
• Apply statistical and machine learning knowledge to specific business problems and data.
• Formalize assumptions about how our systems should work, create statistical definitions of outliers, and develop methods to systematically identify outliers. Work out why such examples are outliers and determine whether any action is needed.
• Given anecdotes about anomalies, or automated scripts that flag them, deep dive to explain why they happen and identify fixes.
• Build decision-making models and propose effective solutions for the business problems you define.
• Conduct written and verbal presentations to share insights with audiences of varying levels of technical sophistication.

Why you will love this opportunity: Amazon has invested heavily in building a world-class advertising business. This team defines and delivers a collection of advertising products that drive discovery and sales. Our solutions generate billions in revenue and drive long-term growth for Amazon's Retail and Marketplace businesses. We deliver billions of ad impressions and millions of clicks daily, and break fresh ground to create world-class products. We are a highly motivated, collaborative, and fun-loving team with an entrepreneurial spirit and a broad mandate to experiment and innovate.

Impact and Career Growth: You will invent new experiences and influence customer-facing shopping experiences to help suppliers grow their retail business, along with the auction dynamics that leverage native advertising; this is your opportunity to work within the fastest-growing businesses across all of Amazon! You will define a long-term science vision for our advertising business, driven by our customers' needs, translating that direction into specific plans for research and applied scientists, as well as engineering and product teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding.

Team video ~ https://youtu.be/zD_6Lzw8raE

A day in the life
You will work collaboratively both within and outside of the Advertising team, spending most of your time architecting, designing, and coding and the rest in collaboration and discussion. Since we are now working remotely, we also like to have fun by taking time to celebrate each other and to hold happy hours.

About the team
Joining this team, you'll experience the benefits of working in a dynamic, fast-paced environment while leveraging the resources of Amazon.com (AMZN), one of the world's leading Internet companies. We provide a highly customer-centric, team-oriented environment. The AdTech Identity Program (AIP) team is spearheading innovation for the existential challenge in AdTech today: the need to reliably establish customer identity in an ID-less world without 3P cookies or device identifiers.
CA, BC, Vancouver
Job summary
The Amazon Brand Protection organization focuses on building trust with all brands by accurately representing and completely protecting their brands on Amazon. We strive to be the most trusted thought leader in the space and to ensure that public perception mirrors the trustworthy experience we deliver. The Brand Protection machine learning (ML) team is responsible for providing data-driven, long-term strategies and solutions. The team develops state-of-the-art ML algorithms to ensure that each product is brand authentic and that there is no abuse of or infringement on any brand. The ML team faces the challenge of working with huge amounts of structured and unstructured data, including images and product descriptions, and of developing ML solutions that can scale to protect millions of brands and billions of products worldwide. The team must also update our ML systems quickly to stay ahead of bad actors who constantly circumvent our algorithms. If you are excited by these responsibilities and challenges, and if you love data and machine learning, we have a position for you.

We are looking for a strong manager to manage the ML science team in Vancouver. As the manager, you will hire and develop ML talent. You will design long-term plans and define SMART goals. You will build roadmaps to achieve the team's vision and goals, lead the team's ML direction, and lead roadmap and plan execution. You will be able to deep dive and guide your team both in direction and in detail. You understand ML cycles and advocate ML best practices, and you will keep abreast of new ML technologies.

Major responsibilities:
• Work with business/tech teams to identify opportunities; design solutions; implement and monitor ML models
• Understand business challenges by analyzing data and customer feedback
• Guide team members on model-building strategies and on model experimentation, implementation, measurement, and continuous improvement
• Build and manage team roadmaps
• Create long-term plans to address complicated business problems at scale using ML
• Deep dive to provide business insights
• Create business and analytics reports and present them to senior management teams
• Lead research and implement novel machine learning and statistical approaches
US, WA, Bellevue
Job summary
Are you passionate about leveraging your data science and machine learning skills to make an impact at scale? Do you enjoy developing innovative algorithms, optimization, and predictive models to generate recommendations that automated systems will use to drive hundreds of millions of dollars of impact on Amazon Retail's cash flow? If these questions get you excited, we definitely want to hear from you.

The Strategic Sourcing team, part of the Amazon Supply Chain Optimization and Technology organization, is seeking an experienced and motivated Data Science leader. The Strategic Sourcing team owns systems that are designed to (1) reduce end-to-end costs from the inbound supply chain and (2) improve vendor performance. Some of the key decisions these systems make: when and whether we should source a product (e.g., is the product obsolete or temporarily unavailable); from which vendor and at what cost we should source an ASIN; what the ideal supply chain setup (e.g., Pallet, Truckload, Vendor Initiated PO, etc.) is for an ASIN/vendor; when a vendor should ship/deliver inventory to Amazon FCs; which inbound lanes (vendor warehouse to Amazon FC) should have pre-allocated transportation, and with how many shipments; and when we should penalize vendors for defects/infractions through chargebacks, and by how much. Together, these decisions and systems ensure that Amazon's inventory needs are met on time and in the most efficient way. We develop sophisticated algorithms that involve learning from large amounts of data from diverse sources such as vendors, transportation carriers, and Amazon warehouses.

Key job responsibilities
As the Data Science Senior Manager on this team, you will:
• Lead a team of scientists in solving science problems with a high degree of complexity and ambiguity
• Develop science roadmaps, run annual planning, and foster cross-team collaboration to execute complex projects
• Perform hands-on data analysis, build machine-learning models, run regular A/B tests, and communicate the impact to senior management
• Hire and develop top talent, and provide technical and career development guidance to scientists and engineers in the organization
• Analyze historical data to identify trends and support optimal decision making
• Apply statistical and machine learning knowledge to specific business problems and data
• Formalize assumptions about how our systems should work, create statistical definitions of outliers, and develop methods to systematically identify outliers; work out why such examples are outliers and determine whether any action is needed
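To make the outlier-formalization responsibility above concrete, here is a minimal illustrative sketch of one such "statistical definition of outliers" (a simple z-score rule). The data, function name, and threshold are hypothetical examples, not part of any Amazon system; production definitions would be tuned to the specific business metric.

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Flag points whose absolute z-score exceeds a threshold.

    A deliberately simple statistical definition of "outlier";
    robust or model-based definitions are common in practice.
    """
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [x for x in values if abs(x - mean) / stdev > threshold]

# Hypothetical example: a spike in otherwise stable daily costs
daily_costs = [100, 102, 98, 101, 99, 103, 100, 500]
print(zscore_outliers(daily_costs, threshold=2.0))  # flags the 500 spike
```

Once a definition like this is formalized, the follow-up work described in the listing — explaining why a flagged point is an outlier and deciding whether action is needed — can be done consistently across systems.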
CA, ON, Toronto
Job summary
The Customer Behavior Analytics (CBA) organization owns Amazon's insights pipeline, from data collection to deep analytics. We aspire to be the place where Amazon teams come for answers: a trusted source for data and insights that empower our systems and business leaders to make better decisions. Our outputs shape Amazon's marketing teams' decisions and thus how Amazon customers see, use, and value their experience.

The mission of the Campaign Measurement and Optimization (CMO) team within CBA is to make Amazon's marketing the most measurably effective in the world. Our long-term objective is to measure the incremental impact of all Amazon's marketing investments on consumer perceptions, actions, and sales. This requires measuring Amazon's marketing comparably and consistently across channels, business teams, and countries using a comprehensive approach that integrates all paid, owned, and earned marketing activity. As the experts on marketing performance, we will lead the Amazon worldwide marketing community by providing critical global insights that can power marketing best practices and tenets globally.

Are you passionate about deep learning, causal inference, and big data systems? Interested in building new state-of-the-art measurement products at petabyte scale? Be part of a team of industry-leading experts that operates one of the largest big data and machine learning stacks at Amazon. Amazon is leveraging its highly unique data and applying the latest machine learning and big data technologies to change the way marketers optimize their advertising spend. Our campaign measurement and reporting systems apply these technologies to many billions of events in near real time.

You'll be one of the lead scientists tackling some of the hardest problems in advertising: measuring ads incrementality, providing estimated counterfactuals, and predicting the success of advertising strategies for omni-channel campaign measurement. Working with a cross-functional team of product managers, program managers, economists, and engineers, you will develop state-of-the-art causal learning, deep learning, and predictive techniques to help marketers understand the performance of their omni-channel campaigns and optimize their spend.

Some things you'll do in this role:
• Lead full life-cycle Data Science solutions from beginning to end.
• Deliver with independence on challenging large-scale problems with complexity and ambiguity.
• Write code (Python, R, Scala, SQL, etc.) to obtain, manipulate, and analyze data.
• Build machine learning and statistical models to solve specific business problems.
• Retrieve, synthesize, and present critical data in a format that is immediately useful for answering specific questions or improving system performance.
• Analyze historical data to identify trends and support optimal decision making.
• Apply statistical and machine learning knowledge to specific business problems and data.
• Formalize assumptions about how our systems should work, create statistical definitions of outliers, and develop methods to systematically identify outliers. Work out why such examples are outliers and determine whether any action is needed.
• Given anecdotes about anomalies, or automated scripts that flag them, deep dive to explain why they happen and identify fixes.
• Build decision-making models and propose effective solutions for the business problems you define.
• Conduct written and verbal presentations to share insights with audiences of varying levels of technical sophistication.

Impact and Career Growth: You will invent solutions that can make billion-dollar impact for Amazon as an advertiser. You will define a long-term science vision for our business, driven by our customers' needs, translating that direction into specific plans for research and applied scientists, as well as engineering and product teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding.

This position is based in Irvine, San Francisco, Sunnyvale, San Jose, or Seattle.

Key job responsibilities
• Dive deep into petabyte-scale data to drive insights, and identify machine-learning modeling gaps and business opportunities
• Establish scalable, efficient, automated processes for large-scale data analysis
• Run regular A/B experiments, gather data, and perform statistical analysis
• Work with scientists, engineers, and product partners to develop new machine learning approaches and monetization strategies
• Conduct written and verbal presentations to share insights and recommendations with audiences of varying levels of technical sophistication