Amazon and Virginia Tech announce new fellowship, faculty research award recipients
Two PhD students and five professors will receive funding to conduct research toward improving the robustness and efficiency of AI systems.
In October of last year, Amazon and Virginia Tech announced the inaugural class of fellowship and faculty award recipients as part of the Amazon–Virginia Tech Initiative for Efficient and Robust Machine Learning.
The initiative, launched in March 2022, focuses on research in efficient and robust machine learning. It supports research led by Virginia Tech faculty members and gives doctoral students in the College of Engineering who are conducting artificial-intelligence (AI) and machine-learning (ML) research the opportunity to apply for Amazon fellowships.
Amazon and Virginia Tech today announced the 2023–2024 class of academic fellows and faculty research award recipients as part of the joint initiative.
“Our sincere appreciation to the Virginia Tech team for their unwavering dedication to excellence in both research and education, as reflected in the impactful research and significant progress made during the first year of our partnership as well as the high-quality proposals and fellowship applications we have received this year,” said Reza Ghanadan, a senior principal research scientist in Alexa AI who leads the initiative at Amazon. “I look forward to continuing our collaborations with the esteemed faculty and students at Virginia Tech to advance our shared goal of ensuring the robustness of machine learning systems while creating impactful AI applications across diverse domains enriching our society.”
“We are very pleased to continue our partnership with Amazon to encourage and support our faculty and student researchers focused on finding solutions to important and worldwide industry-focused problems across a range of machine learning applications,” said Naren Ramakrishnan, the Thomas L. Phillips Professor of Engineering and director of the Amazon–Virginia Tech initiative. “As we move into our second year, we are expanding into additional areas of machine learning such as robust large-language-model deployment, combining large language models with reasoning capabilities and multimodal interfaces.”
The two fellows and five faculty members selected this year will each receive funding to conduct research projects across multiple disciplines at Virginia Tech. The recipients and their areas of research follow.
Fellowship recipients
Minsu Kim is pursuing a PhD in electrical and computer engineering under Walid Saad, professor of electrical and computer engineering. Kim’s current research focus is building green, sustainable, and robust federated-learning solutions with tangible benefits for AI-embedded products that use federated learning and wireless communications. The work takes a holistic view of the lifecycle of federated-learning algorithms, from data acquisition through algorithm and model design, training, and inference/retraining.
Ying Shen is pursuing a PhD in computer science under Lifu Huang and Ismini Lourentzou, both assistant professors in the department of computer science. Shen’s research interests lie in natural-language processing (NLP) and multimodal learning. Shen is particularly enthusiastic about building more human-like interactive agents that better understand, interpret, and reason about the world around us.
Faculty research award recipients
Lifu Huang, assistant professor, department of computer science, “Semi-parametric open domain conversation generation and evaluation with multidimensional judgements from instruction tuning”
“The goal of this project is two-fold. First, it will develop an innovative, semi-parametric conversational framework that augments a large parametric conversation-generation model with a large collection of information sources, so that desired knowledge is dynamically retrieved and integrated into the generative model, improving the adaptivity and scalability of the conversational agent on open-domain topics. Second, it will simulate fine-grained human judgements of machine-generated responses along multiple dimensions by leveraging instruction tuning on large-scale, pre-trained models. These pseudo-human judgements can then be used to train a lightweight, multidimensional conversation evaluator or to provide feedback to conversation generation.”
Ruoxi Jia, assistant professor, department of electrical and computer engineering, “Cutting to the chase: Strategic data acquisition and pruning for efficient and robust machine learning”
“This project focuses on developing strategic data acquisition and pruning techniques that enhance training efficiency, while addressing robustness against sub-optimal data quality by creating targeted data acquisition strategies that optimize the collection of the most valuable and informative data for a specific task; designing data pruning methods to eliminate redundant and irrelevant data points; and assessing the impact of these approaches on computational costs, model performance, and robustness. When successfully completed, the new techniques will optimize the data-for-AI pipeline by accelerating the development of accurate and responsible machine learning models across various applications.”
Ming Jin, assistant professor, department of electrical and computer engineering, “Safe reinforcement learning for interactive systems with stakeholder alignment”
“This project applies a unique approach to the challenge of designing safe and aligned interactive systems. The research will develop a novel framework for stakeholder alignment using reinforcement learning and game theory, and its outcomes will have important implications for a range of applications, particularly recommender systems.”
Ismini Lourentzou, assistant professor, department of computer science, “Diffusion-based scene-graph enabled embodied AI agents”
“The objective of this research is to design embodied AI agents capable of tracking long-term changes in the environment, modeling how physical attributes of multiple objects transform in response to agents’ actions. The project will also assess how agents adapt to human preferences and feedback by learning multimodal reward functions from sub-optimal demonstrations. The outcome of the proposed work will be more intuitive and attuned embodied task assistants, enhancing their ability to interact with the world in a natural and responsive manner.”
Xuan Wang, assistant professor, department of computer science, “Fact-checking in open-domain dialogue generation through self-talk”
“There is growing concern about the accuracy and truthfulness of information provided by open-domain dialogue-generation systems, such as chatbots and virtual assistants, particularly in healthcare and finance, where incorrect information can have serious consequences. This project proposes a new fact-checking approach for open-domain dialogue generation using language-model-based self-talk, which automatically validates generated responses and provides supporting evidence.”