Customer-obsessed science
November 20, 2025 (4 min read). A new evaluation pipeline called FiSCo uncovers hidden biases and offers an assessment framework that evolves alongside language models.
Featured news
Predict, refine, synthesize: Self-guiding diffusion models for probabilistic time series forecasting (NeurIPS 2023). Diffusion models have achieved state-of-the-art performance in generative modeling tasks across various domains. Prior works on time series diffusion models have primarily focused on developing conditional models tailored to specific forecasting or imputation tasks. In this work, we explore the potential of task-agnostic, unconditional diffusion models for several time series applications. We propose TSDiff…
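The abstract is cut off at the proposal of TSDiff, so the following is only a minimal sketch of the general idea it hints at: conditioning an unconditionally trained diffusion model on observed context at sampling time through a guidance term. The denoiser API (`eps_model`), the noise schedule, and the guidance scale are placeholder assumptions, not details taken from the paper.

```python
# Hedged sketch: steering an *unconditionally* trained diffusion model toward
# observed context values during reverse diffusion. Illustration only; the
# denoiser, schedule, and guidance scale are assumptions, not TSDiff's code.
import torch

T = 100                                     # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # toy linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def guided_sample(eps_model, x_obs, obs_mask, guidance_scale=1.0):
    """Reverse diffusion for one series, nudged toward observed values.

    eps_model(x_t, t) -> predicted noise, same shape as x_t (placeholder API).
    x_obs: (L,) observed series values (zeros where unobserved).
    obs_mask: (L,) 1.0 where a value is observed (context), 0.0 where it must
              be generated (e.g., the forecast horizon).
    """
    x_t = torch.randn_like(x_obs)
    for t in reversed(range(T)):
        x_t = x_t.detach().requires_grad_(True)
        eps = eps_model(x_t, t)
        # Estimate x_0 from the noisy sample (standard DDPM identity).
        x0_hat = (x_t - torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alpha_bars[t])
        # Guidance: gradient of the squared error on the observed part only.
        loss = ((obs_mask * (x0_hat - x_obs)) ** 2).sum()
        grad = torch.autograd.grad(loss, x_t)[0]
        # Ancestral DDPM update, shifted by the guidance gradient.
        mean = (x_t - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        mean = mean - guidance_scale * grad
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = (mean + torch.sqrt(betas[t]) * noise).detach()
    return x_t
```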
NeurIPS 2023 Workshop on Machine Learning for Structural Biology (2023). Molecular docking is a critical process in structure-based drug discovery to predict the binding conformations between a protein and a small-molecule ligand. Recently, deep learning-based methods have achieved promising performance over traditional physics-based search-and-score methods. Despite their success in accurately predicting the binding poses of small-molecule ligands, modeling of protein flexibility…
NeurIPS 2023. The main challenge of offline reinforcement learning, where data is limited, arises from a sequence of counterfactual reasoning dilemmas within the realm of potential actions: What if we were to choose a different course of action? These circumstances frequently give rise to extrapolation errors, which tend to accumulate exponentially with the problem horizon. Hence, it becomes crucial to acknowledge that…
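As a rough illustration of the accumulation effect the abstract describes (not anything from the paper itself), the toy loop below compounds a small per-backup overestimation through bootstrapped value updates; the discount, error size, and amplification factor are invented numbers chosen only to show the qualitative trend.

```python
# Hedged toy example: a small per-step overestimation on out-of-distribution
# actions feeds back through bootstrapped Bellman backups and compounds with
# the horizon. All constants below are assumptions for illustration.
gamma = 0.99          # discount factor (assumed)
eps = 0.1             # per-backup overestimation from an OOD action (assumed)
amplification = 1.05  # assumed factor by which the max operator inflates errors

bias = 0.0
for step in range(1, 51):                      # horizon of 50 backups
    bias = gamma * amplification * bias + eps  # error fed back via bootstrapping
    if step % 10 == 0:
        print(f"after {step:2d} backups, accumulated value bias ~ {bias:.2f}")
```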
NeurIPS 2023 Workshop on I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models (2023). Numerous Natural Language Processing (NLP) tasks require precisely labeled data to ensure effective model training and achieve optimal performance. However, data annotation is marked by substantial costs and time requirements, especially when requiring specialized domain expertise or annotating a large number of samples. In this study, we investigate the feasibility of employing large language models (LLMs)…
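To make the workflow concrete, here is a hedged sketch of LLM-based annotation measured against human gold labels; the `call_llm` stub, prompt, label set, and data are hypothetical placeholders and do not reflect the study's actual setup.

```python
# Hedged sketch of the general workflow: use an LLM as a data annotator and
# measure agreement with human gold labels. `call_llm` is a hypothetical
# placeholder, not a real API; swap in an actual client in practice.
from collections import Counter

LABELS = ["positive", "negative", "neutral"]    # assumed label set

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; stubbed so the sketch runs end to end."""
    return "neutral"

def annotate(text: str) -> str:
    prompt = (
        "Classify the sentiment of the following review as one of "
        f"{LABELS}.\nReview: {text}\nLabel:"
    )
    answer = call_llm(prompt).strip().lower()
    return answer if answer in LABELS else "neutral"   # simple fallback

def agreement(samples):
    """samples: list of (text, gold_label) pairs."""
    hits = sum(annotate(text) == gold for text, gold in samples)
    return hits / len(samples)

if __name__ == "__main__":
    data = [("Great battery life.", "positive"), ("Arrived broken.", "negative")]
    print(f"LLM/gold agreement: {agreement(data):.0%}")
    print(Counter(annotate(text) for text, _ in data))
```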
NeurIPS 2023 Workshop on Federated Learning in the Age of Foundation Models (2023). Edge device participation in federated learning (FL) has been typically studied under the lens of device-server communication (e.g., device dropout) and assumes an undying desire from edge devices to participate in FL. As a result, current FL frameworks are flawed when implemented in real-world settings, with many encountering the free-rider problem. In a step to push FL towards realistic settings, we…
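For context, the sketch below shows plain federated averaging (FedAvg) with a per-round participation probability as a stand-in for device dropout; it is a generic baseline under toy assumptions, not the method proposed in the paper.

```python
# Hedged sketch: standard FedAvg with partial device participation, to make
# the setting concrete. Client data, the linear model, and the participation
# probability are toy assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, DIM, ROUNDS = 20, 5, 30
PARTICIPATION_PROB = 0.5                       # models devices dropping out

# Toy local datasets: each client holds noisy linear-regression data.
true_w = rng.normal(size=DIM)
clients = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(50, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """A few steps of local gradient descent on squared error."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(DIM)
for rnd in range(ROUNDS):
    # Only a random subset of devices participates in this round.
    active = [c for c in clients if rng.random() < PARTICIPATION_PROB]
    if not active:
        continue
    updates = [local_update(w_global.copy(), X, y) for X, y in active]
    w_global = np.mean(updates, axis=0)        # server averages the local models

print("error vs. true weights:", np.linalg.norm(w_global - true_w))
```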
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.