Customer-obsessed science
Research areas
- May 15, 2026 · 5 min read: A new scaling law that relates particular architectural choices to loss helps identify models that improve throughput by up to 47% with no loss of accuracy.
- May 14, 2026 · 16 min read
- April 15, 2026 · 8 min read
Featured news
- NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World, 2023: Offline reinforcement learning (RL) has emerged as a promising approach to address real-world challenges where online interactions with the environment are limited, risky, or costly. Although recent advancements produce high-quality policies from offline data, there is currently no systematic methodology for continuing to improve them without resorting to online fine-tuning. This paper proposes to repurpose…
- Predict, refine, synthesize: Self-guiding diffusion models for probabilistic time series forecasting. NeurIPS 2023: Diffusion models have achieved state-of-the-art performance in generative modeling tasks across various domains. Prior works on time series diffusion models have primarily focused on developing conditional models tailored to specific forecasting or imputation tasks. In this work, we explore the potential of task-agnostic, unconditional diffusion models for several time series applications. We propose TSDiff…
- NeurIPS 2023 Workshop on Machine Learning for Structural Biology, 2023: Molecular docking is a critical process in structure-based drug discovery to predict the binding conformations between a protein and a small-molecule ligand. Recently, deep-learning-based methods have achieved promising performance over traditional physics-based search-and-score methods. Despite their success in accurately predicting the binding poses of small-molecule ligands, modeling of protein flexibility…
- NeurIPS 2023: The main challenge of offline reinforcement learning, where data is limited, arises from a sequence of counterfactual reasoning dilemmas within the realm of potential actions: what if we were to choose a different course of action? These circumstances frequently give rise to extrapolation errors, which tend to accumulate exponentially with the problem horizon. Hence, it becomes crucial to acknowledge that…
- NeurIPS 2023 Workshop on I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models, 2023: Numerous natural language processing (NLP) tasks require precisely labeled data to ensure effective model training and achieve optimal performance. However, data annotation is marked by substantial costs and time requirements, especially when it calls for specialized domain expertise or annotating a large number of samples. In this study, we investigate the feasibility of employing large language models (LLMs…
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.