Customer-obsessed science
Research areas
September 26, 2025: To transform scientific domains, foundation models will require physical-constraint satisfaction, uncertainty quantification, and specialized forecasting techniques that overcome data scarcity while maintaining scientific rigor.
Featured news
NeurIPS 2023: We focus on the task of approximating the optimal value function in deep reinforcement learning. This iterative process consists of solving a sequence of optimization problems in which the loss function changes from iteration to iteration. The common approach to solving this sequence of problems is to employ modern variants of the stochastic gradient descent algorithm, such as Adam. These optimizers maintain their own…
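The setup this teaser describes, reusing one Adam optimizer across a sequence of changing regression problems, can be sketched as follows. This is a minimal illustration under assumptions, not the paper's method: the toy value network, the inner-loop step counts, and the placeholder Bellman-style target update are all invented for demonstration.

```python
# Minimal sketch (not the paper's method): fitted value iteration in which a single
# Adam optimizer is reused across iterations, so its moment estimates carry over
# even though the regression target (and hence the loss) changes each iteration.
import torch
import torch.nn as nn

value_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)  # state persists across iterations

states = torch.randn(256, 4)    # toy batch of states
targets = torch.zeros(256, 1)   # initial regression targets

for iteration in range(10):     # each iteration defines a new optimization problem
    for _ in range(50):         # inner gradient steps on the current loss
        loss = nn.functional.mse_loss(value_net(states), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Recompute targets from the current value estimate (placeholder Bellman-style
    # backup), which changes the loss surface that Adam's accumulated statistics
    # were adapted to.
    with torch.no_grad():
        targets = 0.9 * value_net(states) + 0.01 * torch.randn_like(targets)
```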
Robotic Computing 2023: Home robots operate in diverse and dynamic environments, delivering a range of functions that enhance utility. Many of these functions span extended periods, from weeks to months, typically improving through observations and interactions. Efficient development and validation of these functions necessitate simulations that can run faster than real time. However, many current robot simulators focus on high-fidelity…
ACM Transactions on Architecture and Code Optimization, 2023: Low-precision computation has emerged as one of the most effective techniques for accelerating convolutional neural networks and has garnered widespread support on modern hardware. Despite its effectiveness, however, low-precision computation has not been commonly applied to fast convolutions, such as the Winograd algorithm, due to numerical issues. In this paper, we…
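The numerical issues the teaser mentions stem largely from the Winograd transforms themselves, which enlarge the dynamic range of intermediate values before the elementwise multiply, so naive quantization loses accuracy. Below is a minimal float32 sketch of the standard 1-D Winograd F(2,3) minimal-filtering algorithm (transform matrices in the usual Lavin-and-Gray form); it is illustrative only and not the paper's low-precision scheme.

```python
# Minimal sketch of 1-D Winograd F(2,3): two outputs of a 3-tap correlation
# computed with 4 multiplications, checked against the direct computation.
import numpy as np

# Transform matrices for F(2,3)
BT = np.array([[1, 0, -1,  0],
               [0, 1,  1,  0],
               [0, -1, 1,  0],
               [0, 1,  0, -1]], dtype=np.float32)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]], dtype=np.float32)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float32)

d = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)  # input tile (4 samples)
g = np.array([0.5, 0.25, -1.0], dtype=np.float32)     # 3-tap filter

winograd = AT @ ((G @ g) * (BT @ d))   # elementwise product in the transform domain
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd, direct)  # should agree up to floating-point rounding
```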
NeurIPS 2023: Spoken language understanding (SLU) systems often exhibit suboptimal performance in processing atypical speech, typically caused by neurological conditions and motor impairments. Recent advancements in Text-to-Speech (TTS) synthesis-based augmentation for fairer SLU have struggled to accurately capture the unique vocal characteristics of atypical speakers, largely due to insufficient data. To address…
NeurIPS 2023 Workshop on I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models, 2023: With increasing scale in model and dataset size, the training of deep neural networks becomes a massive computational burden. One approach to speeding up the training process is Selective Backprop. In this approach, we perform a forward pass to obtain a loss value for each data point in a minibatch. The backward pass is then restricted to a subset of that minibatch, prioritizing high-loss examples. We build…
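The selection step this teaser describes can be sketched in a few lines. The snippet below is a simplified top-k variant of Selective Backprop with toy placeholders (the model, minibatch, and keep_fraction are assumptions); the published method samples examples probabilistically by loss rather than taking a strict top-k, so treat this as an illustration of the idea, not the authors' implementation.

```python
# Sketch of Selective Backprop: forward the whole minibatch to score per-example
# losses, then backpropagate only through the highest-loss examples.
import torch
import torch.nn as nn

model = nn.Linear(20, 10)                           # toy classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss(reduction="none")   # keep per-example losses

inputs = torch.randn(128, 20)                       # toy minibatch
labels = torch.randint(0, 10, (128,))
keep_fraction = 0.25                                 # backprop through the hardest 25%

# Forward pass over the full minibatch without building a graph (selection only).
with torch.no_grad():
    per_example_loss = criterion(model(inputs), labels)
k = max(1, int(keep_fraction * inputs.size(0)))
selected = per_example_loss.topk(k).indices

# Second forward + backward pass restricted to the selected high-loss subset.
loss = criterion(model(inputs[selected]), labels[selected]).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```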
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.