Customer-obsessed science
Research areas
- May 15, 2026 · 5 min read: A new scaling law that relates particular architectural choices to loss helps identify models that improve throughput by up to 47% with no loss of accuracy.
- May 14, 2026 · 16 min read
- April 15, 2026 · 8 min read
Featured news
- Robotic Computing 2023: Home robots operate in diverse and dynamic environments, delivering a range of functions that enhance utility. Many of these functions span extended periods, from weeks to months, typically improving through observations and interactions. Efficient development and validation of these functions necessitate simulations that can run faster than real time. However, many current robot simulators focus on high-fidelity…
- ACM Transactions on Architecture and Code Optimization, 2023: Low-precision computation has emerged as one of the most effective techniques for accelerating convolutional neural networks and has garnered widespread support on modern hardware. Despite its effectiveness, low-precision computation has not been commonly applied to fast convolutions, such as the Winograd algorithm, due to numerical issues. In this paper, we…
- NeurIPS 2023: Spoken language understanding (SLU) systems often exhibit suboptimal performance in processing atypical speech, typically caused by neurological conditions and motor impairments. Recent advancements in Text-to-Speech (TTS) synthesis-based augmentation for fairer SLU have struggled to accurately capture the unique vocal characteristics of atypical speakers, largely due to insufficient data. To address…
- NeurIPS 2023 Workshop on I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models, 2023: With increasing scale in model and dataset size, the training of deep neural networks becomes a massive computational burden. One approach to speeding up training is Selective Backprop: a forward pass obtains a loss value for each data point in a minibatch, and the backward pass is then restricted to a subset of that minibatch, prioritizing high-loss examples. We build…
- NeurIPS 2023: Most linear experimental design problems assume homogeneous variance, even though heteroskedastic noise is present in many realistic settings. Let a learner have access to a finite set of measurement vectors X ⊂ ℝ^d that can be probed to receive noisy linear responses of the form y = x⊤θ* + η. Here θ* ∈ ℝ^d is an unknown parameter vector, and η is independent mean-zero σ_x²-strictly-sub-Gaussian noise defined…
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.