Customer-obsessed science
Research areas
- April 27, 2026 · 4 min read: A new framework provides a statistical method for estimating the likelihood of catastrophic failures in large language models in adversarial conversations.
Featured news
- NeurIPS 2023: We derive the first finite-time logarithmic Bayes regret upper bounds for Bayesian bandits. In Gaussian bandits, we obtain O(c_Δ log n) and O(c_h log² n) bounds for an upper confidence bound algorithm, where c_h and c_Δ are constants depending on the prior distribution and the gaps of random bandit instances sampled from it, respectively. The latter bound asymptotically matches the lower bound of Lai (1987).
- NeurIPS 2023: Membership inference attacks are designed to determine, using black-box access to trained models, whether a particular example was used in training or not. Membership inference can be formalized as a hypothesis-testing problem. The most effective existing attacks estimate the distribution of some test statistic (usually the model's confidence on the true label) on points that were (and were not) used in …
- NeurIPS 2023: We focus on the task of approximating the optimal value function in deep reinforcement learning. This iterative process comprises solving a sequence of optimization problems in which the loss function changes per iteration. The common approach to solving this sequence of problems is to employ modern variants of the stochastic gradient descent algorithm, such as Adam. These optimizers maintain their own …
- Robotic Computing 2023: Home robots operate in diverse and dynamic environments, delivering a range of functions that enhance utility. Many of these functions span extended periods, from weeks to months, typically improving through observations and interactions. Efficient development and validation of these functions necessitate simulations that can run faster than real time. However, many current robot simulators focus on high-fidelity …
- ACM Transactions on Architecture and Code Optimization, 2023: Low-precision computation has emerged as one of the most effective techniques for accelerating convolutional neural networks and has garnered widespread support on modern hardware. Despite its effectiveness in accelerating convolutional neural networks, low-precision computation has not been commonly applied to fast convolutions, such as the Winograd algorithm, due to numerical issues. In this paper, we …
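The membership-inference item above frames the attack as a hypothesis test on a test statistic such as the model's confidence on the true label. A minimal sketch of that idea, using synthetic confidence scores and a threshold calibrated on known non-members (all names and distributions here are illustrative assumptions, not from the paper):

```python
import numpy as np

# Illustrative sketch: members tend to receive higher confidence from the
# trained model than non-members. We calibrate a decision threshold on
# non-member confidences, then test new points against it.
rng = np.random.default_rng(0)
member_conf = rng.normal(0.9, 0.05, 1000)    # assumed member confidences
nonmember_conf = rng.normal(0.7, 0.10, 1000) # assumed non-member confidences

# Calibrate the threshold so that ~5% of non-members are falsely flagged.
threshold = np.quantile(nonmember_conf, 0.95)

# Decision rule: predict "member" when confidence exceeds the threshold.
tpr = (member_conf > threshold).mean()    # true-positive rate on members
fpr = (nonmember_conf > threshold).mean() # false-positive rate on non-members
print(f"threshold={threshold:.3f}, TPR={tpr:.2f}, FPR={fpr:.2f}")
```

Real attacks in this line of work estimate the full distributions of the statistic on in- and out-of-training points and apply a likelihood-ratio test rather than a single threshold; this sketch only shows the hypothesis-testing framing.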
Collaborations
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.