Customer-obsessed science
Research areas
- December 1, 2025 (8 min read): "Network language models" will coordinate complex interactions among intelligent components, computational infrastructure, access points, data centers, and more.
- November 20, 2025 (4 min read)
- October 20, 2025 (4 min read)
- October 14, 2025 (7 min read)
Featured news
- ACL 2023, 2022: Code generation models have achieved impressive performance. However, they tend to be brittle, as slight edits to a prompt can lead to very different generations; these robustness properties, critical for user experience when deployed in real-life applications, are not well understood. Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation … A toy prompt-perturbation harness in this spirit appears after this list.
- Nature Communications, 2022: Light is a powerful tool for controlling mechanical motion, as shown by numerous applications in the field of cavity optomechanics. Recently, small-scale optomechanical circuits, connecting a few optical and mechanical modes, have been demonstrated in an ongoing push towards multi-mode on-chip optomechanical systems. An ambitious goal driving this trend is to produce topologically protected phonon transport …
- PRX Quantum, 2022: We consider quantum error-correcting subsystem codes whose gauge generators realize a translation-invariant, free-fermion-solvable spin model. In this setting, errors are suppressed by a Hamiltonian whose terms are the gauge generators of the code and whose exact spectrum and eigenstates can be found via a generalized Jordan-Wigner transformation. Such solutions are characterized by the frustration graph … For reference, the standard one-dimensional transformation is written out after this list.
- NAACL 2022, 2022: There is an increasing trend toward using neural methods for dialogue-model evaluation. The lack of a framework for investigating these metrics can cause dialogue models to reflect the metrics' biases and lead to unforeseen problems during interactions. In this work, we propose an adversarial test suite which generates problematic variations of various dialogue aspects, e.g. logical entailment, using automatic heuristics. A toy version of this style of check appears after this list.
- NeurIPS 2022 Workshop on Trustworthy and Socially Responsible Machine Learning (TSRML), ICML 2022 Workshop on the Theory and Practice of Differential Privacy, 2022: Per-example gradient clipping is a key algorithmic step that enables practical differentially private (DP) training for deep learning models. The choice of clipping threshold R, however, is vital for achieving high accuracy under DP. We propose an easy-to-use replacement, called automatic clipping, that eliminates the need to tune R for any DP optimizer, including DP-SGD, DP-Adam, DP-LAMB, and many others … A sketch contrasting the two per-example scaling rules follows this list.
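The ACL 2023 item above concerns the robustness of code-generation models to small prompt edits. The sketch below is not the paper's benchmark or perturbation set; it is a minimal, hypothetical harness (the `generate` stand-in and the specific perturbations are assumptions) illustrating the general idea of comparing a model's output on a prompt against outputs on lightly perturbed copies.

```python
# Hypothetical robustness probe for a code-generation model (not the paper's
# benchmark). `generate` is a stand-in so the sketch runs end to end; swap in
# a real code LLM call to use it in practice.
import difflib

def perturb(prompt: str) -> list[str]:
    """A few small, meaning-preserving edits to the prompt."""
    return [
        prompt.replace("numbers", "nums"),   # docstring/identifier paraphrase
        prompt.replace("    ", "\t"),        # whitespace change
        prompt + "\n# keep it simple",       # trailing comment
    ]

def generate(prompt: str) -> str:
    """Stand-in 'model': returns a fixed completion. Replace with a real model."""
    return "def f(xs):\n    return sorted(xs)"

def robustness_check(prompt: str) -> None:
    """Compare the baseline generation against generations on perturbed prompts."""
    base = generate(prompt)
    for variant in perturb(prompt):
        out = generate(variant)
        sim = difflib.SequenceMatcher(None, base, out).ratio()
        print(f"similarity to unperturbed output: {sim:.2f}")

robustness_check('def f(numbers):\n    """Return the numbers in ascending order."""\n')
```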
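The PRX Quantum item above hinges on solving spin models via a generalized Jordan-Wigner transformation. As a point of reference only, the block below writes out the standard one-dimensional transformation in one common sign convention; the paper's generalized version, determined by the frustration graph of the gauge generators, is not reproduced here.

```latex
% Standard 1D Jordan-Wigner transformation (one common convention), shown only
% to illustrate the idea behind free-fermion solvability.
\begin{align}
  c_j &= \Big(\prod_{k<j} Z_k\Big)\,\frac{X_j - i Y_j}{2}, &
  c_j^\dagger &= \Big(\prod_{k<j} Z_k\Big)\,\frac{X_j + i Y_j}{2}, \\
  \{c_j, c_k^\dagger\} &= \delta_{jk}, &
  \{c_j, c_k\} &= 0, \\
  Z_j &= 2\, c_j^\dagger c_j - 1.
\end{align}
% Spin Hamiltonians with suitable structure become quadratic in the c_j, so
% their spectra follow from diagonalizing a single-particle matrix.
```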
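The NAACL 2022 item above proposes heuristic, adversarial variations for stress-testing learned dialogue-evaluation metrics. The toy sketch below illustrates that style of check but is not the paper's test suite: `negate`, `off_topic`, and the word-overlap "metric" are invented here purely so the example is self-contained and runnable.

```python
# Hypothetical sketch (not the paper's test suite): perturb a dialogue response
# with simple heuristics and check whether an automatic metric penalizes the
# clearly degraded variants. `metric` is a stand-in for any learned evaluator.
import re
from typing import Callable

def negate(response: str) -> str:
    """Crude logical flip: turn the first 'is' into 'is not'."""
    return re.sub(r"\bis\b", "is not", response, count=1)

def off_topic(response: str) -> str:
    """Replace the response with an unrelated utterance."""
    return "I enjoy collecting vintage stamps."

def stress_test(metric: Callable[[str, str], float], context: str, response: str) -> None:
    """Report whether the metric's score drops on degraded responses."""
    base = metric(context, response)
    for name, fn in [("negated", negate), ("off-topic", off_topic)]:
        degraded = metric(context, fn(response))
        flag = "OK" if degraded < base else "metric did not penalize"
        print(f"{name}: {base:.2f} -> {degraded:.2f}  ({flag})")

if __name__ == "__main__":
    # Toy word-overlap 'metric' just to make the sketch run end to end.
    toy = lambda ctx, resp: len(set(ctx.split()) & set(resp.split())) / max(len(resp.split()), 1)
    stress_test(toy, "Is the museum open on Monday?", "Yes, the museum is open on Monday.")
```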
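The workshop item above replaces the tuned clipping threshold R with an "automatic" per-example rescaling. The sketch below contrasts the two scaling rules on plain NumPy vectors; the exact rescaling formula and the stability constant `gamma` are assumptions written from memory rather than taken from the paper, and the DP accounting (noise calibration, subsampling) is omitted.

```python
# Sketch of two per-example scaling rules for private gradient aggregation.
# Standard DP-SGD clips each per-example gradient to norm R (R must be tuned);
# the "automatic" variant rescales by 1/(||g|| + gamma), removing R entirely.
# The precise form of the automatic rule is an assumption here; see the paper
# for the actual algorithm and its privacy analysis.
import numpy as np

def clip_per_example(grads: np.ndarray, R: float) -> np.ndarray:
    """Scale each row so its norm is at most R (standard per-example clipping)."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    return grads * np.minimum(1.0, R / norms)

def automatic_clip(grads: np.ndarray, gamma: float = 0.01) -> np.ndarray:
    """Normalize each row to roughly unit norm; no threshold R to tune."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    return grads / (norms + gamma)

rng = np.random.default_rng(0)
g = rng.normal(size=(8, 5))                                 # 8 per-example gradients
print(np.linalg.norm(clip_per_example(g, R=1.0), axis=1))   # norms capped at R
print(np.linalg.norm(automatic_clip(g), axis=1))            # norms ~1, no R to tune
```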
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.