Customer-obsessed science


Research areas
- July 31, 2025: Using ensembles of agents to generate and refine interactions annotated with chains of thought improves performance on a battery of benchmarks by an average of 29%.
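The item above states the idea only at a high level, so here is a minimal, hypothetical sketch of what ensemble-based generation and refinement of chain-of-thought training data could look like. `call_agent` is an invented placeholder for any LLM call; nothing here is an API or method confirmed by the article.

```python
"""Minimal sketch: an ensemble of agents drafts chains of thought,
and a refiner agent merges them into one training annotation."""
import random

def call_agent(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for an LLM call; real agents would differ
    # by system prompt, temperature, or model.
    random.seed(seed + len(prompt))
    return f"Reasoning path {seed}: ... therefore the answer is {random.randint(0, 9)}."

def generate_cot_example(question: str, n_agents: int = 4) -> dict:
    # 1. Each agent in the ensemble drafts an answer with a chain of thought.
    drafts = [call_agent(question, seed=i) for i in range(n_agents)]
    # 2. A refiner agent sees all drafts and produces an improved annotation.
    refine_prompt = question + "\n\nCandidate reasoning:\n" + "\n".join(drafts)
    refined = call_agent(refine_prompt, seed=n_agents)
    # 3. The (question, refined chain of thought) pair becomes training data.
    return {"question": question, "chain_of_thought": refined}

print(generate_cot_example("What is 17 * 3?"))
```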
Featured news
- 2024: Audio-visual representations leverage information from both modalities to produce joint representations. Such representations have demonstrated their usefulness in a variety of tasks. However, both modalities incorporated in the learned model might not necessarily be present at all times during inference. In this work, we study whether and how we can make existing models, trained under pristine conditions, … (a modality-robustness sketch follows this list.)
- Extrapolative protein design is a crucial task for automated drug discovery: the goal is to design proteins with higher fitness than has been seen in training (e.g., higher stability or tighter binding affinity). The current state-of-the-art methods assume that one can safely steer protein design in the extrapolation region by learning from pairs alone. We hypothesize that (1) noisy pairs do not accurately … (a pairwise-ranking sketch follows this list.)
- 2024: It is well known that selecting samples with large losses/gradients can significantly reduce the number of training steps. However, the selection overhead is often too high to yield any meaningful gains in overall training time. In this work, we focus on the greedy approach of selecting samples with large approximate losses, instead of exact losses, in order to reduce the selection overhead. … (a loss-based selection sketch follows this list.)
- Large language models (LLMs) exhibit an excellent ability to understand human languages, but do they also understand their own language, which appears gibberish to us? In this work we delve into this question, aiming to uncover the mechanisms underlying such behavior in LLMs. We employ the Greedy Coordinate Gradient optimizer to craft prompts that compel LLMs to generate coherent responses from seemingly nonsensical … (a GCG sketch follows this list.)
- 2024: Large-model training is plagued by intense compute cost and limited hardware memory. A practical solution is low-precision representation, but it is troubled by loss of numerical accuracy and unstable training, rendering the model less useful. We argue that low-precision floating points can perform well provided the error is properly compensated at the critical locations in the training process. We propose … (an error-compensation sketch follows this list.)
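For the audio-visual item above: one common way to make a joint model tolerate a missing modality at inference time is modality dropout, randomly silencing one input stream during training. The abstract does not say which method the authors actually use, so treat this snippet, with its invented model and dimensions, purely as illustration.

```python
"""Sketch: modality dropout for an audio-visual fusion model."""
import torch
import torch.nn as nn

class AVFusion(nn.Module):
    def __init__(self, dim_a=64, dim_v=64, dim_h=128, n_classes=10):
        super().__init__()
        self.audio = nn.Linear(dim_a, dim_h)
        self.video = nn.Linear(dim_v, dim_h)
        self.head = nn.Linear(2 * dim_h, n_classes)

    def forward(self, a, v, p_drop=0.3):
        if self.training and torch.rand(()) < p_drop:
            # Randomly silence one modality so the joint representation
            # does not depend on both streams always being present.
            if torch.rand(()) < 0.5:
                a = torch.zeros_like(a)
            else:
                v = torch.zeros_like(v)
        ha, hv = self.audio(a).relu(), self.video(v).relu()
        return self.head(torch.cat([ha, hv], dim=-1))

model = AVFusion().train()
logits = model(torch.randn(8, 64), torch.randn(8, 64))  # one training-mode pass
```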
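For the protein-design item: "learning from pairs alone" is typically done with a Bradley-Terry-style pairwise ranking loss over a learned fitness model. The sketch below assumes precomputed sequence embeddings and an invented network; it illustrates the standard pairwise setup, not the paper's specific method.

```python
"""Sketch: fitting a fitness model from (better, worse) protein pairs."""
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical fitness predictor over 128-d sequence embeddings.
fitness = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def pairwise_loss(x_better, x_worse):
    # Bradley-Terry: P(better > worse) = sigmoid(f(better) - f(worse));
    # minimize the negative log of that probability.
    margin = fitness(x_better) - fitness(x_worse)
    return -F.logsigmoid(margin).mean()

x_hi, x_lo = torch.randn(32, 128), torch.randn(32, 128)  # paired embeddings
loss = pairwise_loss(x_hi, x_lo)
loss.backward()
```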
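For the sample-selection item: a minimal version of greedy selection by approximate loss is to score a large candidate batch with a cheap, no-grad forward pass and backpropagate only through the top-k highest-loss samples. Whether this matches the authors' approximation is not stated in the snippet; the model and sizes here are invented.

```python
"""Sketch: greedy large-loss sample selection with a cheap proxy loss."""
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(32, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def train_step(x, y, k=16):
    # Cheap pass: score all candidates without building an autograd graph.
    with torch.no_grad():
        approx = F.cross_entropy(model(x), y, reduction="none")
    idx = approx.topk(k).indices  # greedily keep the k largest losses
    # Expensive pass: exact loss and backprop only on the selected subset.
    loss = F.cross_entropy(model(x[idx]), y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.randn(128, 32), torch.randint(0, 4, (128,))
print(train_step(x, y))
```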
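For the gibberish-prompt item: Greedy Coordinate Gradient (GCG) ranks candidate token substitutions by the gradient of the loss with respect to one-hot token coordinates, then greedily keeps the best evaluated swap. The toy below replaces a real LLM loss with a bag-of-embeddings objective so it runs anywhere; every name and size is illustrative.

```python
"""Toy sketch of one Greedy Coordinate Gradient (GCG) step."""
import torch
import torch.nn.functional as F

V, L, D = 50, 8, 16                 # vocab size, prompt length, embed dim
emb = torch.randn(V, D)             # stand-in for the LM's token embeddings
target = torch.randn(D)             # stand-in for a target-response direction

def loss_fn(one_hot):
    # Lower loss = prompt embedding points more toward the target.
    return -(one_hot @ emb).mean(0) @ target

tokens = torch.randint(0, V, (L,))
one_hot = F.one_hot(tokens, V).float().requires_grad_(True)
loss_fn(one_hot).backward()

# The gradient w.r.t. one-hot coordinates ranks substitutions per position.
cand = (-one_hot.grad).topk(4, dim=1).indices   # top-4 candidate swaps
best_tokens, best = tokens.clone(), float("inf")
with torch.no_grad():
    for pos in range(L):
        for tok in cand[pos]:
            trial = tokens.clone()
            trial[pos] = tok
            val = loss_fn(F.one_hot(trial, V).float()).item()
            if val < best:          # greedily keep the best single swap
                best, best_tokens = val, trial
tokens = best_tokens
print("loss after one GCG step:", best)
```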
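For the low-precision item: one standard way to "compensate error at critical locations" is error feedback, carrying the rounding residual of each low-precision weight update into the next step, in the spirit of Kahan summation. The scheme below is generic and not necessarily what the paper proposes.

```python
"""Sketch: error-feedback updates for fp16 weights with an fp32 residual."""
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float16)   # low-precision weights
residual = np.zeros(1000, dtype=np.float32)        # carried rounding error

for step in range(100):
    grad = rng.standard_normal(1000).astype(np.float32) * 1e-4
    # Apply the update in fp32, folding in the error left over last step.
    exact = w.astype(np.float32) - 0.1 * grad + residual
    w_new = exact.astype(np.float16)               # round back to fp16
    residual = exact - w_new.astype(np.float32)    # remember what rounding lost
    w = w_new
```

Without the residual, tiny updates (here ~1e-5) can round to zero in fp16 every step and the weights never move; with it, the lost fractions accumulate until they are large enough to register.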
Academia
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.