Customer-obsessed science


Research areas
- August 8, 2025: A new philosophy for developing LLM architectures reduces energy requirements, speeds up runtime, and preserves pretrained-model performance.
Featured news
- 2024: The rapid development of language models (LMs) brings unprecedented accessibility and usage for both models and users. On the one hand, powerful LMs achieve state-of-the-art performance on numerous downstream NLP tasks. On the other hand, growing attention is paid to unrestricted model access, which may bring privacy risks such as data leakage. To address these issues, many recent works propose…
- MICRO 2024: In this paper, we introduce Surf-Deformer, a code deformation framework that seamlessly integrates adaptive defect mitigation functionality into the current surface code workflow. It crafts several basic deformation instructions based on fundamental gauge transformations, which can be combined to explore a larger design space than previous methods. This enables more optimized deformation processes tailored…
- 2024: Abstraction ability is crucial to human intelligence and can also benefit various tasks in NLP. Existing work shows that LLMs are deficient in abstraction ability, and how to improve it remains unexplored. In this work, we design the framework AbsInstruct to enhance LLMs' abstraction ability through instruction tuning. The framework builds instructions with in-depth explanations to assist LLMs in…
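The AbsInstruct summary describes building instruction-tuning examples that pair an abstraction task with an in-depth explanation. A minimal sketch of what such a record might look like — the field names, prompt wording, and helper function are illustrative assumptions, not the paper's actual format:

```python
# Hypothetical sketch of an abstraction instruction-tuning record in the
# spirit of the AbsInstruct description; the schema below is an
# illustrative assumption, not the paper's actual data format.

def build_abstraction_instruction(instance, concept, explanation):
    """Pack one instance -> abstract-concept pair, plus an in-depth
    explanation, into a supervised instruction-tuning example."""
    return {
        "instruction": f"What is a more abstract concept for '{instance}'?",
        "output": concept,
        # In-depth rationale intended to help the LLM learn the mapping.
        "explanation": explanation,
    }

example = build_abstraction_instruction(
    instance="golden retriever",
    concept="dog",
    explanation="A golden retriever is a breed of dog, so 'dog' is a "
                "more abstract category that covers it.",
)
```

In this sketch, the explanation rides along with each supervised pair so the fine-tuning loss can expose the model to the reasoning, not just the label.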
- TQC 2024: Given a linear system of equations Ax = b, quantum linear system solvers (QLSSs) approximately prepare a quantum state |x⟩ for which the amplitudes are proportional to the solution vector x. Asymptotically optimal QLSSs have query complexity O(κ log(1/ε)), where κ is the condition number of A and ε is the approximation error. However, runtime guarantees for existing optimal and near-optimal QLSSs do not…
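The quantities in the QLSS abstract — the solution x of Ax = b, the condition number κ that governs the O(κ log(1/ε)) query complexity, and the amplitudes of the ideal state |x⟩ — can be illustrated classically. A minimal sketch for a 2×2 system (this is not the paper's quantum algorithm, just the classical objects it refers to):

```python
import math

# Classical illustration of the objects in the QLSS abstract:
# the solution x of Ax = b, the 2-norm condition number kappa,
# and the normalized amplitudes of the ideal state |x>.

def solve_2x2(A, b):
    """Solve Ax = b for a 2x2 matrix by Cramer's rule."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return [(b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det]

def cond_2x2(A):
    """2-norm condition number kappa = sigma_max / sigma_min,
    from the singular values of A (eigenvalues of A^T A)."""
    (a11, a12), (a21, a22) = A
    s = a11**2 + a12**2 + a21**2 + a22**2     # trace(A^T A)
    d = (a11 * a22 - a12 * a21) ** 2          # det(A^T A)
    disc = math.sqrt(max(s * s - 4 * d, 0.0))
    return math.sqrt((s + disc) / (s - disc))

A = [[2.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

x = solve_2x2(A, b)                       # classical solution: [0.2, 0.6]
kappa = cond_2x2(A)                       # governs O(kappa log(1/eps)) cost
norm = math.sqrt(sum(v * v for v in x))
amplitudes = [v / norm for v in x]        # amplitudes of the ideal |x>
```

A QLSS prepares (an approximation to) the normalized vector `amplitudes` as a quantum state; a well-conditioned A (small κ) makes that preparation cheap, which is why κ appears in the complexity bound.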
- ICPR 2024: This paper improves upon existing data pruning methods for image classification by introducing a novel pruning metric and a pruning procedure based on importance sampling. The proposed pruning metric explicitly accounts for data separability, data integrity, and model uncertainty, while the sampling procedure is adaptive to the pruning ratio and considers both intra-class and inter-class separation to further…
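The general mechanism the ICPR abstract builds on — keeping a subset of the data sampled with probability proportional to a per-example importance score — can be sketched as follows. The scores and the `prune_by_importance` helper are hypothetical placeholders; the paper's actual pruning metric (combining separability, integrity, and uncertainty) is not reproduced here:

```python
import random

# Hedged sketch of importance-sampling-based data pruning. The scores
# below are hypothetical placeholders; the paper's actual pruning metric
# is not reproduced here.

def prune_by_importance(scores, keep_ratio, seed=0):
    """Keep a keep_ratio fraction of examples, sampled without
    replacement with probability proportional to their scores
    (exponential-key weighted sampling: smallest keys win)."""
    rng = random.Random(seed)
    n_keep = max(1, int(len(scores) * keep_ratio))
    keyed = [(rng.expovariate(1.0) / s, i) for i, s in enumerate(scores)]
    keyed.sort()
    return sorted(i for _, i in keyed[:n_keep])

scores = [0.9, 0.1, 0.8, 0.05, 0.7, 0.6]  # hypothetical per-example scores
kept = prune_by_importance(scores, keep_ratio=0.5)
```

Sampling (rather than deterministically keeping the top-scoring examples) preserves some low-score diversity, which matters at aggressive pruning ratios; that is one reason pruning procedures of this kind are made adaptive to the ratio.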
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.