Customer-obsessed science


Research areas
- August 8, 2025: A new philosophy for developing LLM architectures reduces energy requirements, speeds up runtime, and preserves pretrained-model performance.
Featured news
- In the dynamic marketplace, vendors continuously seek innovative ideas for new products and ways to improve existing ones. These ideas can be uncovered by analyzing text data, such as product descriptions and customer reviews. However, the ever-increasing volume of text data poses a challenge in extracting meaningful insights. Therefore, this study addresses the challenge of extracting actionable insights
- Large language models (LLMs) have achieved remarkable progress in recent years. These models have the capability to answer complex questions about medical disorders, their pathophysiology, etiology, and corresponding interventions. However, when providing information about medical products and treatments, it is important to ensure that models respond reliably with factually correct information that adheres
- arXiv, 2024: The peptide-protein docking problem is an important problem in structural biology that facilitates rational and efficient drug design. In this work, we explore modeling and solving this problem with the quantum-amenable quadratic unconstrained binary optimization (QUBO) formalism. Our work extends recent efforts by incorporating the objectives and constraints associated with peptide cyclization and peptide-protein
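The QUBO formalism mentioned above encodes an objective as a quadratic function of binary variables, minimized over all bit assignments. A minimal sketch, using an illustrative toy objective rather than anything from the paper (a brute-force solver is only feasible for tiny instances; real docking problems would use a quantum or heuristic solver):

```python
import itertools

def solve_qubo_bruteforce(Q):
    """Exhaustively minimize x^T Q x over binary vectors x.

    Q is a dict mapping (i, j) index pairs to coefficients; diagonal
    entries (i, i) encode linear terms, since x_i^2 == x_i for bits.
    """
    n = max(max(i, j) for i, j in Q) + 1
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        energy = sum(c * bits[i] * bits[j] for (i, j), c in Q.items())
        if energy < best_e:
            best_x, best_e = bits, energy
    return best_x, best_e

# Hypothetical objective: a penalty term 2*x0*x1 discourages setting
# both bits, mimicking how hard constraints are folded into a QUBO.
Q = {(0, 0): -1.2, (1, 1): -1.0, (0, 1): 2.0}
x, e = solve_qubo_bruteforce(Q)
# Minimum is x = (1, 0) with energy -1.2
```

Constraints such as cyclization would enter the same way: as quadratic penalty terms added to Q so that infeasible assignments carry a higher energy than any feasible one.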
- 2024: Natural language understanding over tabular data is crucial for data discovery tasks such as joinable and unionable table search. State-of-the-art approaches adopt large language models (LLMs) trained over massive text corpora to assess table semantic relatedness, typically following a pretrain-and-finetune paradigm with labeled tabular data. Recent studies incorporate auxiliary tasks such as entity
- 2024: Chain-of-thought (CoT) prompting is a popular in-context learning (ICL) approach for large language models (LLMs), especially when tackling complex reasoning tasks. Traditional ICL approaches construct prompts using examples that contain questions similar to the input question. However, CoT prompting, which includes crucial intermediate reasoning steps (rationales) within its examples, necessitates selecting
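The distinction the abstract draws, in-context examples that carry intermediate rationales versus plain question-answer pairs, can be sketched as simple prompt assembly. The example content and field names below are illustrative assumptions, not taken from the paper:

```python
def build_cot_prompt(examples, question):
    """Assemble a few-shot chain-of-thought prompt.

    Each in-context example includes its rationale (the intermediate
    reasoning steps) before the final answer, unlike plain ICL, which
    would show only question/answer pairs.
    """
    parts = []
    for ex in examples:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['rationale']} The answer is {ex['answer']}."
        )
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# Hypothetical demonstration example with an explicit rationale.
examples = [{
    "question": "A shelf holds 3 boxes of 4 apples each. How many apples?",
    "rationale": "Each box has 4 apples and there are 3 boxes, so 3 * 4 = 12.",
    "answer": "12",
}]
prompt = build_cot_prompt(
    examples,
    "A crate holds 5 bags of 6 oranges each. How many oranges?",
)
```

Because the rationale text dominates the prompt, selecting which examples (and which rationales) to include matters more for CoT than for traditional ICL, which is the selection problem the work addresses.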
Academia
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.