Customer-obsessed science
- January 13, 2026 | 7 min read: Leveraging existing environment simulators and reward functions based on verifiable ground truth boosts task success rate, even with small models and small training datasets.
Featured news
- LREC-COLING 2024 Workshop on e-Commerce and NLP (2024): In the dynamic marketplace, vendors continuously seek innovative ideas for new products and ways to improve existing ones. These ideas can be uncovered by analyzing text data, such as product descriptions and customer reviews. However, the ever-increasing volume of text data poses a challenge in extracting meaningful insights. Therefore, this study addresses the challenge of extracting actionable insights…
- KDD 2024 Workshop on GenAI Evaluation (2024): Large language models (LLMs) have achieved remarkable progress in recent years. These models have the capability to answer complex questions about medical disorders, their pathophysiology, etiology, and corresponding interventions. However, when providing information about medical products and treatments, it is important to ensure that models respond reliably with factually correct information that adheres…
- arXiv (2024): The peptide-protein docking problem is an important problem in structural biology that facilitates rational and efficient drug design. In this work, we explore modeling and solving this problem with the quantum-amenable quadratic unconstrained binary optimization (QUBO) formalism. Our work extends recent efforts by incorporating the objectives and constraints associated with peptide cyclization and peptide-protein…
- 2024: Natural language understanding over tabular data is crucial for data discovery tasks such as joinable and unionable table search. State-of-the-art approaches adopt large language models (LLMs) trained over massive text corpora to assess table semantic relatedness, typically following a pretrain-and-finetune paradigm with labeled tabular data. Recent studies incorporate auxiliary tasks such as entity…
- 2024: Chain-of-thought (CoT) prompting is a popular in-context learning (ICL) approach for large language models (LLMs), especially when tackling complex reasoning tasks. Traditional ICL approaches construct prompts using examples that contain questions similar to the input question. However, CoT prompting, which includes crucial intermediate reasoning steps (rationales) within its examples, necessitates selecting…
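The QUBO formalism mentioned in the peptide-protein docking abstract above amounts to minimizing x^T Q x over binary vectors x. As a minimal sketch only: the matrix Q and the brute-force solver below are illustrative assumptions for a toy instance, not the paper's actual docking model, which encodes pose variables and cyclization constraints as penalty terms in Q.

```python
import itertools

import numpy as np

# Toy symmetric QUBO matrix, chosen purely for illustration.
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 2.0, -1.0,  2.0],
    [ 0.0,  2.0, -1.0],
])

def solve_qubo_bruteforce(Q):
    """Exhaustively evaluate all 2^n binary assignments (tiny n only).

    Returns the minimizing binary vector and its energy x^T Q x.
    """
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = float(x @ Q @ x)  # QUBO energy of this assignment
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

x, e = solve_qubo_bruteforce(Q)  # x = [1, 0, 1], e = -2.0
```

Real instances are far too large for brute force; the "quantum-amenable" part of the formalism is that the same x^T Q x objective maps directly onto annealers and QAOA-style solvers.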
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.