Customer-obsessed science


Research areas
- August 8, 2025: A new philosophy for developing LLM architectures reduces energy requirements, speeds up runtime, and preserves pretrained-model performance.
Featured news
- A named entity recognition (NER) model aims to recognize the named entities in the keywords. However, when the entities are extremely knowledge-intensive, a traditional NER model cannot encode all of the necessary knowledge in its parameters and thus fails to recognize those entities with high accuracy. In this paper, we propose a retrieval-augmented NER model (RA-NER) to address this issue. RA-NER retrieves the most relevant
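The abstract above describes the core idea: retrieve external knowledge so the tagger can condition on facts it cannot store in its parameters. A minimal sketch of that retrieve-then-augment step, assuming a toy bag-of-words retriever, a hypothetical knowledge base, and `[SEP]`-style concatenation (illustrative assumptions, not the paper's actual RA-NER architecture):

```python
# Sketch of retrieve-then-augment for knowledge-intensive NER.
# The retriever, knowledge base, and [SEP] concatenation are
# illustrative assumptions, not the RA-NER paper's components.
import math
from collections import Counter


def bow(text: str) -> Counter:
    """Bag-of-words term counts for lowercased, whitespace-split text."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return num / den if den else 0.0


def retrieve(query: str, kb: list[str], k: int = 1) -> list[str]:
    """Return the k knowledge-base entries most similar to the query."""
    q = bow(query)
    return sorted(kb, key=lambda doc: cosine(q, bow(doc)), reverse=True)[:k]


def augment(query: str, kb: list[str], k: int = 1) -> str:
    """Concatenate retrieved knowledge so a downstream NER encoder can
    condition on facts it cannot encode in its parameters."""
    return query + " [SEP] " + " ".join(retrieve(query, kb, k))


# Hypothetical knowledge base for illustration only.
kb = [
    "levofloxacin is a fluoroquinolone antibiotic",
    "rainier is a stratovolcano in washington state",
]
augmented = augment("the patient was prescribed levofloxacin", kb)
print(augmented)
```

In a full system the augmented string would be fed to the NER encoder; a learned dense retriever would replace the bag-of-words similarity, but the control flow is the same.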
- 2024: This paper introduces CourtQG, a novel problem of automated question generation for courtroom examinations. While question generation has been studied in domains such as educational testing, product description, and situation report generation, CourtQG poses several unique challenges owing to its non-cooperative and agenda-driven nature. Specifically, not only do the generated questions need to be relevant
- AAAI 2024 Workshop on Neuro-Symbolic Learning and Reasoning in the Era of Large Language Models (NucLeaR), 2024: Recent large language models (LLMs) have enabled tremendous progress in natural-language understanding. However, they are prone to generating confident but nonsensical reasoning chains, a significant obstacle to establishing trust with users. In this work, we aim to incorporate rich human feedback on such incorrect model-generated reasoning chains for multi-hop reasoning to improve performance on these tasks
- Field Robotics, 2024: For autonomous ground vehicles (AGVs) deployed in suburban neighborhoods and other human-centric environments, the problem of localization remains a fundamental challenge. There are well-established methods for localization with GPS, lidar, and cameras, but even in ideal conditions these have limitations. GPS is not always available and is often not accurate enough on its own, and visual methods have difficulty
- Nature Medicine, 2024: Errors in pharmacy medication directions, such as incorrect instructions for dosage or frequency, can increase patient safety risk substantially by raising the chances of adverse drug events. This study explores how integrating domain knowledge with large language models (LLMs)—capable of sophisticated text interpretation and generation—can reduce these errors. We introduce MEDIC (medication direction copilot
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.