Customer-obsessed science
February 2, 2026 · 10 min read
Every NFL game generates millions of tracking data points from 22 RFID-equipped players. Seventy-five machine learning models running on AWS process that data in under a second, transforming football into a sport where every movement is measured, modeled, and instantly analyzed.
Featured news
- 2024: Entity Recognition (ER) is a common natural language processing task encountered in a number of real-world applications. For common domains and named entities such as places and organisations, there exists sufficient high-quality annotated data, and foundational models such as T5 and GPT-3.5 also provide highly accurate predictions. However, for niche domains such as e-commerce and medicine with specialized
- 2024: Modern search systems offer multiple ways for expressing information needs, including image, voice, and text. Consequently, an increasing number of users seamlessly transition between these modalities to convey their intents. This emerging trend presents new opportunities for utilizing queries in different modalities to help users complete their search journeys efficiently. In this proposal, we introduce
- 2024: Robust fine-tuning aims to adapt a vision-language model to downstream tasks while preserving its zero-shot capabilities on unseen data. Recent studies have introduced fine-tuning strategies to improve in-distribution (ID) performance on the downstream tasks while minimizing deterioration in out-of-distribution (OOD) performance on unseen data. This balance is achieved either by aligning the fine-tuned
- ACL 2024 Workshop on Natural Language Reasoning and Structured Explanations, 2024: Reasoning encompasses two typical types: deductive reasoning and inductive reasoning. Despite extensive research into the reasoning capabilities of Large Language Models (LLMs), most studies have failed to rigorously differentiate between inductive and deductive reasoning, leading to a blending of the two. This raises an essential question: In LLM reasoning, which poses a greater challenge - deductive or
- 2024: Large language models (LLMs), while exhibiting exceptional performance, suffer from hallucinations, especially on knowledge-intensive tasks. Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora to alleviate the issue. However, in many domains, texts are interconnected (e.g., academic papers in a bibliographic graph are linked by citations and co-authorships
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.