Customer-obsessed science


Research areas
-
June 12, 2025: A novel architecture that fuses learnable and conditional queries improves a segmentation model’s ability to transfer across tasks.
Featured news
-
2025: This paper introduces MO-LightGBM, an open-source library built on LightGBM and specifically designed to offer an integrated, versatile, and easily adaptable framework for multi-objective learning to rank (MOLTR). MO-LightGBM supports diverse multi-objective optimization (MOO) settings and incorporates 12 state-of-the-art optimization strategies; its modular architecture enhances usability and flexibility (a minimal ranking sketch follows this list).
-
2025: Language localization is the adaptation of written content to different linguistic and cultural contexts. The ability to localize written content is crucial for global businesses to provide a consistent and reliable customer experience across diverse markets. Traditional methods have approached localization as an application of machine translation (MT), but localization requires more than linguistic conversion.
-
AAAI 2025 Workshop on Preventing and Detecting LLM Misinformation, 2025: Unlearning aims to remove copyrighted, sensitive, or private content from large language models (LLMs) without full retraining. In this work, we develop a multi-task unlearning benchmark (LUME) featuring three tasks: (1) unlearn synthetically generated creative short novels, (2) unlearn synthetic biographies with sensitive information, and (3) unlearn a collection of public biographies.
-
NAACL 2025 Workshop on TrustNLP, 2025: Large language models (LLMs) have demonstrated excellent capabilities on question-answering (QA) tasks, yet their ability to identify and address ambiguous questions remains underdeveloped. Ambiguities in user queries often lead to inaccurate or misleading answers, undermining user trust in these systems. Despite prior attempts using prompt-based methods, performance has largely been on par with random chance.
-
2025: We present MegaBeam-Mistral-7B, a language model that supports a 512K-token context length. Our work addresses practical limitations in long-context training, supporting real-world tasks such as compliance monitoring and verification. Evaluated on three long-context benchmarks, our 7B-parameter model demonstrates superior in-context learning performance on HELMET and robust retrieval and tracing capability.
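
The MO-LightGBM entry above builds on LightGBM's standard learning-to-rank support. The sketch below is not MO-LightGBM's API (which the teaser does not show); it is a minimal plain-LightGBM LambdaRank example in which two graded labels are folded into one target via a hypothetical linear scalarization. The `freshness` objective, the 0.7/0.3 weights, and all data here are illustrative assumptions.

```python
# Minimal LightGBM learning-to-rank sketch; MO-LightGBM layers multi-objective
# strategies on top of a base like this. The second objective and its weighting
# below are hypothetical, not MO-LightGBM's actual method.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
n_queries, docs_per_query, n_features = 50, 10, 8
X = rng.normal(size=(n_queries * docs_per_query, n_features))

relevance = rng.integers(0, 3, size=len(X))  # objective 1: relevance grades 0-2
freshness = rng.integers(0, 3, size=len(X))  # objective 2 (assumed): freshness grades 0-2

# Naive linear scalarization into a single integer label for LambdaRank.
labels = np.rint(0.7 * relevance + 0.3 * freshness).astype(int)
group = [docs_per_query] * n_queries  # number of documents per query

ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=100)
ranker.fit(X, labels, group=group)
scores = ranker.predict(X)  # per-document ranking scores
```

A genuine MOLTR framework replaces the fixed scalarization with adaptive strategies (the paper cites 12) that balance objectives during training rather than before it.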
Academia
Whether you’re a faculty member or a student, there are a number of ways you can engage with Amazon.