Customer-obsessed science


August 8, 2025: A new philosophy for developing LLM architectures reduces energy requirements, speeds up runtime, and preserves pretrained-model performance.
Featured news
- 2025: Text style transfer in enterprise environments presents unique challenges that extend beyond traditional style transfer approaches, particularly when dealing with complex technical documentation and strict organizational guidelines. This paper introduces Onoma, a novel enterprise-scale style transfer system that addresses the fundamental challenges of maintaining consistent brand voice while preserving ...
- SAC 2025: AES-GCM has seen wide adoption over the last 20 years to protect data in various use cases because of its strong performance. It has also posed challenges to modern applications due to its nonce, block size, and lack of key commitment. Nonce-derived schemes address these challenges by deriving a different key from random values and using GCM with the derived key. In this work, we explore efficient ... (a generic nonce-derived-key sketch follows this list).
- IEEE 2025 Workshop on Machine Learning for Signal Processing (MLSP): In this paper we investigate cross-lingual Text-To-Speech (TTS) synthesis through the lens of adapters, in the context of lightweight TTS systems. In particular, we compare the tasks of unseen-speaker and unseen-language adaptation, with the goal of synthesising a target voice in a target language in which that voice has no recordings. Results from objective evaluations demonstrate the effectiveness ... (a minimal adapter sketch follows this list).
- ISACE 2025: Generative AI has unlocked new possibilities in content discovery and management. Through collaboration with the National Football League (NFL), we demonstrate how a generative-AI-based workflow allows media researchers and analysts to query relevant historical plays using natural language rather than traditional filter-and-click interfaces. The agentic workflow takes a user query in natural language ... (a schematic query-to-filter sketch follows this list).
- ICML 2025 Workshop on Multi-Agent Systems in the Era of Foundation Models: Large language models (LLMs) have demonstrated remarkable performance across a wide range of tasks in recent years. While prior work has explored leveraging LLMs to generate synthetic data for self-improvement, repeated iterations often suffer from diminishing returns due to reliance on homogeneous reasoning patterns and limited exploration of alternative perspectives. In this paper, we introduce a ... (a schematic of the baseline self-improvement loop follows this list).
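For the nonce-derived GCM item above, here is a minimal sketch of the generic pattern the abstract describes: derive a fresh AES-GCM key from a long-term key and a per-message random value, then encrypt under the derived key. The choice of HKDF-SHA256 as the deriving function, the 24-byte random value, and the constant GCM nonce are illustrative assumptions, not the paper's scheme.

```python
# Generic nonce-derived AES-GCM: each message gets its own derived key,
# so the 96-bit GCM nonce no longer has to carry the uniqueness burden.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def _derive(master_key: bytes, salt: bytes) -> bytes:
    # HKDF-SHA256 is an assumption for illustration; real nonce-derived
    # schemes typically use purpose-built, cheaper key-derivation functions.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=salt,
                info=b"derived-gcm-key").derive(master_key)

def encrypt(master_key: bytes, plaintext: bytes, aad: bytes) -> tuple[bytes, bytes]:
    # 24 random bytes: collisions stay negligible far beyond the birthday
    # bound of GCM's usual 96-bit random nonce.
    salt = os.urandom(24)
    # The nonce can be a constant because every message uses a fresh key.
    ct = AESGCM(_derive(master_key, salt)).encrypt(b"\x00" * 12, plaintext, aad)
    return salt, ct

def decrypt(master_key: bytes, salt: bytes, ct: bytes, aad: bytes) -> bytes:
    return AESGCM(_derive(master_key, salt)).decrypt(b"\x00" * 12, ct, aad)
```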
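The cross-lingual TTS item describes adapter-based adaptation of a lightweight TTS model. The PyTorch sketch below shows the generic residual bottleneck adapter this family of methods uses; the module sizes and placement are assumptions for illustration, not the paper's architecture.

```python
# Bottleneck adapter: a small residual module inserted into a frozen
# backbone, so only the adapter weights are trained for a new speaker
# or language.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()
        nn.init.zeros_(self.up.weight)  # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual connection

# Usage: freeze a backbone layer and train only the adapter on top of it.
backbone_layer = nn.Linear(256, 256)      # stand-in for a TTS block
for p in backbone_layer.parameters():
    p.requires_grad = False               # backbone stays frozen
adapter = Adapter(dim=256)
x = torch.randn(8, 100, 256)              # (batch, frames, features)
y = adapter(backbone_layer(x))            # adapted representation
```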
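The NFL item describes an agentic workflow that replaces filter-and-click search with natural-language queries. A common way to realize this is to have an LLM translate the query into structured filters that are then run against the play archive; the sketch below shows that shape with a hypothetical filter schema and a stub in place of the LLM call.

```python
# Natural-language query -> structured filters -> archive search.
# The PlayFilter fields and the query_to_filter stub are hypothetical;
# a real agent would prompt an LLM to emit this schema as JSON.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlayFilter:
    team: Optional[str] = None
    play_type: Optional[str] = None
    min_yards: Optional[int] = None
    season: Optional[int] = None

def query_to_filter(query: str) -> PlayFilter:
    # Stand-in for an LLM structured-output call.
    if "touchdown" in query.lower():
        return PlayFilter(play_type="touchdown")
    return PlayFilter()

def search(plays: list[dict], f: PlayFilter) -> list[dict]:
    def keep(p: dict) -> bool:
        return ((f.team is None or p["team"] == f.team) and
                (f.play_type is None or p["type"] == f.play_type) and
                (f.min_yards is None or p["yards"] >= f.min_yards) and
                (f.season is None or p["season"] == f.season))
    return [p for p in plays if keep(p)]

plays = [{"team": "KC", "type": "touchdown", "yards": 45, "season": 2023}]
print(search(plays, query_to_filter("show long touchdown plays")))
```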
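The ICML workshop item targets diminishing returns in iterative self-improvement. As context, here is a schematic of the baseline loop the abstract critiques: generate synthetic solutions, keep the ones a verifier accepts, fine-tune, repeat. All functions are stubs, and the paper's actual remedy is not shown, since the abstract is truncated.

```python
# Baseline iterative self-improvement loop (schematic; all stubs).
def generate(model, prompts, samples_per_prompt=4):
    return [(p, model(p)) for p in prompts for _ in range(samples_per_prompt)]

def keep_correct(pairs, verifier):
    return [(p, y) for p, y in pairs if verifier(p, y)]

def fine_tune(model, data):
    return model  # stand-in for an actual supervised fine-tuning step

def self_improve(model, prompts, verifier, rounds=3):
    for _ in range(rounds):
        synthetic = keep_correct(generate(model, prompts), verifier)
        # Diminishing returns: later rounds tend to re-sample the same
        # reasoning patterns, so the kept set adds little new signal.
        model = fine_tune(model, synthetic)
    return model

# Toy usage with trivial stand-ins for the model and verifier:
model = self_improve(lambda p: p + " -> answer",
                     prompts=["q1", "q2"],
                     verifier=lambda p, y: y.endswith("answer"))
```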
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.