Customer-obsessed science


Research areas
September 2, 2025: Audible's ML algorithms connect users directly to relevant titles, reducing the number of purchase steps for millions of daily users.
Featured news
ESREL SRA-E 2025: The rapid rise of generative AI (GenAI) has prompted the sustainability community to explore its potential applications, such as climate impact modeling and renewable energy optimization. However, deploying these GenAI-powered solutions in enterprise environments raises risk concerns. In particular, chatbots and similar GenAI applications face risks of misinformation and disinformation stemming from knowledge …
Effective attribution of causes to outcomes is crucial for optimizing complex supply chain operations. Traditional methods, often relying on waterfall logic or correlational analysis, frequently fall short in identifying the true drivers of performance issues. This paper proposes a comprehensive framework leveraging data-driven causal discovery to construct and validate Structural Causal Models (SCMs).
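The core idea of intervention-based attribution with a Structural Causal Model can be illustrated with a toy sketch. This is not the paper's framework; the variables, coefficients, and causal graph below are hypothetical stand-ins for a supply-chain setting.

```python
import random

# Hypothetical linear SCM (illustrative only, not from the paper):
#   supplier_delay -> inventory_gap -> late_shipments
def mean_late_shipments(n=10_000, do_supplier_delay=None, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # do(supplier_delay = c) replaces the variable's own mechanism
        if do_supplier_delay is not None:
            supplier_delay = do_supplier_delay
        else:
            supplier_delay = rng.gauss(2.0, 1.0)
        inventory_gap = 0.8 * supplier_delay + rng.gauss(0.0, 0.5)
        late_shipments = 1.5 * inventory_gap + rng.gauss(0.0, 0.5)
        total += late_shipments
    return total / n

# Attribution via intervention: how much of the outcome is driven by
# supplier delay? Compare the observational baseline with do(delay = 0).
baseline = mean_late_shipments()
intervened = mean_late_shipments(do_supplier_delay=0.0)
effect = baseline - intervened  # ~0.8 * 1.5 * E[delay] = 2.4
```

Unlike a correlational analysis, the intervention severs incoming influences on `supplier_delay`, so the estimated effect reflects the causal pathway rather than confounded association.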
The Web Conf 2025 Workshop on Resource-Efficient Learning for the Web: Web search engines process billions of queries daily, making the balance between computational efficiency and ranking quality crucial. While neural ranking models have shown impressive performance, their computational costs, particularly in feature extraction, pose significant challenges for large-scale deployment. This paper investigates how different configurations of feature selection and document filtering …
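The efficiency/quality trade-off described above is often handled with a cascade: a cheap first-pass filter prunes documents before the expensive ranker runs. A minimal sketch, with illustrative scoring functions that are not the paper's models:

```python
# Stand-in for an inexpensive first-pass signal (e.g. term overlap).
def cheap_score(doc: str, query: str) -> int:
    return len(set(doc.split()) & set(query.split()))

# Stand-in for a costly neural ranker with heavy feature extraction;
# here just a second, more "expensive" scoring pass.
def expensive_rank(docs: list[str], query: str) -> list[str]:
    return sorted(docs, key=lambda d: cheap_score(d, query) * len(d), reverse=True)

def cascade(docs: list[str], query: str, keep: int = 2) -> list[str]:
    # Document filtering: keep only the top-`keep` by the cheap score,
    # so the expensive ranker sees a fraction of the corpus.
    shortlist = sorted(docs, key=lambda d: cheap_score(d, query), reverse=True)[:keep]
    return expensive_rank(shortlist, query)

docs = ["web search ranking", "cats and dogs", "efficient web ranking models"]
results = cascade(docs, "web ranking")
```

The knobs the paper studies (how many features to extract, how many documents to filter) correspond here to the scoring function's cost and the `keep` cutoff.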
NeuS 2025: The “state” of State Space Models (SSMs) represents their memory, which fades exponentially over an unbounded span. By contrast, attention-based models have “eidetic” (i.e., verbatim, or photographic) memory over a finite span (the context size). Hybrid architectures combine State Space layers with attention, but still cannot recall the distant past and can access only the most recent tokens eidetically. Unlike …
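The fading-versus-eidetic contrast can be seen in a one-line scalar recurrence. This is an illustrative sketch, not the paper's architecture: a scalar state update s = a·s + x with |a| < 1, versus a sliding window that stands in for finite-span attention.

```python
# Scalar SSM recurrence: the contribution of token x_k to the final state
# is a**(t - k) * x_k, so memory of the distant past decays exponentially.
def ssm_state(xs: list[float], a: float = 0.5) -> float:
    s = 0.0
    for x in xs:
        s = a * s + x
    return s

early = ssm_state([1.0] + [0.0] * 50)    # old token: ~0.5**50, effectively gone
recent = ssm_state([0.0] * 50 + [1.0])   # fresh token: fully present

# Finite-span "attention": verbatim recall inside the window, none outside.
def window_recall(xs: list[float], k: int, window: int = 4):
    return xs[k] if k >= len(xs) - window else None
```

The SSM state never forgets abruptly but never remembers exactly; the windowed model remembers exactly but only within its span, which is the gap hybrid architectures try to close.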
It is well known that large language models (LLMs) have good zero-shot and few-shot performance, which makes them a promising candidate for inference when no or few training samples are available. However, when there is abundant task data, small custom-trained models match or outperform pre-trained LLMs, even after accounting for in-context examples. Further, smaller models …
Conferences
Academia
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.