Customer-obsessed science


- August 8, 2025: A new philosophy for developing LLM architectures reduces energy requirements, speeds up runtime, and preserves pretrained-model performance.
Featured news
- 2024: Scaling up model and data size has been quite successful for the evolution of LLMs. However, the scaling law for diffusion-based text-to-image (T2I) models is not fully explored, and it is unclear how to efficiently scale the model for better performance at reduced cost. Differing training settings and expensive training costs make a fair model comparison extremely difficult. In this work, we empirically … (a toy power-law fit is sketched after this list).
- 2024: While recommender systems (RS) have advanced significantly through deep learning, current RS approaches usually train and fine-tune models on task-specific datasets, limiting their generalizability to new recommendation tasks and their ability to leverage external knowledge, due to model-scale and data-size constraints. Thus, we designed an LLM-powered autonomous recommender agent, RecMind, which is …
- Product filters are commonly used by e-commerce websites to refine search results based on attribute values such as price, brand, and size. However, existing filter-recommendation approaches typically generate filters independently of the user’s search query or browsing history. This can lead to suboptimal recommendations that do not account for what the user has already viewed or selected in their current …
- AISTATS 2024: The synthetic control method (SCM) has become a popular tool for estimating causal effects in policy evaluation, where a single treated unit is observed. However, SCM faces challenges in accurately predicting post-intervention potential outcomes had, contrary to fact, the treatment been withheld, when the pre-intervention period is short or the post-intervention period is long. To address these issues, we … (the standard SCM estimator is sketched after this list).
- 2024: Despite being widely spoken, dialectal variants of languages are frequently considered low-resource because of a lack of writing standards and orthographic inconsistencies. As a result, training natural language understanding (NLU) systems relies primarily on standard-language resources, leading to biased and inequitable NLU technology that underserves dialectal speakers. In this paper, we propose to address …
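For context on the T2I scaling-law item above: a "scaling law" here means an empirical fit relating model size (or compute) to loss. The teaser does not give the paper's functional form or data, so the sketch below is only a minimal illustration of the idea, fitting the common saturating power law L(N) = a·N^(−b) + c; the sample numbers and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (parameter count, validation loss) observations; illustrative only.
sizes = np.array([1e7, 1e8, 1e9, 1e10])       # model sizes N
losses = np.array([3.10, 2.55, 2.12, 1.80])   # measured losses L(N)

def power_law(n, a, b, c):
    # Saturating power law L(N) = a * N^(-b) + c, a common scaling-law form.
    return a * n ** (-b) + c

# Rough initial guess p0; curve_fit refines it by least squares.
params, _ = curve_fit(power_law, sizes, losses, p0=(10.0, 0.1, 1.0), maxfev=10000)
a, b, c = params
print(f"fitted: L(N) ~ {a:.3g} * N^(-{b:.3g}) + {c:.3g}")

# Extrapolating to a larger model illustrates how such fits are used to
# predict performance without paying the full training cost.
print(f"predicted loss at N=1e11: {power_law(1e11, a, b, c):.3f}")
```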
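For readers unfamiliar with the synthetic control method in the AISTATS 2024 item: the textbook formulation (Abadie et al.) chooses nonnegative donor weights that reproduce the treated unit's pre-intervention trajectory, then uses the weighted donors as the counterfactual. The block below is that standard estimator, not the modified method the paper proposes; unit 1 is treated, units 2..J+1 are donors, and T_0 is the last pre-intervention period.

```latex
% Standard synthetic control estimator (textbook form, not the paper's variant).
\begin{align}
  \hat{W} &= \arg\min_{W} \sum_{t=1}^{T_0}
      \Bigl( Y_{1t} - \sum_{j=2}^{J+1} w_j Y_{jt} \Bigr)^2
      \quad \text{s.t. } w_j \ge 0,\ \sum_{j=2}^{J+1} w_j = 1, \\
  \hat{\tau}_{1t} &= Y_{1t} - \sum_{j=2}^{J+1} \hat{w}_j Y_{jt},
      \qquad t > T_0 .
\end{align}
```

The abstract's concern maps directly onto this form: when T_0 is small, the weights can overfit the short pre-period, degrading the post-intervention counterfactual predictions.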
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.