Customer-obsessed science
Research areas
November 6, 2025
A new approach to reducing carbon emissions reveals previously hidden emission “hotspots” within value chains, helping organizations make more detailed and dynamic decisions about their future carbon footprints.
Featured news
Modern time-series forecasting models often fail to make full use of rich unstructured information about the time series themselves. This lack of proper conditioning can lead to "obvious" model failures; for example, models may be unaware of the details of a particular product and hence fail to anticipate seasonal surges in customer demand in the lead-up to major exogenous events such as holidays.
Research on neural networks for time series has mostly focused on developing models that learn patterns in the target signal without the use of additional auxiliary or exogenous information. In applications such as selling products on a marketplace, however, the target signal is influenced by such variables, and leveraging exogenous information is important.
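As an illustration of the general point above (and not the model proposed in this work), here is a minimal sketch that fits a plain linear autoregressive forecaster on synthetic demand data, once without and once with a hypothetical exogenous holiday indicator; all names and data are made up for the example.

```python
# Minimal sketch (not the paper's model): a linear autoregressive forecaster
# with and without an exogenous "holiday" indicator, on synthetic demand data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic daily demand: smooth baseline plus spikes on "holiday" days.
n_days = 400
holiday = (np.arange(n_days) % 90 == 0).astype(float)   # hypothetical exogenous signal
demand = 50 + 5 * np.sin(np.arange(n_days) / 14) + 40 * holiday + rng.normal(0, 2, n_days)

def make_features(series, exog, n_lags=7, use_exog=True):
    """Stack lagged demand values (and optionally the exogenous flag) as features."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        lags = series[t - n_lags:t]
        row = np.concatenate([lags, [exog[t]]]) if use_exog else lags
        X.append(row)
        y.append(series[t])
    return np.array(X), np.array(y)

split = 300  # train on the first 300 days, evaluate on the rest
for use_exog in (False, True):
    X, y = make_features(demand, holiday, use_exog=use_exog)
    model = LinearRegression().fit(X[: split - 7], y[: split - 7])
    pred = model.predict(X[split - 7:])
    mae = np.mean(np.abs(pred - y[split - 7:]))
    print(f"exogenous holiday feature = {use_exog}: test MAE = {mae:.2f}")
```

On this toy data, the covariate-aware model anticipates the demand spike on the held-out holiday day, which the lag-only model cannot see coming.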
2024
We introduce VideoLISA, a video-based multimodal large language model designed to tackle the problem of language-instructed reasoning segmentation in videos. Leveraging the reasoning capabilities and world knowledge of large language models, and augmented by the Segment Anything Model, VideoLISA generates temporally consistent segmentation masks in videos based on language instructions.
Reinforcement Learning (RL) has achieved state-of-the-art performance in stationary environments with effective simulators. However, lifelong and open-world RL applications, such as robotics, stock trading, and recommendation systems, change over time in adversarial ways. Non-stationary environments pose challenges for RL agents due to constant distribution shifts from the training data, leading to deteriorating performance.
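As a toy illustration of that distribution-shift problem (not the approach studied in this work), the sketch below runs a two-armed bandit whose reward means swap halfway through training: a sample-average value estimate clings to stale statistics, while a recency-weighted estimate adapts to the shift.

```python
# Toy illustration of non-stationarity (not the method above): a two-armed bandit
# whose reward means swap halfway through. A plain sample-average estimate reacts
# slowly to the shift; an exponentially discounted (recency-weighted) one adapts.
import numpy as np

T = 2000
means = np.array([1.0, 0.0])           # arm 0 is best at first ...

def run(step_size=None):
    rng = np.random.default_rng(1)      # same reward stream for both agents
    q = np.zeros(2)                     # value estimates
    counts = np.zeros(2)
    reward_total = 0.0
    for t in range(T):
        m = means if t < T // 2 else means[::-1]   # ... then the arms swap
        arm = int(np.argmax(q)) if rng.random() > 0.1 else int(rng.integers(2))  # eps-greedy
        r = rng.normal(m[arm], 0.5)
        reward_total += r
        counts[arm] += 1
        alpha = step_size if step_size is not None else 1.0 / counts[arm]
        q[arm] += alpha * (r - q[arm])             # incremental value update
    return reward_total / T

print(f"sample-average agent   : mean reward {run(step_size=None):.3f}")
print(f"recency-weighted agent : mean reward {run(step_size=0.1):.3f}")
```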
2024
Visual-Language Alignment (VLA) has gained a lot of attention since CLIP’s groundbreaking work. Although CLIP performs well, its typical direct latent-feature alignment lacks clarity in its representation and similarity scores. On the other hand, a lexical representation, a vector whose elements represent the similarity between the sample and individual words from the vocabulary, is a natural sparse representation.
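A rough sketch of the lexical-representation idea, using random stand-in embeddings rather than CLIP’s actual encoders: score a sample embedding against every vocabulary word and keep only the top-k similarities, yielding an interpretable sparse vector.

```python
# Rough sketch of a lexical (sparse, vocabulary-indexed) representation.
# The embeddings here are random stand-ins, not CLIP weights.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["dog", "cat", "car", "tree", "beach", "pizza", "guitar", "mountain"]
dim = 32

word_emb = rng.normal(size=(len(vocab), dim))           # one embedding per vocabulary word
sample_emb = word_emb[0] + 0.1 * rng.normal(size=dim)   # a sample that "looks like" dog

def lexical_repr(sample, words, top_k=3):
    """Cosine similarity against every vocabulary word, sparsified to the top-k entries."""
    sims = words @ sample / (np.linalg.norm(words, axis=1) * np.linalg.norm(sample))
    sparse = np.zeros_like(sims)
    top = np.argsort(sims)[-top_k:]
    sparse[top] = sims[top]                              # keep only the strongest matches
    return sparse

vec = lexical_repr(sample_emb, word_emb)
for word, weight in zip(vocab, vec):
    if weight != 0:
        print(f"{word}: {weight:.3f}")
```

Because each nonzero entry is tied to a concrete vocabulary word, the resulting vector is directly readable, which is the appeal of lexical representations over opaque dense features.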
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.
View all