Customer-obsessed science
Research areas
November 20, 2025 (4 min read)
A new evaluation pipeline called FiSCo uncovers hidden biases and offers an assessment framework that evolves alongside language models.
October 2, 2025 (3 min read)
September 2, 2025 (3 min read)
Featured news
SIGIR 2023
Immersive technologies such as virtual reality (VR) and head-mounted displays (HMD) have seen increased adoption in recent years. In this work, we study two factors that influence users' experience when shopping in VR through voice queries: (1) context alignment of the search environment and (2) the level of detail on the Search Engine Results Page (SERP). To this end, we developed a search system for VR…
SIGIR 2023
Conversation disentanglement aims to identify and group utterances from a conversation into separate threads. Existing methods in the literature primarily focus on disentangling multi-party conversations involving three or more speakers, which enables their models to explicitly or implicitly incorporate speaker-related feature signals while disentangling. Most existing models require a large amount of human…
ACL 2023
Methods to generate text from structured data have advanced significantly in recent years, primarily due to fine-tuning of pre-trained language models on large datasets. However, such models can fail to produce output faithful to the input data, particularly on out-of-domain data. Sufficient annotated data is often not available for specific domains, leading us to seek an unsupervised approach to improve…
HCOMP 2023
Video object tracking annotation tasks are a form of complex data labeling that is inherently tedious and time-consuming. Prior studies of these tasks focus primarily on the quality of the provided data, leaving much to be learned about how the data was generated and the factors that influenced how it was generated. In this paper, we take steps toward this goal by examining how human annotators spend their…
ICCV 2023
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs). Traditionally, compositionality has been associated with algebraic operations on embeddings of words from a preexisting vocabulary. In contrast, we seek to approximate representations from an encoder as combinations of a smaller set of vectors in the embedding space. These vectors can be seen as "ideal…
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.