- ICCV 2021: Contrastive learning allows us to flexibly define powerful losses by contrasting positive pairs from sets of negative samples. Recently, the principle has also been used to learn cross-modal embeddings for video and text, yet without exploiting its full potential. In particular, previous losses do not take the intra-modality similarities into account, which leads to inefficient embeddings, as the same content… (a minimal contrastive-loss sketch follows this list)
- ICCV 2021: Traditional normalization techniques (e.g., Batch Normalization and Instance Normalization) generally make the simplistic assumption that training and test data follow the same distribution. As distribution shifts are inevitable in real-world applications, even well-trained models using these normalization methods can perform poorly in new environments. Can we develop new normalization methods to improve generalization… (see the normalization sketch after this list)
- NeurIPS 2021: Episodic training is a core ingredient of few-shot learning, used to train models on tasks with limited labelled data. Despite its success, episodic training remains largely understudied, prompting us to ask: what is the best way to sample episodes? In this paper, we first propose a method to approximate episode sampling distributions based on their difficulty. Building on this method, we perform… (see the episode-sampling sketch after this list)
- NeurIPS 2021 Workshop on Data-Centric AI: In this work we discuss One-Shot Object Detection, the challenging task of detecting novel objects in a target scene using a single reference image called a query. To address this challenge we introduce SPOT (Surfacing POsitions using Transformers), a novel transformer-based, end-to-end architecture that exploits the synergy between the provided query and target images via a learnable Robust Feature Matching module… (see the cross-attention sketch after this list)
- WACV 2022: We revisit existing ensemble diversification approaches and present two novel diversification methods tailored for open-set scenarios. The first method uses a new loss, designed to encourage model disagreement on outliers only, thus alleviating the intrinsic accuracy-diversity trade-off. The second method achieves diversity via automated feature engineering, by training each model to disregard input features… (see the disagreement-loss sketch after this list)
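To make the contrastive-learning setup above concrete, here is a minimal sketch of the standard cross-modal InfoNCE loss that such work typically builds on; the function name, the symmetric formulation, and the temperature value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def infonce_cross_modal(video_emb, text_emb, temperature=0.07):
    """Standard cross-modal InfoNCE (illustrative baseline): matched
    (video_i, text_i) pairs are positives; every other pairing in the
    batch serves as a negative. Note that only cross-modal similarities
    are contrasted -- the intra-modality similarities the abstract says
    previous losses ignore never enter the objective."""
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.t() / temperature      # (B, B) similarities
    targets = torch.arange(logits.size(0))               # diagonal = positives
    # Symmetric loss covers both retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random embeddings.
v, t = torch.randn(8, 256), torch.randn(8, 256)
print(infonce_cross_modal(v, t).item())
```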
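For the normalization paper, the sketch below shows one common way to expose the problem the abstract raises; it is an illustrative workaround, not the paper's proposed method: letting BatchNorm layers re-estimate statistics from test batches instead of relying on training-time running statistics that go stale under distribution shift.

```python
import torch
import torch.nn as nn

def use_test_batch_stats(model):
    """Illustrative workaround, not the paper's method: make every
    BatchNorm2d layer normalize with statistics computed from the
    incoming test batch, instead of the training-time running
    statistics that become stale under distribution shift."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.train()                      # forward pass uses batch statistics
            m.track_running_stats = False
            m.running_mean = None          # drop the stale training statistics
            m.running_var = None
    return model

# Toy usage: normalization statistics now come from the 4-image test batch.
net = use_test_batch_stats(nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU()).eval())
out = net(torch.randn(4, 3, 32, 32))
```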
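For the episodic-training paper, here is a minimal sketch of difficulty-weighted episode sampling; the scoring function `difficulty_fn` and the `power` hyperparameter are hypothetical stand-ins for whatever difficulty estimate the paper actually uses.

```python
import random

def sample_episode_by_difficulty(candidates, difficulty_fn, power=1.0):
    """Sketch of difficulty-weighted episode sampling: score each candidate
    episode with a proxy difficulty measure, then draw one with probability
    proportional to difficulty**power. Both difficulty_fn (e.g., the loss of
    a partially trained model on the episode) and power are hypothetical
    knobs, not the paper's exact estimator."""
    weights = [difficulty_fn(ep) ** power for ep in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Toy usage: higher-id episodes are "harder" and get sampled more often.
episodes = [{"id": i} for i in range(10)]
picked = sample_episode_by_difficulty(episodes, lambda ep: ep["id"] + 1, power=2.0)
```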
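For SPOT, the snippet above does not spell out the Robust Feature Matching module, so the sketch below uses generic cross-attention between query-image and target-image tokens purely as a stand-in for how a reference image can condition target features; the class name and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class CrossAttentionMatcher(nn.Module):
    """Generic cross-attention between target-image and query-image tokens,
    a stand-in for SPOT's learnable Robust Feature Matching module (whose
    exact design the snippet above does not describe)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target_tokens, query_tokens):
        # Target tokens attend to the reference (query) image's tokens,
        # so target features become conditioned on the object to detect.
        matched, _ = self.attn(target_tokens, query_tokens, query_tokens)
        return self.norm(target_tokens + matched)

m = CrossAttentionMatcher()
tgt = torch.randn(2, 400, 256)   # e.g., a 20x20 target feature map, flattened
qry = torch.randn(2, 100, 256)   # e.g., a 10x10 query feature map, flattened
print(m(tgt, qry).shape)         # torch.Size([2, 400, 256])
```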
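For the open-set ensemble paper, here is an illustrative disagreement term of the kind the first method describes: it penalizes agreement between two ensemble members on known-outlier inputs only, leaving the usual task loss on inliers untouched. The exact form of the paper's loss may differ.

```python
import torch
import torch.nn.functional as F

def outlier_disagreement(logits_a, logits_b):
    """Illustrative 'disagree on outliers only' term, not the paper's exact
    loss: penalize agreement between two ensemble members' predictive
    distributions, applied solely to known-outlier inputs so the standard
    task loss on inliers is unaffected."""
    p_a = F.softmax(logits_a, dim=-1)
    p_b = F.softmax(logits_b, dim=-1)
    agreement = (p_a * p_b).sum(dim=-1)  # 1.0 for identical one-hot predictions
    return agreement.mean()              # minimizing this pushes them apart

# Toy usage on a batch of outlier logits; in training this would be added
# (with some weight) to the cross-entropy computed on inlier batches.
oa, ob = torch.randn(16, 10), torch.randn(16, 10)
print(outlier_disagreement(oa, ob).item())
```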
Related content
- March 02, 2021: The newest chapter addresses a problem that often bedevils nonparametric machine learning models.
- February 26, 2021: How a team of designers, scientists, developers, and engineers worked together to create a truly unique device in Echo Show 10.
- February 24, 2021: Complex algorithms promise to fundamentally change a craft that still relies almost entirely on handwork.
- February 16, 2021: A new approach that grows networks dynamically promises improvements over GANs with fixed architectures or predetermined growing strategies.
- January 15, 2021: Two papers at WACV propose neural models for enhancing video-streaming experiences.
- January 08, 2021: Amazon distinguished scientist Gérard Medioni on the complexities of “understanding your environment through visual input”.