- MLSys 2021. Manual debugging is a common productivity drain in the machine learning (ML) lifecycle. Identifying underperforming training jobs requires constant developer attention and deep domain expertise. As state-of-the-art models grow in size and complexity, debugging becomes increasingly difficult. Just as unit tests boost traditional software development, an automated ML debugging library can save time and money.
- AAAI 2021 Workshop on Reasoning and Learning for Human-Machine Dialogs (DEEP-DIAL21). Embodied instruction following is a challenging problem requiring an agent to infer a sequence of primitive actions to achieve a goal environment state from complex language and visual inputs. Action Learning From Realistic Environments and Directives (ALFRED) is a recently proposed benchmark for this problem consisting of step-by-step natural language instructions to achieve subgoals which compose to an…
- ICLR 2021. Concept-based explanation approaches are popular model interpretability tools because they express the reasons for a model’s predictions in terms of concepts that are meaningful to domain experts. In this work, we study the problem of concepts being correlated with confounding information in the features. We propose a new causal prior graph for modeling the impacts of unobserved variables and a method…
- WACV 2021. Object detection models perform well at localizing and classifying objects that they are shown during training. However, due to the difficulty and cost associated with creating and annotating detection datasets, trained models detect a limited number of object types, with unknown objects treated as background content. This hinders the adoption of conventional detectors in real-world applications like large-scale…
- SLT 2021. Emotion recognition is a challenging task due to the limited availability of in-the-wild labeled datasets. Self-supervised learning has shown improvements on tasks with limited labeled datasets in domains like speech and natural language. Models such as BERT learn to incorporate context in word embeddings, which translates to improved performance in downstream tasks like question answering. In this work, we…
Related content
- July 24, 2020. Amazon automated reasoning scientists showcase verification methods being applied across Amazon during CAV 2020.
- July 24, 2020. New position encoding scheme improves state-of-the-art performance on several natural-language-processing tasks.
- July 20, 2020. Method presented at ICML workshop works with any machine learning model and fairness criterion.
- July 17, 2020. Watch the keynote by Alex Smola, AWS vice president and distinguished scientist, presented at the AutoML@ICML2020 workshop.
- July 15, 2020. New transferability metric is more accurate and more generally applicable than predecessors.
- July 09, 2020. Amazon scientist’s award-winning paper predates, but later found applications in, the deep-learning revolution.