Customer-obsessed science
- November 28, 2025 (4 min read): Large language models are increasing the accuracy, reliability, and consistency of the product catalogue at scale.
Featured news
- ICML 2022: Deep neural networks (DNNs) are known to be highly vulnerable to adversarial examples (AEs) that include malicious perturbations. Assumptions about the statistical differences between natural and adversarial inputs are commonplace in many detection techniques. As a best practice, AE detectors are evaluated against adaptive attackers who actively perturb their inputs to avoid detection. Due to the difficulties …
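One common instance of the "statistical differences" assumption mentioned above is a detector that models the feature distribution of natural inputs and flags inputs that fall too far from it. The sketch below is illustrative only, not the paper's method: it fits a Gaussian to natural-input features and scores inputs by Mahalanobis distance; the feature dimensions and data are synthetic stand-ins.

```python
import numpy as np

def fit_detector(nat_features):
    """Fit a simple Gaussian model to features extracted from natural inputs."""
    mu = nat_features.mean(axis=0)
    # Small diagonal term keeps the covariance invertible.
    cov = np.cov(nat_features, rowvar=False) + 1e-6 * np.eye(nat_features.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, prec):
    """Mahalanobis distance from the natural-input distribution; larger = more suspicious."""
    d = x - mu
    return float(np.sqrt(d @ prec @ d))

rng = np.random.default_rng(0)
nat = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in for natural-input features
mu, prec = fit_detector(nat)
clean = rng.normal(0.0, 1.0, size=8)        # in-distribution input
perturbed = clean + 3.0                     # stand-in for an adversarially shifted input
```

An adaptive attacker, as the abstract notes, would perturb inputs specifically to keep this score low, which is why evaluating against such attackers is treated as a best practice.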
- CVPR 2022: Being able to spot defective parts is a critical component in large-scale industrial manufacturing. A particular challenge that we address in this work is the cold-start problem: fit a model using nominal (non-defective) example images only. While handcrafted solutions per class are possible, the goal is to build systems that work well simultaneously on many different tasks automatically. The best performing …
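A minimal way to see the cold-start setup described above: train on nominal examples only, then score a test image by how far its features sit from the nominal set. The sketch below is an assumption-laden illustration (nearest-neighbor scoring over synthetic feature vectors), not the paper's actual model.

```python
import numpy as np

def knn_anomaly_score(query, nominal_bank, k=3):
    """Mean distance to the k nearest nominal (non-defective) feature vectors.

    Only nominal data is needed at fit time, matching the cold-start setting:
    a large distance means the query resembles nothing seen during training.
    """
    dists = np.linalg.norm(nominal_bank - query, axis=1)
    return float(np.sort(dists)[:k].mean())

rng = np.random.default_rng(1)
bank = rng.normal(0.0, 0.1, size=(200, 16))   # features of nominal parts only
normal_part = rng.normal(0.0, 0.1, size=16)   # drawn from the same distribution
defective_part = normal_part + 1.0            # clearly out-of-distribution
```

Because the scorer never needs defective examples, the same recipe can be applied to many part types automatically, which is the "many different tasks" goal the abstract mentions.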
- KDD 2022: Online ads are essential to all businesses, and ad headlines are one of their core creative components. Existing methods can generate headlines automatically and also optimize their click-through rate (CTR) and quality. However, evolving ad formats and changing creative requirements make it difficult to generate optimized and customized headlines. We propose a novel method that uses prefix control tokens along …
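The general idea behind prefix control tokens is to prepend special tokens to the model input so that generation can be conditioned on desired attributes. The snippet below is a schematic sketch only; the token names (`short`, `high_ctr`) are invented for illustration and are not the paper's vocabulary.

```python
def build_controlled_input(product_text, controls):
    """Prepend control tokens so a seq2seq headline generator can condition on them.

    Each control becomes a special token like <short>; at training time these
    tokens are paired with headlines that exhibit the corresponding attribute.
    """
    prefix = " ".join(f"<{c}>" for c in controls)
    return f"{prefix} {product_text}"

inp = build_controlled_input("Wireless noise-cancelling headphones", ["short", "high_ctr"])
# inp == "<short> <high_ctr> Wireless noise-cancelling headphones"
```

Swapping the prefix at inference time then steers the same model toward different formats, which is one way to cope with the evolving ad formats the abstract describes.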
- KDD 2022: With the increasing adoption of machine learning (ML) models and systems in high-stakes settings across different industries, guaranteeing a model's performance after deployment has become crucial. Monitoring models in production is a critical aspect of ensuring their continued performance and reliability. We present Amazon SageMaker Model Monitor, a fully managed service that continuously monitors the …
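A core building block of production model monitoring is comparing the live distribution of a feature against its training-time baseline. The sketch below is a generic illustration of that idea using a population-stability-index-style score; it is not SageMaker Model Monitor's implementation, and the thresholds and data are synthetic.

```python
import numpy as np

def drift_score(train_values, live_values, bins=10):
    """PSI-style divergence between a live feature distribution and its baseline.

    Histograms both samples over bins derived from the training data; a score
    near 0 means no drift, larger values mean the live data has shifted.
    """
    edges = np.histogram_bin_edges(train_values, bins=bins)
    p, _ = np.histogram(train_values, bins=edges)
    q, _ = np.histogram(live_values, bins=edges)
    p = (p + 1) / (p.sum() + bins)   # Laplace smoothing avoids log(0)
    q = (q + 1) / (q.sum() + bins)
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 1.0, 5000)   # feature values seen at training time
same = rng.normal(0.0, 1.0, 5000)       # live traffic, no drift
shifted = rng.normal(1.5, 1.0, 5000)    # live traffic with a mean shift
```

A monitoring job would compute such scores on a schedule and alert when they cross a threshold, which is the kind of continuous check the abstract refers to.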
- KDD 2022: This paper describes a new method for representing the embedding tables of graph neural networks (GNNs) more compactly via tensor-train (TT) decomposition. We consider the scenario in which (a) the graph data lack node features, thereby requiring the learning of embeddings during training; and (b) we wish to exploit GPU platforms, where smaller tables are needed to reduce host-to-GPU communication, even for …
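To make the compression idea concrete: in TT format, one large N x D embedding table is replaced by small cores, and any single row can be materialized on the fly from those cores. The two-core sketch below is a minimal illustration of the general TT technique, with shapes and ranks chosen arbitrarily; it is not the paper's specific construction.

```python
import numpy as np

# An implicit 10,000 x 64 embedding table: N = n1 * n2 rows, D = d1 * d2 columns.
# Two small TT cores (6,400 parameters) replace the full table (640,000 parameters).
n1, n2, d1, d2, rank = 100, 100, 8, 8, 4
rng = np.random.default_rng(0)
core1 = rng.normal(size=(n1, d1, rank))   # first TT core
core2 = rng.normal(size=(n2, rank, d2))   # second TT core

def tt_embedding(idx):
    """Materialize row `idx` of the implicit 10,000 x 64 table from the cores."""
    i1, i2 = divmod(idx, n2)                      # factor the row index
    return (core1[i1] @ core2[i2]).reshape(-1)    # (d1,rank)@(rank,d2) -> (d1*d2,)

vec = tt_embedding(1234)   # a 64-dimensional embedding, never stored explicitly
```

Because only the cores live in GPU memory, host-to-GPU traffic shrinks by the same factor as the parameter count, which is the motivation stated in the abstract.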
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.