Customer-obsessed science
Research areas
- December 1, 2025 · 8 min read: “Network language models” will coordinate complex interactions among intelligent components, computational infrastructure, access points, data centers, and more.
- November 20, 2025 · 4 min read
- October 20, 2025 · 4 min read
- October 14, 2025 · 7 min read
Featured news
- RecSys 2021: Existing recommender systems in the e-commerce domain primarily focus on generating a set of relevant items as recommendations; however, few existing systems utilize underlying item attributes as a key organizing principle in presenting recommendations to users. Mining important attributes of items from customer perspectives and presenting them along with item sets as recommendations can provide users more …
- ACL-IJCNLP 2021 Workshop on Meta-Learning and its Applications to NLP: Multilingual pre-trained contextual embedding models (Devlin et al., 2019) have achieved impressive performance on zero-shot cross-lingual transfer tasks. Finding the most effective strategy to fine-tune these models on high-resource languages so that it transfers well to the zero-shot languages is a nontrivial task. In this paper, we propose a novel meta-optimizer to soft-select which layers of the pre-trained … (a toy sketch of the layer soft-selection idea appears after this list)
- arXiv 2021: Lattice surgery protocols allow for the efficient implementation of universal gate sets with two-dimensional topological codes where qubits are constrained to interact with one another locally. In this work, we first introduce a decoder capable of correcting spacelike and timelike errors during lattice surgery protocols. Afterwards, we compute logical failure rates of a lattice surgery protocol for a biased …
- EMNLP 2021: The task of learning from only a few examples (called a few-shot setting) is of key importance and relevance to a real-world setting. For question answering (QA), the current state-of-the-art pre-trained models typically need finetuning on tens of thousands of examples to obtain good results. Their performance degrades significantly in a few-shot setting (< 100 examples). To address this, we propose a finetuning …
- arXiv 2021: Biased-noise qubits are a promising candidate for realizing hardware-efficient fault-tolerant quantum computing. One promising biased-noise qubit is the Kerr cat qubit, which has recently been demonstrated experimentally. Despite various unique advantages of Kerr cat qubits, we explain how the noise bias of Kerr cat qubits is severely limited by heating-induced leakage in their current implementations.
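To make the "soft-select which layers" idea from the meta-learning paper above concrete, here is a minimal sketch, not the paper's meta-optimizer: learnable per-layer logits are turned into softmax weights that mix the hidden states of a pre-trained encoder's layers, so the model learns how much each layer contributes rather than hard-picking one. The class name, tensor shapes, and dimensions below are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of soft layer selection (assumed names and shapes; not the paper's method).
import torch
import torch.nn as nn

class SoftLayerSelector(nn.Module):
    def __init__(self, num_layers: int):
        super().__init__()
        # One learnable logit per encoder layer; softmax makes the selection "soft".
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
        # layer_states: (num_layers, batch, seq_len, hidden_dim)
        weights = torch.softmax(self.layer_logits, dim=0)          # (num_layers,)
        # Weighted sum over the layer axis -> one mixed representation per token.
        return torch.einsum("l,lbsh->bsh", weights, layer_states)

if __name__ == "__main__":
    num_layers, batch, seq_len, hidden = 12, 2, 16, 768
    # Stand-in for the per-layer hidden states a multilingual encoder would return.
    layer_states = torch.randn(num_layers, batch, seq_len, hidden)
    selector = SoftLayerSelector(num_layers)
    mixed = selector(layer_states)                                  # (batch, seq_len, hidden)
    print(mixed.shape)
```

In practice the mixed representation would feed a task head during fine-tuning on the high-resource language, with the layer logits trained alongside (or, in the paper's setting, meta-learned); the sketch only shows the mixing step itself.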
Collaborations
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.