- KDD 2023 Workshop on Multi-Armed Bandits and Reinforcement Learning (MARBLE); ICML 2023 Workshop on The Many Facets of Preference-based Learning. Motivated by bid recommendation in online ad auctions, this paper considers a general class of multi-level and multi-agent games with two major characteristics: a large number of anonymous agents, and an intricate interplay between competition and cooperation. To model such complex systems, we propose a novel and tractable bi-objective optimization formulation with mean-field approximation.
- KDD 2023 Workshop on Mining and Learning with Graphs. Substitute recommendation in e-commerce has attracted increasing attention in recent years as a way to improve customer experience. In this work, we propose a multi-task graph learning framework that jointly learns from supervised and unsupervised objectives over heterogeneous graphs. In particular, we propose a new contrastive method that extracts global information from both positive and negative neighbors.
- IEEE ICIP 2023. Convolutional neural networks (CNNs) have shown promising improvements in video coding efficiency when included in traditional block-based codecs as a loop filter. Unfortunately, these coding gains are often accompanied by significant increases in complexity, measured by the number of multiply-accumulate (MAC) operations, that make them intractable in practice. As a result, there is considerable interest
- ICML 2023 Workshop on Sampling and Optimization in Discrete Spaces. Recent developments in natural language processing (NLP) have highlighted the need for substantial amounts of data for models to capture textual information accurately, raising concerns about the computational resources and time required for training such models. This paper introduces SEmantics for data SAliency in Model performance Estimation (SeSaME), an efficient data sampling mechanism
- IEEE 2023 Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). Classical speech coding uses low-complexity postfilters with zero lookahead to enhance the quality of coded speech, but their effectiveness is limited by their simplicity. Deep neural networks (DNNs) can be much more effective but require high complexity and model size, or added delay. We propose a DNN model that generates classical filter kernels on a per-frame basis with a model of just 300K parameters.
Related content
- March 21, 2019. Sentiment analysis is the attempt to determine, computationally, from someone’s words how he or she feels about something. It has a host of applications in market research, media analysis, customer service, and product recommendation, among other areas. Sentiment classifiers are typically machine learning systems, and any given application of sentiment analysis may suffer from a lack of annotated data for training purposes.
- March 20, 2019. Although deep neural networks have enabled accurate large-vocabulary speech recognition, training them requires thousands of hours of transcribed data, which is time-consuming and expensive to collect. Amazon scientists have therefore been investigating techniques that let Alexa learn with minimal human involvement, techniques that fall into the categories of unsupervised and semi-supervised learning.
- March 11, 2019. In experiments involving sound recognition, a technique reduces the error rate by 15% to 30%.
- March 5, 2019. The 2018 Alexa Prize featured eight student teams from four countries, each of which adopted distinctive approaches to some of the central technical questions in conversational AI. We survey those approaches in a paper we released late last year, and the teams themselves go into even greater detail in the papers they submitted to the latest Alexa Prize Proceedings. Here, we touch on just a few of the teams’ innovations.
- February 27, 2019. To ensure that Alexa Prize contestants can concentrate on dialogue systems — the core technology of socialbots — Amazon scientists and engineers built a set of machine learning modules that handle fundamental conversational tasks, along with a development environment that lets contestants easily mix and match existing modules with those of their own design.
- January 30, 2019. Many of today’s most popular AI systems are, at their core, classifiers. They classify inputs into different categories: this image is a picture of a dog, not a cat; this audio signal is an instance of the word “Boston”, not the word “Seattle”; this sentence is a request to play a video, not a song. But what happens if you need to add a new class to your classifier — if, say, someone releases a new type of automated household appliance that your smart-home system needs to be able to control?