ICML 2024
July 21 - 27, 2024
Vienna, Austria

Overview

The International Conference on Machine Learning (ICML) is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning, which is used in closely related areas such as artificial intelligence, statistics, and data science, as well as in important application areas such as machine vision, computational biology, speech recognition, and robotics. Learn more about Amazon's 20+ accepted publications in our quick guide.

Sponsorship Details

Accepted publications

Workshops

ICML 2024 Workshop on Foundation Models in the Wild
July 26
ICML 2024 Workshop on NextGenAISafety
July 26
ICML 2024 Workshop on Automated Reinforcement Learning: Exploring Meta-Learning, AutoML, and LLMs
July 27
The past few years have seen a surge of interest in reinforcement learning, with breakthrough successes in various domains, but the technology remains brittle, often relying on heavily engineered solutions. Several recent works have demonstrated that reinforcement learning algorithms are sensitive to design choices, making it challenging to effectively apply them in practice, especially on novel problems, and limiting their potential impact. This workshop aims to bring together different communities working on solving these problems, including those in reinforcement learning, meta-learning, AutoML, and language models, with the goal of fostering collaboration and cross-pollination of ideas to advance the field of automated reinforcement learning.

Website: https://autorlworkshop.github.io/

Amazon co-organizer: Vu Nguyen
ICML 2024 Workshop on In-Context Learning
July 27
In-context learning (ICL) is an emerging ability of large-scale models, including large language models (LLMs) like GPT-3, to acquire new skills directly from examples provided in the input context, without separate training or fine-tuning, enabling these models to adapt rapidly to new tasks, datasets, and domains. This workshop brings together diverse perspectives on this new paradigm to assess progress, synthesize best practices, and chart open problems. Core topics will include architectural and other inductive biases enabling in-context skill acquisition, and reliable evaluation of ICL in application domains including reinforcement learning, representation learning, and safe and reliable machine learning.
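The mechanism described above can be illustrated with a minimal sketch: the task is defined entirely by input/output demonstrations placed in the prompt, and the model is expected to continue the pattern with no weight updates. The helper name `build_icl_prompt` and the prompt format are illustrative assumptions, not a specific API from the workshop.

```python
def build_icl_prompt(examples, query):
    """Format few-shot demonstrations plus a new query into one prompt.

    The model must infer the task purely from the context; no separate
    training or fine-tuning step is involved.
    """
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# The demonstrations alone define the task (here, English -> French).
demos = [("sea otter", "loutre de mer"), ("cheese", "fromage")]
prompt = build_icl_prompt(demos, "peppermint")
print(prompt)
```

Swapping in different demonstrations changes the task the model performs, which is what lets a single frozen model adapt rapidly to new tasks and domains.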

Website: https://iclworkshop.github.io/

Academics at Amazon

  • The program is designed for academics from universities around the globe who want to work on large-scale technical challenges while continuing to teach and conduct research at their universities.
  • The program offers recent PhD graduates an opportunity to advance research while working alongside experienced scientists with backgrounds in industry and academia.
US, WA, Seattle
Amazon is looking for a talented Postdoctoral Scientist to join its AI Research team, working on foundation models, large-scale representation learning, and distributed learning methods and systems, for a one-year, full-time research position.

At Amazon, you will invent, implement, and deploy state-of-the-art machine learning algorithms and systems. You will build prototypes and innovate on new representation learning solutions. You will interact closely with our customers and with the academic and research communities. You will be at the heart of a growing and exciting focus area and work with other acclaimed engineers and world-famous scientists.

Large-scale foundation models have been the powerhouse behind many of the recent advancements in computer vision, natural language processing, automatic speech recognition, recommendation systems, and time series modeling. Developing such models requires not only skillful modeling in individual modalities, but also an understanding of how to synergistically combine them, and how to scale the modeling methods to learn with huge models and on large datasets. Join us to work as an integral part of a team that has diverse experiences in this space. We actively work on these areas:

  • Hardware-informed efficient model architecture, training objective, and curriculum design
  • Distributed training and accelerated optimization methods
  • Continual learning, multi-task/meta learning
  • Reasoning, interactive learning, reinforcement learning
  • Robustness, privacy, model watermarking
  • Model compression, distillation, pruning, sparsification, quantization

In this role, you will have the opportunity to:

  • Leverage machine learning models and advanced statistical techniques to extract valuable insights from historical data and comparable item information.
  • Tackle data scarcity challenges by developing innovative approaches to maximize the utility of available data sources.
  • Collaborate with cross-functional teams to develop and deploy production-ready solutions.
  • Participate in research activities, including publishing papers, attending conferences, and collaborating with academic institutions to advance the state of the art in relevant fields.

Key job responsibilities

In this role you will:

  • Work closely with a senior science advisor, collaborate with other scientists and engineers, and be part of Amazon’s vibrant and diverse global science community.
  • Publish your innovations in top-tier academic venues and hone your presentation skills.
  • Be inspired by challenges and opportunities to invent cutting-edge techniques in your area(s) of expertise.