Amazon's annual machine learning conference focuses on community and connections
Internal event designed to replicate external science conferences.
Amazon’s annual internal science conference took place recently, featuring keynote and oral presentations, workshops, and tutorials. The event highlighted the high-quality science behind many of the company’s businesses and created connections among scientists and engineers working on similar challenges in different businesses, and among scientists with shared research interests.
This year’s Amazon Machine Learning Conference (AMLC) was a hybrid event featuring an in-person component in Dallas, TX, and a virtual experience open to Amazon employees worldwide. Joseph Sirosh, vice president of Alexa Shopping, opened the in-person event by telling attendees, “Together, we are building culture, community and shared context for the transformational technology of our generation.”
Other internal speakers included Yoelle Maarek, Alexa Shopping vice president of research and science, who defined what the company means by customer-obsessed science; Alex Smola, vice president and distinguished scientist, Amazon Web Services Deep Learning, whose talk focused on AWS AutoML for text and images; and Trishul Chilimbi, vice president and distinguished scientist, Amazon Search Science, who described the company’s artificial intelligence (AI) flywheel and how AI models are fueled with data.
“Since its inception in 2013, AMLC has focused on providing learning and teaching opportunities primarily for our science and engineering communities,” said Muthu Muthukrishnan, the event’s executive sponsor, and vice president of sponsored products, Performance Advertising Technology. “Our goal is to create community and connections that allow us to continually raise the bar on scientific and engineering inventions on behalf of our customers.”
“This week we’re hosting the 9th Amazon Machine Learning Conference, a hybrid event with keynotes, tutorials, and oral presentations, where Amazon scientists discuss advancements in the application of machine learning across the breadth of the company’s businesses.” — Amazon Science (@AmazonScience), October 11, 2022
The event also included five keynote presentations by members of the global academic community: Pascale Fung, chair professor of the Hong Kong University of Science and Technology (HKUST) Department of Electronic and Computer Engineering; Fei-Fei Li, the inaugural Sequoia Professor in the Stanford University Computer Science Department and co-director of Stanford’s Human-Centered AI Institute; Linda Smith, distinguished professor and chancellor’s professor within the Indiana University Department of Psychological and Brain Sciences; Eric Topol, Gary and Mary West Endowed Chair of Innovative Medicine at Scripps Research, and professor of molecular medicine and founder and director of the Scripps Research Translational Institute; and Antonio Torralba, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science at MIT, and investigator at MIT’s Computer Science and Artificial Intelligence Laboratory.
Several of the external presenters’ talks have been made publicly available; each is provided below.
- Pascale Fung: “Value Based NLP”
In this talk, Fung asks: Why do we need to do responsible AI, and why do we care about alignment with human values? She argues that all AI should be responsible AI, but cites two core challenges: 1) What are these values, and who defines them? and 2) How can AI algorithms and models be made to align with these values?
Value Based NLP | Pascale Fung at AMLC 2022
- Fei-Fei Li: "From Seeing to Doing: Understanding and Interacting with the Real World"
In this presentation, Li explains that she has long been fascinated by vision and believes that understanding visual intelligence is the key to understanding general intelligence. She then discusses work in her lab focused on helping computers both understand and interact with the real world.
"Our hope is to create agents or computers that can perceive the real world," she says. She adds that there's a need to connect perception with action, and that what excites her today is the ability for systems "to interact with the world through sophisticated perception and understanding of the world." She discusses work her lab has been doing on robotic learning, adding that "the breakthrough that's down the road must be in unstructured environments."
From Seeing to Doing: Understanding and Interacting with the Real World | Fei-Fei Li at AMLC 2022
- Linda Smith: “Episodes of Experience and Generative Intelligence”
In this talk, Smith explains that humans, including toddlers, are adept at taking knowledge from past experiences and using it in compelling new ways, and highlights findings from studies of toddlers’ natural egocentric experiences. Her primary point: everyday experiences occur in time-extended episodes, and each unique episode is characterized by a suite of coherence statistics. Smith proposes that these statistics are the secret ingredient of innovative intelligence, and that they provide novel insights into the internal processes that learn, generalize, and innovate.
Episodes of Experience and Generative Intelligence | Linda B. Smith at AMLC 2022
- Antonio Torralba: “Learning to See by Looking at Noise”
In this presentation, Torralba talks about two distinct projects aimed at reducing the amount of data required for computer vision and machine learning models. “Generally,” he says, “computer vision systems and machine learning in general requires a lot of data. We need millions of images in order to train computer vision systems that are capable of solving visual tasks with pretty good performance nowadays. And the question really is how much data is really necessary? Do we need all this data? Can we get away with just very, very little data? And does this data need to be real?”
He concludes his talk by saying: “Our goal is to get rid of images and labels in order to train visual systems,” adding that he hopes there will be an evolution in the development of computer vision datasets. The evolution, he suggests, should be toward smaller and smaller synthetic datasets that are capable of reproducing the performance of larger datasets created from real data.
Learning to See by Looking at Noise | Antonio Torralba at AMLC 2022