ICLR: Why does deep learning work, and what are its limits?

Two recent trends in the theory of deep learning are examinations of the double-descent phenomenon and more realistic analyses based on the neural tangent kernel.

At this year’s International Conference on Learning Representations (ICLR), René Vidal, a professor of radiology and electrical engineering at the University of Pennsylvania and an Amazon Scholar, was a senior area chair, overseeing a team of reviewers charged with evaluating paper submissions to the conference. And the paper topic that his team focused on, Vidal says, was the theory of deep learning.

René Vidal, the Rachleff University Professor at the University of Pennsylvania, with joint appointments in the School of Medicine's Department of Radiology and the Department of Electrical and Systems Engineering, a Penn Integrates Knowledge University Professor, and an Amazon Scholar.

“While representation learning and deep learning have been incredibly successful and have produced spectacular results for many application domains, deep networks remain black boxes,” Vidal explains. “How you design deep networks remains an art; there is a lot of trial and error on each and every dataset. So by and large, the area of mathematics of deep learning aims to have theorems, mathematical proofs, that guarantee the performance of deep networks.

“You can ask questions such as ‘Why is it the case that deep networks generalize from one data set to another?’ ‘Can you have a theorem that tells you the classification error on a new dataset versus the classification error on the training data set?’ ‘Can you derive a bound on that error as, say, a function of the number of training examples?’
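
A classical example of the kind of bound Vidal describes — from statistical learning theory generally, not from deep-learning theory specifically — is a uniform-convergence bound: with probability at least 1 − δ over the draw of n training examples, every predictor f in a hypothesis class F (with a loss bounded in [0, 1]) satisfies

```latex
R(f) \;\le\; \widehat{R}_n(f) \;+\; 2\,\mathfrak{R}_n(\ell \circ \mathcal{F}) \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}}
```

where R(f) is the error on new data, R̂_n(f) is the training error, and ℜ_n(ℓ ∘ F) is the Rademacher complexity of the associated loss class. Part of what keeps this an active research area is that, for modern deep networks, standard complexity terms of this kind are typically far too large to explain the generalization observed in practice.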

“There are questions that pertain to optimization. These days, you are minimizing a loss function over, sometimes, billions of parameters. And because the optimization problems are so large, and you have so many training examples, for computational reasons, you are limited to very simple optimization methods. Can you prove convergence for these nonconvex problems? Can you understand what you converge to? Why is it the case that these very simple optimization methods are so successful for these very complex problems?”
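
To make “very simple optimization methods” concrete, here is a minimal, self-contained sketch — an illustrative toy setup, not anything from Vidal’s work — of plain minibatch SGD with a fixed step size fitting a small one-hidden-layer network, a nonconvex problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonconvex problem: fit y = sin(x) with a one-hidden-layer tanh network.
X = rng.uniform(-3, 3, size=(512, 1))
y = np.sin(X[:, 0])

m, lr, batch = 64, 0.1, 32
W = rng.normal(size=(m, 1))
b = np.zeros(m)
v = rng.normal(size=m) / np.sqrt(m)

for step in range(5001):
    i = rng.integers(0, len(X), size=batch)       # sample a minibatch
    h = np.tanh(X[i] @ W.T + b)                   # hidden activations, (batch, m)
    pred = h @ v                                  # network output
    err = pred - y[i]
    # Gradients of 0.5 * mean squared error, computed by hand for this tiny model
    g_v = h.T @ err / batch
    g_h = np.outer(err, v) * (1.0 - h**2)
    g_W = g_h.T @ X[i] / batch
    g_b = g_h.mean(axis=0)
    v -= lr * g_v
    W -= lr * g_W
    b -= lr * g_b
    if step % 1000 == 0:
        print(step, float((err**2).mean()))
```

In practice, methods essentially this simple are what get used even at the scale of billions of parameters; the theoretical puzzle is why they work so well on such complex, nonconvex problems.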

Double descent

In particular, Vidal says, two topics in the theory of deep learning have been drawing increased attention recently. The first is the so-called double-descent phenomenon. The conventional wisdom in AI used to hold that the size of a neural network had to be carefully tailored to both the problem it addressed and the amount of training data available. If the network was too small, it couldn’t learn complex patterns in the data; but if it got too large, it could simply memorize the correct answers for all the data in its training set — a particularly egregious case of overfitting — and it wouldn’t generalize to new inputs.

As a consequence, for a given problem and a given set of training data, as the size of a neural network grows, its error rate on the previously unseen data of the test set at first goes down. At some point, however, the error rate starts to go up again, as the network begins to overfit the data.

In the last few years, however, a number of papers have reported the surprising result that, as the network continues to grow, the error rate goes back down again. This is the double-descent phenomenon — and no one is sure why it happens.

“The error goes down as the size of the model grows, then back up as it overfits,” Vidal explains. “And it gets to a peak at the so-called interpolation limit, which is exactly when, during training, you can achieve zero error, because the network is big enough that it can memorize. But from then on, the testing error goes down again. There have been a lot of papers trying to explain why this happens.”
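
The phenomenon can be reproduced without any deep-learning machinery. The sketch below (an illustrative setup, not Vidal’s) fits minimum-norm least squares on p random ReLU features — the solution gradient descent converges to — and sweeps p past the number of training points n:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_test = 100, 20, 1000

# Illustrative data: linear target plus label noise on the training set
w_true = rng.normal(size=d)
X_tr, X_te = rng.normal(size=(n, d)), rng.normal(size=(n_test, d))
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=n)
y_te = X_te @ w_true

def random_relu_features(X, W):
    # Fixed random projection followed by ReLU; nothing here is learned
    return np.maximum(X @ W.T, 0.0)

for p in [10, 50, 90, 100, 110, 200, 1000, 5000]:
    W = rng.normal(size=(p, d)) / np.sqrt(d)
    Phi_tr = random_relu_features(X_tr, W)
    Phi_te = random_relu_features(X_te, W)
    # Minimum-norm least-squares fit: the solution gradient descent converges to
    coef = np.linalg.pinv(Phi_tr) @ y_tr
    test_mse = np.mean((Phi_te @ coef - y_te) ** 2)
    print(f"p = {p:5d}   test MSE = {test_mse:8.3f}")
```

Typically the test error spikes near p ≈ n, the interpolation threshold, and then decreases again as p keeps growing; the exact numbers depend on the random seed and the noise level.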

The neural tangent kernel

Another interesting recent trend in the theory of deep networks, Vidal says, involves new forms of analysis based on the neural tangent kernel.

“In the past — say, the year 2000 — the way we did learning was by using so-called kernel methods,” Vidal explains. “Kernel methods are based on taking your data and embedding it with a fixed embedding into a very-high-dimensional space, where everything looks linear. You could then use classical linear learning techniques in that embedding space, but the embedding itself was fixed.
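
For concreteness, a fixed-embedding kernel method can be written in a few lines — here, kernel ridge regression with an RBF kernel (an illustrative sketch; note that the kernel, and hence the implicit embedding, is chosen up front and never learned):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Fixed, prespecified kernel: k(x, x') = exp(-gamma * ||x - x'||^2)
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

lam = 1e-2
K = rbf_kernel(X, X)
# Ridge regression in the (implicit) high-dimensional embedding space
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_new = np.linspace(-3, 3, 5)[:, None]
print(rbf_kernel(X_new, X) @ alpha)   # predictions at a few new points, roughly sin(x)
```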

“You can think of deep learning as learning that embedding — mapping the input data to some high-dimensional space. In fact, that’s exactly representation learning. The neural-tangent-kernel regime — a type of initialization, a type of neural network, a type of training — is a regime under which you can approximate the learning dynamics of a deep network using kernels. And therefore you can use classical techniques to understand when deep networks generalize and when they don’t.

“That regime is very unrealistic — networks with infinite width, or initializations that don't change the weights too much during training. In this very contrived and specialized setting, things are easier and we can understand them better. The current trend is to move away from these unrealistic assumptions and acknowledge that the problem is hard: you do want weights to change during training, because if they don't, you're not learning much.”
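
The object at the center of this analysis, the empirical neural tangent kernel, is just the Gram matrix of parameter gradients, K(x, x′) = ∇θ f(x; θ) · ∇θ f(x′; θ). Here is a minimal numpy sketch for a one-hidden-layer network under the usual 1/√m scaling (my own illustrative code, not drawn from any of the papers discussed):

```python
import numpy as np

def grad_params(x, W, v):
    """Gradient of the scalar output f(x) = v . tanh(Wx) / sqrt(m) w.r.t. all parameters."""
    m = len(v)
    act = np.tanh(W @ x)
    g_v = act / np.sqrt(m)                                            # df/dv
    g_W = ((v * (1.0 - act**2))[:, None] * x[None, :]) / np.sqrt(m)   # df/dW
    return np.concatenate([g_W.ravel(), g_v])

def empirical_ntk(X, W, v):
    G = np.stack([grad_params(x, W, v) for x in X])   # (n, num_params)
    return G @ G.T                                    # (n, n) kernel matrix

rng = np.random.default_rng(0)
d, m, n = 5, 4096, 8
W = rng.normal(size=(m, d))
v = rng.normal(size=m)
X = rng.normal(size=(n, d))

K = empirical_ntk(X, W, v)
print(K.shape)   # (8, 8); as m grows, K concentrates around its infinite-width limit
```

In the NTK regime this kernel stays essentially fixed during training, so the network's learning dynamics reduce to kernel regression with K — which is what makes classical tools applicable, and exactly the assumption the newer work tries to relax.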

Indeed, Vidal has engaged this topic himself, in a paper accepted to this year’s Conference on Artificial Intelligence and Statistics (AISTATS), whose coauthors are members of his former research team at Johns Hopkins University.

“The three assumptions we are trying to get rid of are, one, can we get theorems for networks with finite width as opposed to infinite width?” Vidal says. “Number two is, can we get theorems for gradient-descent-like methods that have a finite step size? Because many earlier theorems assumed a really teeny tiny step size — like, infinitesimally small. And the third assumption we are relaxing is this assumption on the initialization, which becomes much more general.”
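
A quick, hand-rolled experiment (a hypothetical toy setup, not the analysis in the AISTATS paper) shows why the infinite-width and near-fixed-weights assumptions go together: as the hidden layer gets wider, the weights move less and less, relative to their initialization, over the same amount of training.

```python
import numpy as np

def relative_weight_movement(m, steps=300, lr=0.5, seed=0):
    """Full-batch gradient descent on a toy regression task with 1/sqrt(m) output scaling;
    returns ||W - W0||_F / ||W0||_F, i.e., how far the hidden weights moved."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(64, 4))
    y = np.sin(X @ rng.normal(size=4))
    W0 = rng.normal(size=(m, 4))
    v = rng.normal(size=m)
    W = W0.copy()
    for _ in range(steps):
        h = np.tanh(X @ W.T)                       # (64, m)
        err = h @ v / np.sqrt(m) - y               # residuals
        g_h = np.outer(err, v / np.sqrt(m)) * (1.0 - h**2)
        W -= lr * (g_h.T @ X) / len(X)             # gradient step on the hidden weights only
    return np.linalg.norm(W - W0) / np.linalg.norm(W0)

for m in [16, 256, 4096]:
    print(m, round(relative_weight_movement(m), 4))
```

The relative movement shrinks as width grows, which is what makes the linearized, kernel-style analysis accurate near infinite width — and part of what makes finite-width, finite-step-size theorems with general initializations harder to prove.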

The limits of representation learning

When ICLR was founded, in 2013, it was a venue for researchers to explore alternatives to machine learning methods, such as kernel methods, that represented data in fixed, prespecified ways. Now, however, deep learning — which uses learned representations — has taken over the field of machine learning, and the difference between ICLR and the other major machine learning conferences has shrunk.

As someone who spent 20 years as a professor of biomedical engineering at Hopkins, however, Vidal has a keen awareness of the limitations of representation learning. For some applications, he says, domain knowledge is still essential.

“It happens in domains where data or labels may not be abundant,” he explains. “This is the case, for example, in the medical domain, where maybe there are 100 patients in a study, or maybe you can't put the data on a website where everyone can annotate it.

“Just to give you one concrete example, I had a project where we needed to produce a blood test, and we needed to classify white blood cells into different kinds. No one is ever going to take videos of millions of cells, and you're not going to have a pathologist annotate each and every cell to do object detection the way we do in computer vision.

“So all we could get were the actual results of the blood test: what are the concentrations? And you might have a million cells of class one, class two, and class three, and you just have these very weak labels. But the domain experts said, we can do cell purification by adding these chemicals here and there, and we do centrifugation and I don't know what, and then we get cells of only one type in this specimen. Therefore you can now pretend that you have labels, because we know that cells that had different labels didn't survive this chemistry. And we said, ‘Wow, that’s great!’

“If your team is made up 100% of data scientists and machine learning people, they tend to think that all you need is a bigger network and more data. But I think, as at Amazon, where you need to work backwards from the customer, you need to solve real problems, and the solution isn't always more data and more annotations.”
