Do large language models understand the world?

In addition to its practical implications, recent work on “meaning representations” could shed light on some old philosophical questions.

For centuries, theories of meaning have been of interest almost exclusively to philosophers, debated in seminar rooms and at conferences for small specialty audiences.

But the advent of large language models (LLMs) and other “foundation models” has changed that. Suddenly, mainstream media are alive with speculation about whether models trained only to predict the next word in a sequence can truly understand the world.

Applied scientist Matthew Trager (left) and vice president and distinguished scientist Stefano Soatto (right).

Skepticism naturally arises. How can a machine that generates language in such a mechanical way grasp words’ meanings? Simply processing text, however fluently, would not seem to imply any sort of deeper understanding.

This kind of skepticism has a long history. In 1980, the philosopher John Searle proposed a thought experiment known as the Chinese room, in which a person who does not know Chinese follows a set of rules to manipulate Chinese characters, producing Chinese responses to Chinese questions. The experiment is meant to show that, since the person in the room never understands the language, symbolic manipulation alone cannot lead to semantic understanding.

Similarly, today’s critics often argue that since LLMs are able only to process “form” — symbols or words — they cannot in principle achieve understanding. Meaning depends on relations between form (linguistic expressions, or sequences of tokens in a language model) and something external, these critics argue, and models trained only on form learn nothing about those relations.

But is that true? In this essay, we will argue that language models not only can but do represent meanings.

Probability space

At Amazon Web Services (AWS), we have been investigating concrete ways to characterize meaning as represented by LLMs. The first challenge with these models is that there is no clear candidate for “where” meanings could reside. Today’s LLMs are usually decoder-only models; unlike encoder-only or encoder-decoder models, they do not map an input into a single vector-space embedding. Instead, they represent words in a distributed way, across the many layers and attention heads of a transformer model. How should we think of meaning representation in such models?

In our paper “Meaning representations from trajectories in autoregressive models”, we propose an answer to this question. For a given sentence, we consider the probability distribution over all possible sequences of tokens that can follow it, and the set of all such distributions defines a representational space.

To the extent that two sentences have similar continuation probabilities — or trajectories — they’re closer together in the representational space; to the extent that their probability distributions differ, they’re farther apart. Sentences that produce the same distribution of continuations are “equivalent”, and together, they define an equivalence class. A sentence’s meaning representation is then the equivalence class that it belongs to.
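The idea can be illustrated with a toy sketch. Below, the next-word distributions are hand-specified stand-ins for the scores a real LLM would assign (the sentences and probabilities are invented for illustration). Sentences whose continuation distributions are close, here in total-variation distance, belong to nearby, or the same, equivalence classes.

```python
# Toy illustration of meaning as a distribution over continuations.
# The prefixes and probabilities below are made up; in practice an
# autoregressive LLM would supply these next-token scores.
NEXT_WORD = {
    "the cat sat on the":    {"mat": 0.6,  "sofa": 0.3,  "moon": 0.1},
    "the kitten sat on the": {"mat": 0.55, "sofa": 0.35, "moon": 0.1},
    "the rocket sat on the": {"mat": 0.05, "sofa": 0.05, "moon": 0.9},
}

def total_variation(p, q):
    """Total-variation distance between two distributions over the same vocabulary."""
    vocab = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in vocab)

# Sentences with similar continuation distributions are close in the
# representational space; dissimilar distributions put them far apart.
d_similar = total_variation(NEXT_WORD["the cat sat on the"],
                            NEXT_WORD["the kitten sat on the"])
d_different = total_variation(NEXT_WORD["the cat sat on the"],
                              NEXT_WORD["the rocket sat on the"])
assert d_similar < d_different
```

With these toy numbers, the cat/kitten distance works out to 0.05 while the cat/rocket distance is 0.8, matching the intuition that the first pair is nearly synonymous in context while the second is not.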

Sentences with similar meanings produce similar score distributions over their continuations (top), while sentences with different meanings produce different score distributions over their continuations (bottom).

In the field of natural-language processing (NLP), it is widely recognized that the distribution of words in language is closely related to their meaning. This idea is known as the “distributional hypothesis” and is often invoked in the context of methods like word2vec embeddings, which build meaning representations from statistics on word co-occurrence. But we believe we are the first to use the distributions themselves as the primary way to represent meaning. This is possible since LLMs offer a way to evaluate these distributions computationally.

Of course, the possible continuations of a single sentence are effectively infinite, so even using an LLM we can never completely describe their distribution. But this impossibility reflects the fundamental indeterminacy of meaning, which holds for people and AI models alike. Meanings are not directly observed: they are encoded in the billions of synapses in a brain or the billions of activations of a trained model, which can be used to produce expressions. Any finite number of expressions may be compatible with multiple (indeed, infinitely many) meanings; which meaning the human — or the language model — intends to convey can never be known for sure.

What is surprising, however, is that, despite the high dimensionality of today’s models, we do not need to sample billions or trillions of trajectories in order to characterize a meaning. A handful — say, 10 or 20 — is sufficient. Again, this is consistent with human linguistic practice. A teacher asked what a particular statement means will typically rephrase it in a few ways, in what could be described as an attempt to identify the equivalence class to which the statement belongs.

In experiments reported in our paper, we showed that a measure of sentence similarity that uses off-the-shelf LLMs to sample token trajectories largely agrees with human annotations. In fact, our strategy outperforms all competing approaches on zero-shot benchmarks for semantic textual similarity (STS).
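The sampling strategy can be sketched in a few lines. The script below mimics the procedure with a hand-made next-word table standing in for an off-the-shelf LLM: sample a small pool of continuations from each sentence, score the pooled continuations under both sentences, and treat a small average gap in log scores as semantic closeness. The vocabulary, probabilities, and gap-based score are all illustrative assumptions, not the paper's exact metric.

```python
import math
import random

# Toy autoregressive "model": next-word probabilities, invented for
# illustration (a real experiment would query an off-the-shelf LLM).
VOCAB = ["mat", "sofa", "moon", "<eos>"]
P = {
    "a cat":    [0.50, 0.30, 0.00, 0.20],
    "a kitten": [0.45, 0.35, 0.00, 0.20],
    "a rocket": [0.00, 0.05, 0.75, 0.20],
}

def sample_continuations(prefix, k, rng):
    """Draw k continuation tokens from the prefix's next-word distribution."""
    return rng.choices(VOCAB, weights=P[prefix], k=k)

def log_score(prefix, word):
    """Log-probability of a continuation under a prefix, floored for zeros."""
    p = P[prefix][VOCAB.index(word)]
    return math.log(p) if p > 0 else -30.0

def similarity(s1, s2, k=20, seed=0):
    """Sample k continuations from each sentence, then compare how the two
    sentences score the pooled set (negative mean absolute log-score gap,
    so higher means more similar)."""
    rng = random.Random(seed)
    pool = sample_continuations(s1, k, rng) + sample_continuations(s2, k, rng)
    gaps = [abs(log_score(s1, w) - log_score(s2, w)) for w in pool]
    return -sum(gaps) / len(gaps)

assert similarity("a cat", "a kitten") > similarity("a cat", "a rocket")
```

Even with only 20 samples per sentence, the near-synonymous pair scores as far closer than the unrelated pair, echoing the observation that a handful of trajectories suffices.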

Form and content

Does this suggest that our paper’s definition of meaning — a distribution over possible trajectories — reflects what humans do when they ascribe meaning? Again, skeptics would say that it couldn’t possibly: text continuations are based only on “form” and lack the external grounding necessary for meaning.

But probabilities over continuations may capture something deeper about how we interpret the world. Consider a sentence that begins “On top of the dresser stood … ” and the probabilities of three possible continuations of that sentence: (1) “a photo”; (2) “an Oscar statuette”; and (3) “an ingot of plutonium”. Don’t those probabilities tell you something about what, in fact, you can expect to find on top of someone’s dresser? The probabilities over all possible sentence continuations might be a good guide to the likelihood of finding different objects on the tops of dressers; in that case, the “formal” patterns encoded by the LLM would tell you something particular about the world.

The skeptic might reply, however, that it’s the mapping of words to objects that gives the words meaning, and the mapping isn’t intrinsic to the words themselves; it requires human interpretation or some other mechanism external to the LLM.

But how do humans do that mapping? What happens inside you when you read the phrase “the objects on top of the dresser”? Maybe you envision something that feels somehow indefinite — a superposition of the dresser viewed from multiple angles or heights, say, with abstract objects in a certain range of sizes and colors on top. Maybe you also envision the possible locations of the dresser in the room, the room’s other furnishings, the feel of the wood of the dresser, the scent of the dresser or of the objects on top of it, and so on.

All of those possibilities can be captured by probability distributions, over data in multiple sensory modalities and in multiple conceptual schemas. So maybe meaning for humans involves probabilities over continuations, too, but in a multisensory space instead of a textual space. And on that view, when an LLM computes continuations of token sequences, it’s accessing meaning in a way that resembles what humans do, just in a more limited space.

Skeptics might argue that the passage from the multisensory realm to written language is a bottleneck that meaning can’t squeeze through. But that passage could also be interpreted as a simple projection, similar to the projection from a three-dimensional scene down to a two-dimensional image. The two-dimensional image provides only partial information, but in many situations, the scene remains quite understandable. And since language is our main tool for communicating our multisensory experiences, the projection into text might not be that "lossy" after all.

The passage from the multisensory realm to written language could be interpreted as a simple projection, similar to the projection from a three-dimensional scene down to a two-dimensional image.

This is not to say that today’s LLMs grasp meanings in the same way that humans do. Our work shows only that large language models develop internal representations with semantic value. We’ve also found evidence that such representations are composed of discrete entities, which relate to each other in complex ways — not just proximity but directionality, entailment, and containment.

But those structural relationships may differ from the structural relationships in the languages used to train the models. That would remain true even if we trained the model on sensory signals: we cannot directly see what meaning subtends a particular expression, for a model any more than for a human.

If the model and human have been exposed to similar data, however, and if they have shared enough experiences (today, annotation is the medium of sharing), then there is a basis on which to communicate. Alignment can then be seen as the process of translating between the model’s emergent “inner language” — we call it “neuralese” — and natural language.

How faithful can that alignment be? As we continue to improve these models, we will need to face the fact that even humans lack a stable, universal system of shared meanings. LLMs, with their distinct approach to processing information, may simply be another voice in a diverse chorus of interpretations.

In one form or another, questions about the relationship between the world and its representation have been central to philosophy for at least 400 years, and no definitive answers have emerged. As we move toward a future in which LLMs are likely to play a larger and larger role, we should not dismiss ideas based only on our intuitions but continue to ask these difficult questions. The apparent limitations of LLMs might be only a reflection of our poor understanding of what meaning actually is.

US, WA, Seattle
Amazon Advertising is one of Amazon's fastest growing and most profitable businesses, responsible for defining and delivering a collection of advertising products that drive discovery and sales. Our products and solutions are strategically important to enable our Retail and Marketplace businesses to drive long-term growth. We deliver billions of ad impressions and millions of clicks and break fresh ground in product and technical innovations every day! As a Data Scientist on this team you will: - Lead Data Science solutions from beginning to end. - Deliver with independence on challenging large-scale problems with complexity and ambiguity. - Write code (Python, R, Scala, SQL, etc.) to obtain, manipulate, and analyze data. - Build Machine Learning and statistical models to solve specific business problems. - Retrieve, synthesize, and present critical data in a format that is immediately useful to answering specific questions or improving system performance. - Analyze historical data to identify trends and support optimal decision making. - Apply statistical and machine learning knowledge to specific business problems and data. - Formalize assumptions about how our systems should work, create statistical definitions of outliers, and develop methods to systematically identify outliers. Work out why such examples are outliers and define if any actions needed. - Given anecdotes about anomalies or generate automatic scripts to define anomalies, deep dive to explain why they happen, and identify fixes. - Build decision-making models and propose effective solutions for the business problems you define. - Conduct written and verbal presentations to share insights to audiences of varying levels of technical sophistication. Why you will love this opportunity: Amazon has invested heavily in building a world-class advertising business. This team defines and delivers a collection of advertising products that drive discovery and sales. 
Our solutions generate billions in revenue and drive long-term growth for Amazon’s Retail and Marketplace businesses. We deliver billions of ad impressions, millions of clicks daily, and break fresh ground to create world-class products. We are a highly motivated, collaborative, and fun-loving team with an entrepreneurial spirit - with a broad mandate to experiment and innovate. Impact and Career Growth: You will invent new experiences and influence customer-facing shopping experiences to help suppliers grow their retail business and the auction dynamics that leverage native advertising; this is your opportunity to work within the fastest-growing businesses across all of Amazon! Define a long-term science vision for our advertising business, driven from our customers' needs, translating that direction into specific plans for research and applied scientists, as well as engineering and product teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding. Team video ~ https://youtu.be/zD_6Lzw8raE