New contrastive-learning methods for better data representation

New loss functions enable better approximation of the optimal loss and more-useful representations of multimodal data.

Many recent advances in artificial intelligence are the result of representation learning: a machine learning model learns to represent data items as vectors in a multidimensional space, where geometric relationships between vectors correspond to semantic relationships between items.

The M5 team at Amazon strives to construct general-purpose semantic representations of data related to the Amazon Store — product descriptions, queries, reviews, and more — that can be employed by machine learning (ML) systems throughout Amazon. Our approach involves leveraging all accessible data for each entity, often spanning multiple modalities.

One of the most successful ways to produce general-purpose representations is through contrastive learning, in which a model is trained on pairs of inputs, which are either positive (similar inputs/products) or negative (dissimilar inputs/products). The model learns to pull positive examples together and push negative examples apart.

In a pair of recent papers, M5 researchers have made substantial contributions to the theory and practice of contrastive learning. In “Why do we need large batch sizes in contrastive learning? A gradient-bias perspective”, presented at the 2022 Neural Information Processing Systems (NeurIPS) conference, we propose a new contrastive-learning loss function that enables models to converge on useful representations with lower memory cost and less training data.

And in “Understanding and constructing latent modality structures in multi-modal representation learning”, presented at this year’s Computer Vision and Pattern Recognition conference (CVPR), we propose geometric constraints on the representations of different modes of the same data item — say, image and text — that are more useful for downstream tasks than simply trying to resolve both representations to the same point in the representational space.

Do we need large batch sizes in contrastive learning?

Unlike standard ML methods, contrastive learning typically requires very large batch sizes to achieve good performance: several popular models, for instance, call for tens of thousands of training examples per batch, which significantly increases the memory overhead, and reducing the batch size can impair performance. In our NeurIPS paper, we attempt to understand this phenomenon and propose techniques for mitigating it.

Part of the appeal of contrastive learning is that it’s unsupervised, meaning it doesn’t require data annotation. Positive pairs can be generated by mathematically transforming an “anchor sample” and pairing the transformed version with the original; negative pairs can be generated by pairing an anchor sample with transformed versions of other anchor samples. With image data, a transformation might involve re-cropping, reversing, or distorting the colors of the anchor sample; with textual data, a transformation might involve substituting synonyms for the words in a sentence.
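For textual data, the pairing scheme can be sketched as follows (the toy synonym table and the function names are illustrative, not taken from the paper):

```python
import random

# A toy synonym table; a real system would use a large lexical resource.
SYNONYMS = {"quick": ["fast", "speedy"], "happy": ["glad", "joyful"]}

def augment(sentence, rng):
    """Create a transformed view of a sentence by swapping in synonyms."""
    words = [rng.choice(SYNONYMS[w]) if w in SYNONYMS else w
             for w in sentence.split()]
    return " ".join(words)

def make_pairs(anchors, rng):
    """Positive pairs: (anchor, transformed anchor).
    Negative pairs: (anchor, transformation of a *different* anchor)."""
    positives = [(a, augment(a, rng)) for a in anchors]
    negatives = [(a, augment(b, rng))
                 for a in anchors for b in anchors if a != b]
    return positives, negatives
```

No labels are needed at any point: both kinds of pairs come for free from the anchor samples themselves.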

Given a measure of similarity between vectors in the representational space, the standard loss function for contrastive learning involves a ratio whose numerator includes the similarity between an anchor sample and one of its transformations; the denominator includes the sum of the similarities of the anchor sample and all possible negative samples. The goal of training is to maximize that ratio.
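With cosine similarity and a temperature parameter — both common choices, used here for illustration rather than taken from the papers — that ratio can be sketched as:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: the numerator holds the anchor-positive
    similarity, the denominator sums similarities over the positive and
    all available negatives. Maximizing the ratio = minimizing this loss."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

The loss is near zero when the anchor aligns with its positive and is far from its negatives, and large in the opposite case.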

In principle, given the possibility of applying transformations to negative samples, “all possible negative samples” could describe an infinite set. In practice, contrastive learning typically just relies on the negative examples available in the training batch. Hence the need for large batch sizes — to approximate an infinite sum.

The contrastive-learning framework. Approximating an infinite sum with the samples in a finite minibatch of training data can introduce gradient bias.

If the distribution of minibatch samples differs from the distribution of possible negatives, however, this approximation can bias the model. One difficulty in correcting the bias is that, because the loss function contrasts each positive pair with all possible negatives at once, in a ratio, it cannot be decomposed into a sum of sub-losses.
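The bias can be seen in miniature in the log of the denominator: by Jensen's inequality, the log of a minibatch average is a downward-biased estimate of the log of the average over all negatives, and the bias shrinks as the batch grows. A small numerical illustration (the similarity scores here are synthetic):

```python
import math
import random

random.seed(0)

# Exponentiated similarity scores for a large pool of negatives.
pool = [math.exp(random.gauss(0.0, 1.0)) for _ in range(10000)]

true_log_mean = math.log(sum(pool) / len(pool))

def minibatch_estimate(batch_size, trials=2000):
    """Average of log(mean over a random minibatch), across many draws."""
    total = 0.0
    for _ in range(trials):
        batch = random.sample(pool, batch_size)
        total += math.log(sum(batch) / batch_size)
    return total / trials

small = minibatch_estimate(8)    # undershoots the true value noticeably
large = minibatch_estimate(512)  # much closer to the true value
```

The small-batch estimate systematically undershoots the full-pool value, while the large-batch estimate nearly matches it — which is exactly why practitioners reach for huge batches.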

We address the decomposability problem using Bayesian augmentation. The general approach is that, for each anchor sample, we create a random auxiliary variable, which can be thought of as a weight applied to the anchor sample’s similarity scores. Using an identity involving the gamma function, we can show that the auxiliary variable follows a gamma distribution, which is easy to sample. As a consequence, we can rewrite the loss in an exponential rather than a fractional form, making it decomposable.
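The simplest case of that family of identities is 1/x = ∫₀^∞ e^(−tx) dt — equivalently, the expectation of e^(−(x−1)t) under an exponential (Gamma(1, 1)) auxiliary variable t. The paper works with the general gamma form, but the special case is easy to check numerically:

```python
import math
import random

random.seed(1)

def reciprocal_via_sampling(x, n=200_000):
    """Monte Carlo check of 1/x = E_{t ~ Exp(1)}[exp(-(x - 1) * t)].

    Sampling the auxiliary variable t and averaging an *exponential*
    of the similarity term is what lets the fractional loss be traded
    for a decomposable, exponential-form loss.
    """
    total = 0.0
    for _ in range(n):
        t = random.expovariate(1.0)  # auxiliary variable ~ Gamma(1, 1)
        total += math.exp(-(x - 1.0) * t)
    return total / n
```

Because the reciprocal of the denominator becomes an expectation of an exponential, the product of per-sample terms turns into a sum of sub-losses under the log.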

During training, we begin by sampling the auxiliary variables for the current batch of data from a gamma distribution, giving us the weights on the similarity scores for all the anchor samples. Conditioned on the sampled values, we then apply maximum likelihood estimation to optimize the parameters of the model, taking the sampled weights from the first step into account. We then repeat this process for the entire dataset, summing a sequence of (weighted) sub-losses to produce a cumulative loss. In our paper, we show that this procedure converges toward the expected loss for the original contrastive-loss function, with its infinite sum in the denominator.
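As a toy instantiation of the two-step procedure — not the paper's implementation; the dot-product similarity, fixed target views, learning rate, and the Gamma(1, rate) special case are all simplifications chosen for brevity — one can alternate between sampling the auxiliary weights and taking gradient steps on the resulting weighted sub-losses:

```python
import math
import random

random.seed(0)

def train(n_items=4, dim=2, epochs=300, lr=0.05):
    """Alternate: (1) sample an auxiliary weight t_i per anchor from an
    exponential with rate d_i (the anchor's denominator), so E[t_i] = 1/d_i;
    (2) take a gradient step on the decomposed, weighted sub-loss
    t_i * d_i - sim(anchor_i, positive_i)."""
    # Fixed target views spread on a circle; trainable anchors start small.
    views = [[math.cos(2 * math.pi * i / n_items),
              math.sin(2 * math.pi * i / n_items)] for i in range(n_items)]
    anchors = [[random.gauss(0.0, 0.1) for _ in range(dim)]
               for _ in range(n_items)]
    for _ in range(epochs):
        for i, a in enumerate(anchors):
            sims = [sum(x * y for x, y in zip(a, v)) for v in views]
            d = sum(math.exp(s) for s in sims)
            t = random.expovariate(d)  # step 1: auxiliary weight, E[t] = 1/d
            # Step 2: gradient of t*d - sim(a, views[i]) with respect to a.
            grad = [t * sum(math.exp(s) * v[k] for s, v in zip(sims, views))
                    - views[i][k] for k in range(dim)]
            anchors[i] = [x - lr * g for x, g in zip(a, grad)]
    return anchors, views

```

In expectation the weighted gradient matches the gradient of the original fractional loss (since E[t_i] = 1/d_i), so each anchor is pulled toward its own view and away from the others.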

Results of 10 training runs on synthetic data with added noise, comparing a model trained with our decomposable loss function (red) to one trained with the conventional loss function (blue). With our loss, the model consistently converged to the optimum (1.0), while with the conventional loss, it never did.

We evaluate our approach through a number of experiments. In one, we used simulated data, into which we injected noise to simulate bias. Then we used both our loss and the conventional loss function to train a model 10 times, with different initialization values. At heavy noise levels, the model trained with the conventional loss failed to converge, while ours consistently converged to the optimum.

We also evaluated the models on a variety of downstream tasks, including zero-/few-shot image classification and image/text retrieval. Our approach showed significant performance improvement over state-of-the-art baseline methods.

What geometries work best for multimodal representation matching?

At M5, we are building scalable models that can handle multimodal data — for instance, multilingual models that translate between product descriptions in different languages or multi-entity models that jointly model different images of the same product. Contrastive learning is a promising method for building such models: data in different modalities that are associated with the same products can be treated as positive pairs, and contrastive learning pulls them together in the representational space.

We theoretically investigated whether the standard contrastive-learning framework is optimal in terms of the prediction error rate on downstream tasks, and the surprising answer is no. In our CVPR paper, we prove that if the information gap between two modalities is large — that is, if you can’t infer much about one modality from the other — then the best prediction error we can hope to achieve using standard contrastive-learning representations is larger than the error we can achieve by simply training a machine learning model directly on data in a single modality.

This makes some intuitive sense. Ideally, contrastive learning would pull the different modalities so tightly together that they would essentially resolve to a single point in the representational space. But of course, the reason to use multimodal representations for downstream tasks is that each modality may capture useful information that the other does not. Collapsing the different modalities’ representations together neutralizes this advantage.

Consequently, in our CVPR paper, we explore different geometrical relationships in the representational space that can establish correlations between multimodal data without sacrificing information specific to each mode. We propose three general approaches to constructing modality structures in the representational space, suited to intramodal representation, intermodal representation, and a combination of the two:

  1. a deep feature separation loss for intramodality regularization, which uses two types of neural network components to separate different modality information: one component captures information that’s shared between modalities (tuned according to the standard contrastive-learning loss), and the other, which is orthogonal to the first, captures information unique to the modality;
  2. a “Brownian-bridge” loss for intermodality regularization, which uses Brownian motion to plot several trajectories/transitions between the representation of one modality (say, text) and the other (say, an image) and constrains representations of augmented data to lie along one of those paths; and
  3. a geometric-consistency loss for both intra- and intermodality regularization, which enforces symmetry in the geometric relationships between representations in one modality and the corresponding representations in the other modality, while simultaneously enforcing symmetries in cross-modal geometric relationships.
Three types of modality structures that can improve modality representation learning for downstream tasks. (1) With deep feature separation, a model produces two orthogonal vectors for each modality, one that encodes information shared across modalities and one that encodes mode-specific information. (2) Brownian bridges use Brownian motion to generate trajectories/transitions between representations of data in different modes, defining a subspace in which the representations of augmented data are induced to lie. (3) Geometric consistency enforces symmetries in the relationships between data representations, both within modes (orange-orange and blue-blue) and across modes (blue-orange).
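As a rough sketch of the first and third ideas — the actual losses in the papers are defined over deep features and full batches; these scalar penalties are simplifications — orthogonality can be encouraged by penalizing the squared inner product between the shared and modality-specific components, and geometric consistency by penalizing asymmetry between cross-modal similarities:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def separation_penalty(shared, specific):
    """Intramodality sketch: drive the modality-specific component to be
    orthogonal to the shared component (zero when they are orthogonal)."""
    return dot(shared, specific) ** 2

def consistency_penalty(x_i, x_j, y_i, y_j):
    """Geometric-consistency sketch: the similarity between item i in one
    modality (x) and item j in the other (y) should mirror the similarity
    between item j in the first and item i in the second."""
    return (cosine(x_i, y_j) - cosine(x_j, y_i)) ** 2
```

Both penalties vanish exactly when the desired structure holds, so they can simply be added to the base contrastive loss as regularizers.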

We have conducted extensive experiments on two popular multimodal representation-learning frameworks, the CLIP-based two-tower model and the ALBEF-based fusion model. We tested our model on a variety of tasks, including zero-/few-shot image classification, image-text retrieval, visual question answering, visual reasoning, and visual entailment. Our method achieves consistent improvements over existing methods, demonstrating the effectiveness and generalizability of our proposed approach on multimodal representation learning.

Going forward

Our NeurIPS and CVPR papers represent just two of the interesting projects under way on the M5 team. There is a lot more research on multimodal learning going on in M5, including generative models for images, videos, and text (e.g., Stable Diffusion, DreamBooth) to enable data synthesis and representation learning, and the training and application of large language models to enhance customer shopping experiences. We expect to report on more research highlights in the near future.
