Amazon senior principal engineer Luu Tran has overseen the plan-build-deploy-scale cycle for many Alexa features: timers, alarms, reminders, the calendar, recipes, Drop In, Announcements, and more.

Writing Alexa’s next chapter by combining engineering and science

Amazon senior principal engineer Luu Tran is helping the Alexa team innovate by collaborating closely with scientist colleagues.

For many of us, using our voices to interact with computers, phones, and other devices is a relatively new experience made possible by services like Amazon's Alexa.

But it’s old hat for Luu Tran.

An Amazon senior principal engineer, Tran has been talking to computers for more than three decades. An uber-early adopter of voice computing, Tran remembers the days when PCs came without sound cards, microphones, or even audio jacks. So he built his own solution.

“I remember when I got my first Sound Blaster sound card, which came with a microphone and software called Dragon NaturallySpeaking,” Tran recalls.

With a little plug-and-play engineering, Tran could suddenly use his voice to open and save files on a mid-1990s-era PC. Replacing his keyboard and mouse with his voice was a magical experience and gave him a glimpse into the future of voice-powered computing.

Fast forward to 2023, and we’re in the golden age of voice computing, made possible by advances in machine learning, AI, and voice assistants like Alexa. “Amazon’s vision for Alexa was always to be a conversational, natural personal assistant that knows you, understands you, and has some personality,” says Tran.

In his role, Tran has overseen the plan-build-deploy-scale cycle for many Alexa features: timers, alarms, reminders, the calendar, recipes, Drop In, Announcements, and more. Now, he’s helping Amazon by facilitating collaboration between the company’s engineers and academic scientists who can help advance machine learning and AI — both full-time academics and those participating in Amazon’s Scholars and Visiting Academics programs.

Tran is no stranger to computing paradigm shifts. His previous experiences at Akamai, Mint.com, and Intuit gave him a front-row seat to some of tech’s most dramatic transitions: the birth of the internet, the explosion of mobile, and the move from on-premises to cloud computing.

Bringing his three decades of experience to bear in his role at Amazon, Tran is helping further explore the potential of voice computing by spurring collaborations between Amazon’s engineering and science teams. On a daily basis, Tran encourages engineers and scientists to work together as one — shoulder-to-shoulder — fusing the latest scientific research with cutting-edge engineering.

It's no accident Tran is helping lead Alexa’s next engineering chapter. Growing up watching Star Trek, he’d always been fascinated with the idea that you could speak to a computer and it could speak back using AI.

“I'd always believed that AI was out of reach of my career and lifetime. But now look at where we are today,” Tran says.

The science of engineering Alexa

Tran believes collaboration with scientists is essential to continued innovation, both with Alexa and AI in general.


“Bringing them together — the engineering and the science — is a powerful combination. Many of our projects are not simply deterministic engineering problems we can solve with more code and better algorithms,” he says. “We must bring to bear a lot of different tech and leverage science to fill in the gaps, such as machine learning modeling and training.”

Helping engineers and scientists work closely together is a nontrivial endeavor, because they often come from different backgrounds, have different goals and incentives, and in some cases even speak different “languages.” For example, Tran points out that the word “feature” means something very different to product managers and engineers than it does to scientists.

“I'm coming from the perspective of an engineer who has studied some theory but has worked for decades translating technology ideas into reality, within real-world constraints. For me, it’s been less important to understand why something works than what works,” Tran says.


To realize the best of both worlds, Tran says, the Alexa team is employing an even more agile approach than it has used in the past — assembling project teams of product managers, engineers, and scientists, often with different combinations based on the goal, feature, or tech required. There’s no dogma or doctrine stating what roles must be on a particular team.

What’s most important, Tran points out, is that each team understands from the outset the customer need, the use case, the product market fit, and even the monetization strategy. Bringing scientists into projects from the start is critical. “We always have product managers on teams with engineers and scientists. Some teams are split 50–50 between scientists and engineers. Some are 90% scientists. It just depends on the problem we're going after.”

The makeup of teams changes as projects progress. Some start out heavily weighted toward engineering and then determine a use case or problem that requires scientific research. Others start out predominantly science-based and, once a viable solution is in sight, gradually add more engineers to build, test, and iterate. This push and pull in how teams form and change — and the autonomy to organize and reorganize to iterate quickly — is key, Tran believes.

“Often, it’s still product managers who describe the core customer need and use case and how we're going to solve it,” Tran says. “Then the scientists will say, ‘Yeah, that's doable, or no, that's still science fiction.’ And then we iterate and kind of formalize the project. This way, we can avoid spending months and months trying to build something that, had we done the research up front, wasn’t possible with current tech.”

Engineering + science = Smarter recipe recommendations

A recent project that benefited from the new agile, collaborative approach is Alexa’s new recipe recommendation engine. To deliver a relevant recipe recommendation to a customer who asks for one — perhaps to an Amazon Echo Show on a kitchen counter — Alexa must select a single recipe from its vast collection while also understanding the customer’s desires and context. All of us have unique tastes, dietary preferences, potential food allergies, and real-time contextual factors, such as what’s in the fridge, what time of day it is, and how much time we have to prepare a meal.


Alexa, Tran explains, must factor all of these parameters into its recipe recommendation and — in milliseconds — return a recipe it believes is both highly relevant (e.g., a Mexican dish) and personal (e.g., no meat for vegetarian customers). The technology required to respond with relevant, safe, satisfying recommendations for every customer is mind-bogglingly complex. “This is not something you can build using brute-force engineering,” Tran notes. “It requires a lot of science.”

Building the new recipe engine required two parallel projects: a new machine learning model trained to look through and select recipes from a corpus of millions of online recipes, and a new inference engine to ensure each request Alexa receives is appended with de-identified personal and contextual data. “We broke it down, just like any other process of building software,” Tran says. “We wrote our plan, identified the tasks, and then decided whether each task was best handled by a scientist or an engineer, or maybe a combination of both working together.”
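As a rough illustration of the engineering half of that split, the sketch below shows the general pattern of appending de-identified personal and contextual data to a request before any model sees it. Every name and data shape here is a hypothetical stand-in; the article does not describe Alexa’s internals at this level of detail.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class RecipeRequest:
    utterance: str                          # e.g., "what should I make for dinner"
    context: dict = field(default_factory=dict)

def enrich_request(request: RecipeRequest, profile_store: dict, customer_key: str) -> RecipeRequest:
    """Append de-identified personal and contextual signals before ranking.

    `profile_store` stands in for whatever keyed store holds de-identified
    preferences; the real system is an assumption here, not the article's.
    """
    profile = profile_store.get(customer_key, {})   # comes back empty for brand-new customers
    request.context.update({
        "dietary_preferences": profile.get("dietary_preferences", []),
        "disliked_ingredients": profile.get("disliked_ingredients", []),
        "local_hour": datetime.now().hour,           # implicit signal: time of day
    })
    return request

# Usage: a vegetarian customer asking for dinner ideas.
store = {"anon-123": {"dietary_preferences": ["vegetarian"], "disliked_ingredients": ["cilantro"]}}
req = enrich_request(RecipeRequest("what should I make for dinner"), store, "anon-123")
print(req.context)
```

For a brand-new customer the profile lookup simply returns nothing, which is the cold-start case the engineering team also had to handle.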

Tran says the scientists on the team largely focused on the machine learning model. They started by researching all existing, publicly available ML approaches to recipe recommendation — cataloguing the model types and narrowing them down based on what they believed would perform best. “The scientists looked at a lot of different approaches — Bayesian models, graph-based models, cross-domain models, neural networks, and collaborative filtering — and settled on a set of six models they felt would be best for us to try,” Tran explains. “That helped us quickly narrow down without having to exhaustively try every potential model approach.”
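For readers less familiar with those model families, here is a toy example of just one of them, collaborative filtering via a low-rank factorization of a customer-by-recipe interaction matrix. It is purely illustrative and is not the model the team evaluated or shipped.

```python
import numpy as np

# Toy interaction matrix: rows = customers, columns = recipes,
# 1.0 = the customer picked that recipe, 0.0 = no interaction yet.
interactions = np.array([
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
])

# Low-rank factorization via truncated SVD: customers and recipes share a
# small latent space, and reconstructed scores fill in the unobserved cells.
k = 2
U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
scores = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Recommend the highest-scoring recipe the first customer hasn't tried yet.
customer = 0
unseen = np.where(interactions[customer] == 0.0)[0]
best = unseen[np.argmax(scores[customer, unseen])]
print(f"Recommend recipe index {best} to customer {customer}")
```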

The engineers, meanwhile, got to work designing and building the new inference engine to better capture and analyze user signals, both implicit (e.g., time of day) and explicit (e.g., whether the user asked for a dinner or lunch recipe). “You don’t want to recommend cocktail recipes at breakfast time, but sometimes people want to eat pancakes for dinner,” jokes Tran.
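The joke captures a real design choice: an explicit request should win out over whatever the time of day implies. A minimal sketch of that idea, with a hypothetical function name and time bands that are assumptions rather than Alexa’s actual logic:

```python
from typing import Optional

def resolve_meal_type(local_hour: int, explicit_meal: Optional[str] = None) -> str:
    """Pick a meal type, preferring what the customer actually asked for."""
    if explicit_meal:                # explicit signal always wins
        return explicit_meal
    if 5 <= local_hour < 11:         # implicit signal: rough time-of-day bands
        return "breakfast"
    if 11 <= local_hour < 16:
        return "lunch"
    return "dinner"

print(resolve_meal_type(8))                               # "breakfast"
print(resolve_meal_type(19, explicit_meal="breakfast"))   # pancakes for dinner: "breakfast"
```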


The inference engine had to be built to accommodate queries from existing users and new users who’ve never asked for a recipe recommendation. Performance and privacy were key requirements. The engineering team had to design and deploy the engine to optimize throughput while minimizing computation and storage costs and complying with customer requests to delete personal information from their histories.
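The deletion requirement has a simple shape if interaction history is keyed by a de-identified customer identifier, as sketched below; the storage layout is an assumption for illustration, not a description of the production system.

```python
# Hypothetical store: interaction history keyed by a de-identified customer key,
# so honoring a deletion request means dropping every record under that key.
history = {
    "anon-123": [{"recipe_id": "r42", "picked": True}],
    "anon-456": [{"recipe_id": "r7", "picked": False}],
}

def delete_customer_history(store: dict, customer_key: str) -> int:
    """Remove all stored interactions for one customer; return how many were dropped."""
    removed = len(store.get(customer_key, []))
    store.pop(customer_key, None)
    return removed

print(delete_customer_history(history, "anon-123"))  # 1
print(history)                                        # only anon-456 remains
```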

Once the new inference engine was ready, the engineers integrated it with the six ML models built and trained by the scientists, connected it to the new front-end interface built by the design team, and tested the models against each other to compare the results. Tran says all six models improved conversion (a “conversion event” is triggered when a user selects a recommended recipe) vs. baseline recommendations, but one model outperformed the others by more than 100%. The team selected that model, which is in production today.
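That comparison reduces to conversion-rate lift over the baseline. The numbers below are invented purely to show the arithmetic; a lift above 100% means a model’s conversion rate more than doubled the baseline’s.

```python
# Toy, invented numbers: conversion rate = selected recommendations / recommendations shown.
candidates = {
    "baseline": {"shown": 10_000, "selected": 400},
    "model_a":  {"shown": 10_000, "selected": 520},
    "model_b":  {"shown": 10_000, "selected": 1_100},  # the clear winner in this toy example
}

baseline_rate = candidates["baseline"]["selected"] / candidates["baseline"]["shown"]
for name, stats in candidates.items():
    rate = stats["selected"] / stats["shown"]
    lift = (rate - baseline_rate) / baseline_rate * 100
    print(f"{name}: conversion {rate:.1%}, lift vs. baseline {lift:+.0f}%")
```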

The recipe project doesn’t end here, though. Now that it’s live and in production, there’s a process of continual improvement. “We’re always learning from customer behavior. Which are the recipes that customers were really happy with? And which are the ones they never pick?” Tran says. “There's continued collaboration between engineers and scientists on that, as well, to refine the solution.”

The future: Alexa engineering powered by science

To further accelerate Alexa innovation, Amazon formed the Alexa Principal Community — a matrixed team of several hundred engineers and scientists who work on and contribute to Alexa and Alexa-related technologies. “We have people from all parts of the company, regardless of who they report to,” adds Tran. “What brings us together is that we’re working together on the technologies behind Alexa, which is fantastic.”


Earlier this year, more than 100 members of that community convened, both in person and remotely, to share, discuss, and debate Alexa technology. “In my role as a member of the community’s small leadership team, I presented a few sessions, but I was mostly there to learn from, connect with, and influence my peers,” Tran says.

Tran is thoroughly enjoying his work with scientists, and he feels he’s benefiting greatly from the collaboration. “Working closely with lots of scientists helps me understand what state-of-the-art AI is capable of so that I can leverage it in the systems that I design and build. But they also help me understand its limitations so that I don't overestimate and try to build something that's just not achievable in any realistic timeframe.”

Tran says that today, more than ever, is an amazing time to be at Alexa. “Imagination has been unlocked in the population and in our customer base,” he says. “So the next question they have is, ‘Where's Alexa going?’ And we're working as fast as we can to bring new features to life for customers. We have lots of things in the pipeline that we're working on to make that a reality.”
