Tools for generating synthetic data helped bootstrap Alexa’s new-language releases

In the past few weeks, Amazon announced versions of Alexa in three new languages: Hindi, U.S. Spanish, and Brazilian Portuguese.

Like all new-language launches, these addressed the problem of how to bootstrap the machine learning models that interpret customer requests, without the ability to learn from customer interactions. At a high level, the solution is to use synthetic data. These three locales were the first to benefit from two new in-house tools, developed by the Alexa AI team, that produce higher-quality synthetic data more efficiently.

Each new locale has its own speech recognition model, which converts an acoustic speech signal into text. But interpreting that text — determining what the customer wants Alexa to do — is the job of Alexa’s natural-language-understanding (NLU) systems.

When a new-language version of Alexa is under development, training data for its NLU systems is scarce. Alexa feature teams will propose some canonical examples of customer requests in the new language, which we refer to as “golden utterances”; training data from existing locales can be translated by machine translation systems; crowd workers may be recruited to generate sample texts; and some data may come from Cleo, an Alexa skill that allows multilingual customers to help train new-language models by responding to voice prompts with open-form utterances.

Even when data from all these sources is available, however, it’s sometimes not enough to train a reliable NLU model. The new bootstrapping tools, from Alexa AI’s Applied Modeling and Data Science group, treat the available sample utterances as templates and generate new data by combining and varying those templates.

One of the tools, which uses a technique called grammar induction, analyzes a handful of golden utterances to learn general syntactic and semantic patterns. From those patterns, it produces a series of rewrite expressions that can generate thousands of new, similar sentences. The other tool, guided resampling, generates new sentences by recombining words and phrases from examples in the available data. Guided resampling concentrates on optimizing the volume and distribution of sentence types, to maximize the accuracy of the resulting NLU models.

Rules of Grammar

Grammars have been a tool in Alexa’s NLU toolkit since well before the first Echo device shipped. A grammar is a set of rewrite rules for varying basic template sentences through word insertions, deletions, and substitutions.

Below is a very simple grammar, which models requests to play either pop or rock music, with or without the modifiers “more” and “some”. Below the rules of the grammar is a diagram of a computational system (a finite-state transducer, or FST) that implements them.

A toy grammar, which can model requests to play pop or rock music, with or without the modifiers “some” or “more”, and a diagram of the resulting finite-state transducer. The question mark indicates that the some_more variable is optional.
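The grammar rules themselves appear only in the figure, but the idea is easy to sketch in code. The snippet below is a minimal illustration, not the production grammar format: it reconstructs a comparable set of rewrite rules from the caption and expands them into every sentence they license.

```python
from itertools import product

# Assumed toy rules, reconstructed from the figure caption:
# request   -> "play" some_more? genre "music"
# some_more -> "some" | "more"
# genre     -> "pop" | "rock"
GRAMMAR = {
    "request": [["play", "some_more?", "genre", "music"]],
    "some_more": [["some"], ["more"]],
    "genre": [["pop"], ["rock"]],
}

def expand(symbol):
    """Recursively expand a grammar symbol into all surface strings."""
    optional = symbol.endswith("?")
    name = symbol.rstrip("?")
    if name not in GRAMMAR:            # terminal word
        return [symbol]
    results = []
    for rule in GRAMMAR[name]:
        # Expand each symbol in the rule, then combine the alternatives.
        for combo in product(*(expand(s) for s in rule)):
            results.append(" ".join(w for w in combo if w))
    if optional:                        # "?" marks the symbol as skippable
        results.append("")
    return results

print(expand("request"))
# ['play some pop music', 'play some rock music', 'play more pop music',
#  'play more rock music', 'play pop music', 'play rock music']
```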

Given a list of, say, 50 golden utterances, a computational linguist could probably generate a representative grammar in a day, and it could be operationalized by the end of the following day. With the Applied Modeling and Data Science (AMDS) group’s grammar induction tool, that whole process takes seconds.

AMDS research scientists Ge Yu and Chris Hench and language engineer Zac Smith experimented with a neural network that learned to produce grammars from golden utterances. But they found that an alternative approach, called Bayesian model merging, offered similar performance with advantages in reproducibility and iteration speed.

The resulting system identifies linguistic patterns in lists of golden utterances and uses them to generate candidate rules for varying sentence templates. For instance, if two words (say, “pop” and “rock”) consistently occur in similar syntactic positions, but the phrasing around them varies, then one candidate rule will be that (in some defined contexts) “pop” and “rock” are interchangeable.

After exhaustively listing candidate rules, the system uses Bayesian probability to calculate which rule accounts for the most variance in the sample data, without overgeneralizing or introducing inconsistencies. That rule becomes an eligible variable in further iterations of the process, which recursively repeats until the grammar is optimized.
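As a rough illustration of the candidate-generation step, the sketch below groups words that occur in identical contexts across a handful of golden utterances. The production tool goes much further, scoring and merging candidate rules with Bayesian model merging; this toy version stops at proposing candidates.

```python
from collections import defaultdict

# Illustrative only: naive detection of interchangeable words.
golden = [
    "play some pop music",
    "play some rock music",
    "play more pop music",
]

contexts = defaultdict(set)
for sentence in golden:
    words = sentence.split()
    for i, word in enumerate(words):
        # Context = the sentence with this word's position left blank.
        context = tuple(words[:i] + ["_"] + words[i + 1:])
        contexts[context].add(word)

# Words that share a context are candidates for an interchangeability rule.
for context, words in contexts.items():
    if len(words) > 1:
        print(" ".join(context), "->", sorted(words))
# play _ pop music -> ['more', 'some']
# play some _ music -> ['pop', 'rock']
```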

Crucially, the tool’s method for creating substitution rules allows it to take advantage of existing catalogues of frequently occurring terms or phrases. If, for instance, the golden utterances were sports related, and the grammar induction tool determined that the words “Celtics” and “Lakers” were interchangeable, it would also conclude that they were interchangeable with “Warriors”, “Spurs”, “Knicks”, and all the other names of NBA teams in a standard catalogue used by a variety of Alexa services.

From a list of 50 or 60 golden utterances, the grammar induction tool might extract 100-odd rules that can generate several thousand sentences of training data, all in a matter of seconds.

Safe Swaps

The guided-resampling tool also uses catalogues and existing examples to augment training data. Suppose that the available data contains the sentences “play Camila Cabello” and “can you play a song by Justin Bieber?”, which have been annotated to indicate that “Camila Cabello” and “Justin Bieber” are of the type ArtistName. In NLU parlance, ArtistName is a slot type, and “Camila Cabello” and “Justin Bieber” are slot values.

The guided-resampling tool generates additional training examples by swapping out slot values — producing, for instance, “play Justin Bieber” and “can you play a song by Camila Cabello?” Adding the vast Amazon Music databases of artist names and song titles to the mix produces many additional thousands of training sentences.
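Under assumed annotation and catalogue structures (the real system works with Alexa's internal data formats and the full Amazon Music catalogues), slot-value resampling can be sketched like this:

```python
import random

# Each example is a list of (text span, slot label) pairs; CATALOGUE holds
# known values per slot type (a small stand-in for the real catalogues).
examples = [
    [("play", "other"), ("Camila Cabello", "ArtistName")],
    [("can you play a song by", "other"), ("Justin Bieber", "ArtistName")],
]
CATALOGUE = {"ArtistName": ["Camila Cabello", "Justin Bieber", "Madonna"]}

def resample(example, rng):
    """Swap each slot value for another value of the same slot type."""
    new = []
    for text, label in example:
        if label in CATALOGUE:
            text = rng.choice(CATALOGUE[label])
        new.append((text, label))
    return new

rng = random.Random(0)
for ex in examples:
    print(" ".join(text for text, _ in resample(ex, rng)))
```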

Blindly swapping slot values can lead to unintended consequences, however. So which slot values can be swapped safely? For example, in the sentences “play jazz music” and “read detective books”, both “jazz” and “detective” would be labeled with the slot type GenreName. But customers are unlikely to ask Alexa to play “detective music”, and unnatural training data would degrade the performance of the resulting NLU model.

AMDS’s Olga Golovneva, a research scientist, and Christopher DiPersio, a language engineer, used the Jaccard index — which measures the overlap between two sets — to evaluate pairwise similarity between slot contents in different types of requests. On that basis, they defined a threshold for valid slot mixing.
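The Jaccard index itself is simple to compute. The snippet below illustrates the comparison on toy genre inventories, with an assumed cutoff standing in for the threshold that Golovneva and DiPersio derived empirically:

```python
def jaccard(a, b):
    """Jaccard index: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy GenreName values observed in two different request types; the real
# comparison runs over full slot-value inventories.
play_genres = {"jazz", "rock", "classical", "pop"}
read_genres = {"detective", "romance", "classical", "mystery"}

THRESHOLD = 0.5   # assumed cutoff, for illustration only
score = jaccard(play_genres, read_genres)
print(f"similarity = {score:.2f}, safe to mix: {score >= THRESHOLD}")
# similarity = 0.14, safe to mix: False
```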

Quantifying Complexity

As there are many different ways to request music, another vital question is how many variations of each template to generate in order to produce realistic training data. One answer is simply to follow the data distributions from languages that Alexa already supports.

Comparing distributions of sentence types across languages requires representing customer requests in a more abstract form. We can encode a sentence like “play Camila Cabello” according to the word pattern other + ArtistName, where other represents the verb “play”, and ArtistName represents “Camila Cabello”. For “play ‘Havana’ by Camila Cabello”, the pattern would be other + SongName + other + ArtistName. To abstract away from syntactic differences between languages, we can condense this pattern further to other + ArtistName + SongName, which represents only the semantic concepts included in the request.
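Here is a minimal sketch of that abstraction step, assuming utterances are already annotated as (text span, slot label) pairs; it produces the condensed, order-independent form described above.

```python
def semantic_pattern(annotated_tokens):
    """Collapse an annotated utterance to its condensed semantic pattern."""
    slots = [label for _, label in annotated_tokens if label != "other"]
    # Sorting the slot types abstracts away word-order differences
    # between languages.
    return tuple(["other"] + sorted(slots))

utterance = [
    ("play", "other"),
    ("Havana", "SongName"),
    ("by", "other"),
    ("Camila Cabello", "ArtistName"),
]
print(semantic_pattern(utterance))
# ('other', 'ArtistName', 'SongName')
```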

Given this level of abstraction, Golovneva and DiPersio investigated several alternative techniques for determining the semantic distributions of synthetic data.

Using Shannon entropy, which is a measure of uncertainty, Golovneva and DiPersio calculated the complexity of semantic sentence patterns, focusing on slots and their combinations. Entropy for semantic slots takes into consideration how many different values each slot might have, as well as how frequent each slot is in the data set overall. For example, the slot SongName occurs very frequently in music requests, and its potential values (different song titles) number in the millions; in contrast, GenreName also occurs frequently in music requests, but its set of possible values (music genres) is fairly small.
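The entropy computation itself is standard. A small example, with toy value samples standing in for the real catalogues, shows why a slot drawing from millions of song titles scores higher than one drawing from a handful of genres:

```python
import math
from collections import Counter

def slot_entropy(values):
    """Shannon entropy (in bits) of the observed slot-value distribution."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

song_names = ["Havana", "Vogue", "Señorita", "Thriller", "Hello", "Stay"]
genre_names = ["pop", "rock", "pop", "jazz", "pop", "rock"]
print(slot_entropy(song_names))   # ≈ 2.58 bits (6 equally likely titles)
print(slot_entropy(genre_names))  # ≈ 1.46 bits (3 genres, skewed toward pop)
```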

Customer requests to Alexa often include multiple slots (such as “play ‘Vogue’|SongName by Madonna|ArtistName” or “set a daily|RecurrenceType reminder to {walk the dog}|ReminderContent for {seven a. m.}|Time”), which increases the pattern complexity further.

In their experiments, Golovneva and DiPersio used the entropy measures from slot distributions in the data and the complexity of slot combinations to determine the optimal distribution of semantic patterns in synthetic training data. This results in proportionally larger training sets for more complex patterns than for less complex ones. NLU models trained on such data sets achieved higher performance than models trained on data sets that merely “borrowed” slot distributions from existing languages.
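A simplified sketch of the allocation step, with made-up complexity scores standing in for the combined entropy measures described above:

```python
# Allocate a fixed synthetic-data budget across semantic patterns in
# proportion to their complexity scores (illustrative values only; the
# production weighting combines slot entropies and slot-combination
# complexity).
pattern_complexity = {
    ("other", "ArtistName"): 1.2,
    ("other", "ArtistName", "SongName"): 2.9,
    ("other", "GenreName"): 0.8,
}
BUDGET = 10_000  # total synthetic utterances to generate

total = sum(pattern_complexity.values())
allocation = {p: round(BUDGET * c / total)
              for p, c in pattern_complexity.items()}
print(allocation)  # more complex patterns receive more synthetic examples
```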

Alexa is always getting smarter, and these and other innovations from AMDS researchers help ensure the best experience possible when Alexa launches in a new locale.

Acknowledgments: Ge Yu, Chris Hench, Zac Smith, Olga Golovneva, Christopher DiPersio, Karolina Owczarzak, Sreekar Bhaviripudi, Andrew Turner
