How to make on-device speech recognition practical

Branching encoder networks make operation more efficient, while “neural diffing” reduces bandwidth requirements for model updates.

Historically, Alexa’s automatic-speech-recognition models, which convert speech to text, have run in the cloud. But in recent years, we’ve been working to move more of Alexa’s computational capacity to the edge of the network — to Alexa-enabled devices themselves.

The move to the edge promises faster response times, since data doesn’t have to travel to and from the cloud; lower consumption of Internet bandwidth, which is important in some applications; and availability on devices with inconsistent Internet connections, such as Alexa-enabled in-car sound systems.

At this year’s Interspeech, we and our colleagues presented two papers describing some of the innovations we’re introducing to make it practical to run Alexa at the edge.

In one paper, “Amortized neural networks for low-latency speech recognition”, we show how to reduce the computational cost of neural-network-based automatic speech recognition (ASR) by 45% with no loss in accuracy. Our method also achieves lower latency than similar methods for reducing computation, enabling Alexa to respond more quickly to customer requests.

In the other paper, “Learning a neural diff for speech models”, we show how to dramatically reduce the bandwidth required to update neural models on the edge. Instead of transmitting a complete model, we transmit a set of updates for some select parameters. In our experiments, this reduced the size of the update by as much as 98% with negligible effect on model accuracy.

Amortized neural networks

Neural ASR models are usually encoder-decoder models. The input to the encoder is a sequence of short speech snippets called frames, which the encoder converts into a representation that’s useful for decoding. The decoder translates that representation into text.

Neural encoders can be massive, requiring millions of computations for each input. But much of a speech signal is uninformative, consisting of pauses between syllables or redundant sounds. Passing uninformative frames through a huge encoder is just wasted computation.

Our approach is to use multiple encoders, of differing complexity, and decide on the fly which should handle a given frame of speech. That decision is made by a small neural network called an arbitrator, which must process every input frame before it’s encoded. The arbitrator adds some computational overhead to the process, but the time savings from using a leaner encoder are more than enough to offset it.
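
To make this concrete, here is a minimal PyTorch sketch of a two-branch encoder with an arbitrator. The layer sizes, the two-way split, and the hard routing rule are illustrative assumptions for exposition, not the architecture from our paper:

```python
import torch
import torch.nn as nn

class BranchedEncoder(nn.Module):
    """Inference-time sketch: a small arbitrator routes each frame to
    either a complex ("slow") or a lean ("fast") encoder branch.
    All dimensions and layer choices are illustrative assumptions."""

    def __init__(self, feat_dim=80, hidden=512):
        super().__init__()
        self.hidden = hidden
        # Lightweight arbitrator: sees every frame, scores both routes.
        self.arbitrator = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2)
        )
        # Complex encoder: deeper and wider, many more FLOPs per frame.
        self.slow = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        # Lean encoder: a single projection, far fewer FLOPs per frame.
        self.fast = nn.Linear(feat_dim, hidden)

    def forward(self, frames):  # frames: (num_frames, feat_dim)
        route = self.arbitrator(frames).argmax(dim=-1)  # 0 = slow, 1 = fast
        out = frames.new_empty(frames.size(0), self.hidden)
        slow, fast = route == 0, route == 1
        # Each frame passes through only its chosen branch, so every
        # frame sent to the fast branch saves computation.
        if slow.any():
            out[slow] = self.slow(frames[slow])
        if fast.any():
            out[fast] = self.fast(frames[fast])
        return out, route
```

Note that the savings depend on the arbitrator staying tiny relative to either encoder; its per-frame cost has to be far smaller than the computation it avoids by routing frames to the lean branch.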

Researchers have tried similar approaches in domains other than speech, but when they trained their models, they minimized the average complexity of the frame-encoding process. That leaves open the possibility that the last few frames of a signal will be routed to the more complex encoder, delaying the final transcription (i.e., increasing latency).

Figure: Both processing flows (a and b) distribute the same number of frames to the fast and slow (F and S) encoders, resulting in the same average computational cost, but the top flow incurs a significantly higher latency.

In our paper, we propose a new loss function that adds a penalty (Lamr in the figure above) for routing frames to the fast encoder when we don’t have a significant audio backlog. Without the penalty term, our branched-encoder model reduces latency to 29 to 234 milliseconds, versus thousands of milliseconds for models with a single encoder. But adding the penalty term cuts latency even further, to the 2-to-9-millisecond range.
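
Here is one way such a penalty might enter the training objective. The form of the penalty, the weight alpha, and the backlog signal are all illustrative assumptions, not the formulation from the paper:

```python
import torch

def amortized_loss(asr_loss, fast_probs, backlog, alpha=0.1):
    """Add a latency-aware penalty to the usual ASR training loss.

    fast_probs: per-frame probability the arbitrator assigns to the
    fast encoder; backlog: number of frames queued when each frame was
    routed. Both tensors have one entry per frame. The penalty form
    and alpha are illustrative assumptions.
    """
    # Penalize fast-encoder routing on frames that arrive when the
    # queue is empty, reserving fast capacity for backlogged frames.
    l_amr = (fast_probs * (backlog == 0).float()).mean()
    return asr_loss + alpha * l_amr
```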

Figure: The audio backlog is one of the factors that the arbitrator considers when deciding which encoder should receive a given frame of audio.

In our experiments, we used two encoders, one complex and one lean, although in principle, our approach could generalize to larger numbers of encoders.

We train the arbitrator and both encoders together, end to end. During training, the same input passes through both encoders, and based on the accuracy of the resulting speech transcription, the arbitrator learns a probability distribution, which describes how often it should route frames with certain characteristics to the slow or fast encoder.

Over multiple epochs — multiple passes through the training data — we turn up the “temperature” on the arbitrator, skewing the distribution it learns more dramatically. In the first epoch, the split for a certain type of frame might be 70%-30% toward one encoder or the other. After three or four epochs, however, all of the splits are more like 99.99%-0.01% — essentially binary classifications.
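
In the usual softmax convention, this sharpening corresponds to dividing the arbitrator’s logits by a temperature that shrinks toward zero over epochs. Here is a minimal sketch of such a schedule (the starting temperature and decay rate are assumptions):

```python
import torch

def routing_probs(logits, epoch, t0=1.0, decay=0.4):
    """Sharpen the arbitrator's routing distribution over epochs.

    Dividing logits by a shrinking temperature pushes a soft split
    such as 70%-30% toward a near-binary 99.99%-0.01%. The schedule
    (t0, decay) is an illustrative assumption.
    """
    temperature = t0 * decay ** epoch
    return torch.softmax(logits / temperature, dim=-1)

# Example: logits that give roughly a 70%-30% split at epoch 0
logits = torch.tensor([0.85, 0.0])
print(routing_probs(logits, epoch=0))  # ~[0.70, 0.30]
print(routing_probs(logits, epoch=4))  # essentially [1.0, 0.0]
```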

We used three baselines in our experiments, all of them single-encoder models. One was the full-parameter model; the other two were compressed versions of it, one through sparsification (pruning of nonessential network weights) and the other through matrix factorization (decomposing the model’s weight matrix into two smaller matrices that are multiplied together).

Against the baselines, we compared two versions of our model, which were compressed through the same two methods. We ran all the models on a single-threaded processor at 650 million FLOPs per second.

Our sparse model had the lowest latency — two milliseconds, compared to 3,410 to 6,154 milliseconds for the baselines — and our matrix factorization model required the fewest floating-point operations per frame — 23 million, versus 30 million to 43 million for the baselines. Our accuracy remained comparable, however — a word error rate of 8.6% to 8.7%, versus 8.5% to 8.7% for the baselines.

Neural diffs

The ASR models that power Alexa are constantly being updated. During the Olympics, for instance, we anticipated a large spike in requests that used words like “Ledecky” and “Kalisz” and updated our models accordingly.

With cloud-based ASR, when we’ve updated a model, we simply send copies of it to a handful of servers in a data center. But with edge ASR, we may ultimately need to send updates to millions of devices simultaneously. So one of our research goals is to minimize the bandwidth requirements for edge updates.

In our other Interspeech paper, we borrow an idea from software engineering — that of the diff, or a file that charts the differences between the previous version of a codebase and the current one.

Our idea was that, if we could develop the equivalent of a diff for neural networks, we could use it to update on-device ASR models, rather than having to transmit all the parameters of a complete network with every update.

We experimented with two different approaches to creating a diff: matrix sparsification and hashing. With matrix sparsification, we begin with two matrices of the same size, one that represents the weights of the connections in the existing ASR model and one that’s all zeroes.

Then, when we retrain the ASR model on new data, we update, not the parameters of the old model, but the entries in the second matrix — the diff. The updated model is a linear combination of the original weights and the values in the diff.
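
In code, applying such an update is simple. A sketch assuming a plain additive combination (the coefficient alpha = 1.0 is an assumed choice, not the paper’s):

```python
import numpy as np

def apply_diff(w_old, diff, alpha=1.0):
    """Combine the frozen on-device weights with a trained diff.

    Only the diff's non-zero entries (with their indices) need to be
    transmitted, which is what shrinks the update payload. The linear
    combination shown here is an illustrative assumption.
    """
    return w_old + alpha * diff

# Toy example: a 4x4 layer updated by a diff with two non-zero entries.
w_old = np.ones((4, 4))
diff = np.zeros((4, 4))
diff[0, 1], diff[2, 3] = 0.3, -0.1
w_new = apply_diff(w_old, diff)
```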

Figure: Over successive training epochs, we prune the entries of matrices with too many non-zeroes, gradually sparsifying the diff.

When training the diff, we use an iterative procedure that prunes matrices with too many non-zero entries. As we did when training the arbitrator in the branched-encoder network, we turn up the temperature over successive epochs to make the diff sparser and sparser.
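
Here is a minimal sketch of one such pruning step, using a magnitude criterion and an epoch-by-epoch keep budget (both the criterion and the schedule are illustrative assumptions):

```python
import numpy as np

def prune_diff(diff, keep_fraction):
    """Zero all but the largest-magnitude entries of the diff.

    Shrinking keep_fraction over successive epochs (say 0.5, then
    0.25, then 0.1) sparsifies the diff gradually; the exact schedule
    and the magnitude criterion are illustrative assumptions.
    """
    k = max(1, int(keep_fraction * diff.size))
    flat = np.sort(np.abs(diff), axis=None)   # ascending magnitudes
    cutoff = flat[-k]                         # k-th largest magnitude
    return np.where(np.abs(diff) >= cutoff, diff, 0.0)
```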

Our other approach to creating diffs was to use a hash function, a function that maps a large number of mathematical objects to a much smaller number of storage locations, or “buckets”. Hash functions are designed to distribute objects evenly across buckets, regardless of the objects’ values.

With this approach, we hash the locations in the diff matrix to buckets, and then, during training, we update the values in the buckets, rather than the values in the matrices. Since each bucket corresponds to multiple locations in the diff matrix, this reduces the amount of data we need to transfer to update a model. 
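
A sketch of the expansion step, with a seeded random assignment standing in for a real hash function (the bucket count and assignment scheme are assumptions):

```python
import numpy as np

def expand_hashed_diff(shape, buckets, seed=0):
    """Expand trainable bucket values into a full diff matrix.

    Every location in the diff is mapped to one of len(buckets)
    buckets, so only the bucket values are trained and transmitted.
    A seeded RNG stands in for a real hash function here; the server
    and the device only need to agree on the mapping.
    """
    rng = np.random.default_rng(seed)
    assignment = rng.integers(0, len(buckets), size=shape)
    return buckets[assignment]

# A million-entry diff represented by 1,024 transmitted values:
buckets = np.zeros(1024)
diff = expand_hashed_diff((1000, 1000), buckets)
```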

Figure: With hash diffing, a small number of weights (in the hash buckets) are used across a matrix with a larger number of entries.

One of the advantages of our approach, relative to other approaches to compression, such as matrix factorization, is that with each update, our diffs can target a different set of model weights. By contrast, traditional compression methods will typically lock you into modifying the same set of high-importance weights with each update.

Figure: An advantage of our diffing approach is that we can target a different set of weights with each model update, which gives us more flexibility in adapting to a changing data landscape.

In our experiments, we investigated the effects of three to five consecutive model updates, using different diffs for each. Hash diffing sometimes worked better for the first few updates, but over repeated iterations, models updated through hash diffing diverged more from full-parameter models. With sparsification diffing, the word error rate of a model updated five times in a row was less than 1% away from that of the full-parameter model, with diffs whose size was set at 10% of the full model’s.
