Syntiant NDP101
Syntiant's NDP architecture is built from the ground up to run deep learning algorithms. The company says its NDP101 neural decision processor achieves breakthrough performance by coupling computation and memory, exploiting the inherent parallelism of deep learning, and computing at only the required numerical precision.
Credit: Syntiant

3 questions with Jeremy Holleman: How to design and develop ultra-low-power AI processors

Holleman, the chief scientist of Alexa Fund company Syntiant, explains why the company’s new architecture allows machine learning to be deployed practically anywhere.  

Editor’s Note: This article is the latest installment within a series Amazon Science is publishing related to the science behind products and services from companies in which Amazon has invested. Syntiant, founded in 2017, has shipped more than 10 million units to customers worldwide, and has obtained $65 million in funding from leading technology companies, including the Amazon Alexa Fund.

In late July, Amazon held its Alexa Live event, where the company introduced more than 50 features to help developers and device makers build ambient voice-computing experiences, and drive the growth of voice computing.

Jeremy Holleman, Syntiant's chief scientist
Jeremy Holleman is Syntiant's chief scientist, and a professor of electrical and computer engineering at the University of North Carolina at Charlotte.
Credit: Syntiant

The event included an Amazon Alexa Startups Showcase in which Syntiant, a semiconductor company founded in 2017, and based in Irvine, California, shared its vision for making voice the computing interface of the future.  

In 2017, Kurt Busch, Syntiant’s chief executive officer, and Jeremy Holleman, Syntiant’s chief scientist, and a professor of electrical and computer engineering at the University of North Carolina at Charlotte, were focused on finding an answer to the question: How do you optimize the performance of machine learning models on power- and cost-constrained hardware?

According to Syntiant, they — and other members of Syntiant’s veteran management team — had the idea for a processor architecture that could deliver 200 times the efficiency and 20 times the performance of existing edge processors, at half the cost. One key to their approach: optimizing for memory access, rather than the logic-centric focus of traditional processors.

This insight, among others, led to the formation of Syntiant, which for the past four years has been designing and developing ultra-low-power, high-performance deep neural network processors for computing at the network’s edge. These processors help reduce latency, and increase the privacy and security of power- and cost-constrained applications running on devices as small as earbuds and as large as automobiles.

Syntiant’s processors enable always-on voice (AOV) control for most battery-powered devices, from cell phones and earbuds to drones, laptops, and other voice-activated products. The company’s Neural Decision Processors (NDPs) provide highly accurate wake word, command word, and event detection in a tiny package with near-zero power consumption.

Syntiant CEO on the future of ambient computing
During the Amazon Alexa Startups Showcase, Kurt Busch, CEO of Syntiant, an Alexa Fund company, explained how they're using the latest in voice technology to invent the future of ambient computing, and why he thinks voice will be the next user interface.

Holleman is considered a leading authority on ultra-low-power integrated circuits, and directs the Integrated Silicon Systems Laboratory at the University of North Carolina at Charlotte, where he is an associate professor. He is also a coauthor of the book “Ultra Low-Power Integrated Circuit Design for Wireless Neural Interfaces,” which was first published in 2011.

Amazon Science asked Holleman three questions about the challenges of designing and developing ultra-low-power AI processors, and why he believes voice will become the predominant user interface of the future.

Q. You are one of 22 authors on a paper, "MLPerf Tiny Benchmark", which has been accepted to the NeurIPS 2021 Conference. What does this benchmark suite comprise, and why is it significant to the tinyML field?

The MLPerf Tiny Benchmark actually includes four tests meant to measure the performance and efficiency of very small devices on ML inference: keyword spotting, person detection, image recognition, and anomaly detection. For each test, there is a reference model, and code to measure the latency and power on a reference platform.

I try to think about the benchmark from the standpoint of a system developer – someone building a device that needs some local intelligence. They have to figure out, with a given energy budget and system requirements, what solution is going to work for them. So they need to understand the power consumption and speed of different hardware. When you look at most of the information available, everyone measures their hardware on different things, so it’s really hard to compare. The benchmark makes it clear exactly what is being measured and – in the closed division – every submission is running the exact same model, so it’s a clear apples-to-apples comparison.

Then the open division takes the same principle – every submission does the same thing – but allows for some different tradeoffs by just defining the problem and allowing submitters to run different models that may take advantage of particular aspects of their hardware. So you wind up with a Pareto surface of accuracy, power, and speed.  I think this last part is particularly important in the “tiny” space because there is a lot of room to jointly optimize models, hardware, and features to get high-performing and high-efficiency end-to-end systems.
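The Pareto surface Holleman describes can be made concrete with a short script that filters a set of submissions down to the entries that are not dominated on any axis. The submission names and numbers below are invented for illustration; they are not actual MLPerf Tiny results.

```python
# Hypothetical (name, accuracy %, energy uJ/inference, latency ms) tuples,
# loosely in the spirit of MLPerf Tiny open-division reporting.
# All numbers are invented for illustration.
submissions = [
    ("A", 91.0, 50.0, 10.0),
    ("B", 89.0, 20.0, 8.0),
    ("C", 90.0, 60.0, 12.0),  # dominated by A: worse on every axis
    ("D", 85.0, 5.0, 3.0),
]

def dominates(x, y):
    """x dominates y if it is at least as good on every axis
    (higher accuracy, lower energy, lower latency) and strictly
    better on at least one."""
    at_least_as_good = x[1] >= y[1] and x[2] <= y[2] and x[3] <= y[3]
    strictly_better = x[1] > y[1] or x[2] < y[2] or x[3] < y[3]
    return at_least_as_good and strictly_better

def pareto_frontier(subs):
    """Keep only submissions that no other submission dominates."""
    return [s for s in subs if not any(dominates(o, s) for o in subs)]

for name, acc, energy, lat in pareto_frontier(submissions):
    print(f"{name}: {acc}% accuracy, {energy} uJ/inference, {lat} ms")
```

In this toy example, A, B, and D all survive: each wins on at least one axis, which is exactly the kind of tradeoff space the open division exposes when submitters tune models to their hardware.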

Q. What do you consider Syntiant’s key ingredients in your development and design of ultra-low-power AI processors, and how will your team’s work contribute to voice becoming the predominant user interface of the future?

I would say there are two major elements that have been key to our success. The first is, as I mentioned before, that edge ML requires tight coupling between the hardware and the algorithms. From the very beginning at Syntiant, we’ve had our silicon designers and our modelers working closely together. That shows up in our office arrangement, with hardware and software groups all intermingled; in code and design reviews; really all across the company. And I think that’s paid off in outcomes. We see how easy it is to map a given algorithm to our hardware, because the hardware was designed to do all the hard work of coordinating memory access in a way that’s optimized for exactly the types of computation we see in ML workloads. And for the same reason, we see the benefits of that approach in power and performance.

The second big piece is that we realized that deep learning is still such a new field that the expertise required to deliver production-grade solutions is still very rare. It’s easy enough to download an MNIST or CIFAR demo, train it up and you think, “I’ve got this figured out!” But when you deploy a device to millions of people who interact with it on a daily basis, the job becomes much harder. You need to acquire data, validate it, debug models, and it’s a big job. We knew that for most customers, we couldn’t just toss a piece of silicon over the fence and leave the rest to them. That led us to put a lot of effort into building a complete pipeline addressing the data tasks, training, and evaluation, so we can provide a complete solution to customers who don’t have a ton of ML expertise in house.

Q. What in particular makes edge processing difficult?

On the hardware side, the big challenges are power and cost. Whether you’re talking about a watch, an earbud, or a phone, consumers have some pretty hard requirements for how long a battery needs to last – generally a day – and how much they will pay for something. And on the modeling side, edge devices find themselves in a tremendously diverse set of environments, so you need a voice assistant to recognize you not just in the kitchen or in the car, but on a factory floor, at a football game, and everywhere else you can imagine going.

Then those three things — cost, power, and model robustness — push against each other like the classic balloon analogy. If you push down cost by choosing a lower-end processor, it may not have the throughput to run the model quickly, so you run at a lower frame rate, under-sampling the input signal, and you miss events. Or you find a model that works well, and you run it fast enough, but then the power required to run it limits battery life. This tradeoff is especially difficult for features that are always on, like a wakeword detector, or person detection in a security camera. At Syntiant, we had to address all of these issues simultaneously, which is why it was so important to have all of our teams tightly connected, work through the use cases, and know how each piece affected all the other pieces.
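The "battery needs to last a day" constraint Holleman mentions translates into a strikingly small power budget for an always-on feature. A back-of-the-envelope sketch makes this vivid; every number below is an illustrative assumption, not a Syntiant specification.

```python
# Rough always-on power budget for an earbud-class device.
# All values are illustrative assumptions, not measured figures.
battery_mah = 50.0    # assumed earbud-class battery capacity
voltage = 3.7         # nominal Li-ion cell voltage
battery_mwh = battery_mah * voltage  # stored energy in milliwatt-hours

target_hours = 24.0   # "the battery needs to last a day"
total_budget_mw = battery_mwh / target_hours  # average power budget

# If the rest of the system (radio, audio path, leakage) consumes most
# of that budget, the always-on wake-word detector might get only a
# small slice -- here, an assumed 10%:
detector_budget_mw = 0.1 * total_budget_mw

print(f"total average budget: {total_budget_mw:.2f} mW")
print(f"always-on detector budget: {detector_budget_mw * 1000:.0f} microwatts")
```

Under these assumptions the detector gets well under a milliwatt, which is why a general-purpose processor running continuously is a non-starter and near-zero-power inference hardware matters.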


Having done that work, the result is that you get the power of modern ML in tiny devices with almost no impact on the battery life. And the possibilities, especially for voice interfaces, are very exciting. We’ve all grown accustomed to interacting with our phone by voice, and we’ve seen how often we want to do something but don’t have a free hand available for a tactile interface.

Syntiant’s technology is making it possible to bring that experience to smaller and cheaper devices with all of the processing happening locally. So many of the devices we use have useful information they can’t share with us because the interface would be too expensive. Imagine being able to say “TV remote, where are you?” or “Smoke alarm, why are you beeping?” and getting a clear and quick answer. We’ve forgotten that some annoying things we’ve gotten so used to can be fixed. And of course you don’t want all of the cost and the privacy concerns associated with sending all of that information to the cloud.

So we’re focused on putting that level of intelligence right in the device. To deliver that, we need all of these pieces to come together: the data pipeline, the models, and the hardware. Conventional general-purpose processors don’t have the efficiency to run strong models within the constraints that edge devices have. With our new architecture, powerful machine learning can be deployed practically anywhere for the first time.
