Syntiant NDP101
Syntiant's NDP architecture is built from the ground up to run deep learning algorithms. The company says its NDP101 neural decision processor achieves breakthrough performance by tightly coupling computation and memory, exploiting the inherent parallelism of deep learning, and computing only at the numerical precision a task requires.
Credit: Syntiant

3 questions with Jeremy Holleman: How to design and develop ultra-low-power AI processors

Holleman, the chief scientist of Alexa Fund company Syntiant, explains why the company’s new architecture allows machine learning to be deployed practically anywhere.  

Editor’s Note: This article is the latest installment in a series Amazon Science is publishing about the science behind products and services from companies in which Amazon has invested. Syntiant, founded in 2017, has shipped more than 10 million units to customers worldwide, and has obtained $65 million in funding from leading technology companies, including the Amazon Alexa Fund.

In late July, Amazon held its Alexa Live event, where the company introduced more than 50 features to help developers and device makers build ambient voice-computing experiences, and drive the growth of voice computing.

Jeremy Holleman, Syntiant's chief scientist
Jeremy Holleman is Syntiant's chief scientist, and a professor of electrical and computer engineering at the University of North Carolina at Charlotte.
Credit: Syntiant

The event included an Amazon Alexa Startups Showcase in which Syntiant, a semiconductor company founded in 2017, and based in Irvine, California, shared its vision for making voice the computing interface of the future.  

In 2017, Kurt Busch, Syntiant’s chief executive officer, and Jeremy Holleman, Syntiant’s chief scientist, and a professor of electrical and computer engineering at the University of North Carolina at Charlotte, were focused on finding an answer to the question: How do you optimize the performance of machine learning models on power- and cost-constrained hardware?

According to Syntiant, they and other members of the company’s veteran management team had the idea for a processor architecture that could deliver 200 times the efficiency and 20 times the performance of existing edge processors, at half the cost. One key to their approach: optimizing for memory access, where traditional processors focus on logic.

This insight, among others, led to the formation of Syntiant, which for the past four years has been designing and developing ultra-low-power, high-performance deep neural network processors for computing at the network’s edge. These processors help reduce latency and increase the privacy and security of power- and cost-constrained applications running on devices as small as earbuds and as large as automobiles.

Syntiant’s processors enable always-on voice (AOV) control for most battery-powered devices, from cell phones and earbuds, to drones, laptops and other voice-activated products. The company’s Neural Decision Processors (NDPs) provide highly accurate wake word, command word and event detection in a tiny package with near-zero power consumption.

Syntiant CEO on the future of ambient computing
During the Amazon Alexa Startups Showcase, Kurt Busch, CEO of Syntiant, an Alexa Fund company, explained how they're using the latest in voice technology to invent the future of ambient computing, and why he thinks voice will be the next user interface.

Holleman is considered a leading authority on ultra-low-power integrated circuits, and directs the Integrated Silicon Systems Laboratory at the University of North Carolina at Charlotte, where he is an associate professor. He is also a coauthor of the book “Ultra Low-Power Integrated Circuit Design for Wireless Neural Interfaces”, first published in 2011.

Amazon Science asked Holleman three questions about the challenges of designing and developing ultra-low-power AI processors, and why he believes voice will become the predominant user interface of the future.

Q. You are one of 22 authors on a paper, "MLPerf Tiny Benchmark", which has been accepted to the NeurIPS 2021 Conference. What does this benchmark suite comprise, and why is it significant to the tinyML field?

The MLPerf Tiny Benchmark actually includes four tests meant to measure the performance and efficiency of very small devices on ML inference: keyword spotting, person detection, image recognition, and anomaly detection. For each test, there is a reference model, and code to measure the latency and power on a reference platform.
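The measurement side of such a benchmark can be sketched with a minimal timing harness. This is an illustrative outline only, not the actual MLPerf Tiny code; `run_inference` is a hypothetical stand-in for the device-under-test's inference call, and the warmup/median conventions are common benchmarking practice rather than anything specified in the article:

```python
# Minimal sketch of a latency-measurement harness in the spirit of a
# closed-division benchmark: every submission runs the same reference model,
# and latency is reported over repeated timed inferences.
import time
import statistics

def measure_latency_ms(run_inference, warmup=10, runs=100):
    """Median single-inference latency in milliseconds.

    run_inference: zero-argument callable performing one inference
    (hypothetical stand-in for the device under test).
    """
    for _ in range(warmup):
        run_inference()  # warm caches / lazy initialization before timing
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(samples)  # median is robust to scheduling spikes
```

Reporting the median rather than the mean keeps one-off scheduler or interrupt spikes from skewing the result, which matters when comparing devices whose latencies differ by orders of magnitude.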

I try to think about the benchmark from the standpoint of a system developer – someone building a device that needs some local intelligence. They have to figure out, with a given energy budget and system requirements, what solution is going to work for them. So they need to understand the power consumption and speed of different hardware. When you look at most of the information available, everyone measures their hardware on different things, so it’s really hard to compare. The benchmark makes it clear exactly what is being measured and – in the closed division – every submission is running the exact same model, so it’s a clear apples-to-apples comparison.

Then the open division takes the same principle – every submission does the same thing – but allows for some different tradeoffs by just defining the problem and allowing submitters to run different models that may take advantage of particular aspects of their hardware. So you wind up with a Pareto surface of accuracy, power, and speed.  I think this last part is particularly important in the “tiny” space because there is a lot of room to jointly optimize models, hardware, and features to get high-performing and high-efficiency end-to-end systems.
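The Pareto surface Holleman describes can be made concrete with a small filter over submission results. The metric names and numbers below are hypothetical, purely for illustration, not actual open-division results:

```python
# Hedged sketch: extract the Pareto front from (accuracy, power, latency)
# results, where higher accuracy is better and lower power/latency is better.

def dominates(a, b):
    """True if submission a is at least as good as b on every axis
    and strictly better on at least one."""
    at_least_as_good = (a["accuracy"] >= b["accuracy"]
                        and a["power_mw"] <= b["power_mw"]
                        and a["latency_ms"] <= b["latency_ms"])
    strictly_better = (a["accuracy"] > b["accuracy"]
                       or a["power_mw"] < b["power_mw"]
                       or a["latency_ms"] < b["latency_ms"])
    return at_least_as_good and strictly_better

def pareto_front(submissions):
    """Keep only submissions that no other submission dominates."""
    return [s for s in submissions
            if not any(dominates(o, s) for o in submissions if o is not s)]

# Hypothetical submissions (illustrative values only):
subs = [
    {"name": "A", "accuracy": 0.91, "power_mw": 1.2, "latency_ms": 8.0},
    {"name": "B", "accuracy": 0.88, "power_mw": 0.4, "latency_ms": 12.0},
    {"name": "C", "accuracy": 0.86, "power_mw": 0.9, "latency_ms": 15.0},
]
front = pareto_front(subs)  # C is worse than B on every axis, so only A and B remain
```

Submissions that survive this filter represent genuinely different tradeoffs, which is exactly why an open division is informative: no single number ranks A against B.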

Q. What do you consider Syntiant’s key ingredients in your development and design of ultra-low-power AI processors, and how will your team’s work contribute to voice becoming the predominant user interface of the future?

I would say there are two major elements that have been key to our success. The first is, as I mentioned before, that edge ML requires tight coupling between the hardware and the algorithms. From the very beginning at Syntiant, we’ve had our silicon designers and our modelers working closely together. That shows up in our office arrangement, with hardware and software groups intermingled; in code and design reviews; really, all across the company. And I think that’s paid off in outcomes. We see how easy it is to map a given algorithm to our hardware, because the hardware was designed to do all the hard work of coordinating memory access in a way that’s optimized for exactly the types of computation we see in ML workloads. And for the same reason, we see the benefits of that approach in power and performance.

The second big piece is that we realized that deep learning is still such a new field that the expertise required to deliver production-grade solutions is still very rare. It’s easy enough to download an MNIST or CIFAR demo, train it up and you think, “I’ve got this figured out!” But when you deploy a device to millions of people who interact with it on a daily basis, the job becomes much harder. You need to acquire data, validate it, debug models, and it’s a big job. We knew that for most customers, we couldn’t just toss a piece of silicon over the fence and leave the rest to them. That led us to put a lot of effort into building a complete pipeline addressing the data tasks, training, and evaluation, so we can provide a complete solution to customers who don’t have a ton of ML expertise in house.

Q. What in particular makes edge processing difficult?

On the hardware side, the big challenges are power and cost. Whether you’re talking about a watch, an earbud, or a phone, consumers have some pretty hard requirements for how long a battery needs to last – generally a day – and how much they will pay for something. And on the modeling side, edge devices find themselves in a tremendously diverse set of environments, so you need a voice assistant to recognize you not just in the kitchen or in the car, but on a factory floor, at a football game, and everywhere else you can imagine going.

Then those three things push against each other like the classical balloon analogy. If you push down cost by choosing a lower-end processor, it may not have the throughput to run the model quickly, so you run at a lower frame rate, under-sampling the input signal, and you miss events. Or you find a model that works well, and you run it fast enough, but then the power required to run it limits battery life. This tradeoff is especially difficult for features that are always on, like a wakeword detector, or person detection in a security camera. At Syntiant, we had to address all of these issues simultaneously, which is why it was so important to have all of our teams tightly connected, work through the use cases, and know how each piece affected all the other pieces.
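The arithmetic behind that always-on tradeoff is easy to sketch. Every number below is an illustrative assumption for a hypothetical earbud, not a Syntiant specification or a figure from the article:

```python
# Back-of-envelope: how an always-on detector's current draw limits battery life.
# All values are hypothetical, for illustration only.

BATTERY_MAH = 100.0      # assumed earbud battery capacity, milliamp-hours
SYSTEM_IDLE_UA = 50.0    # assumed draw of everything else in deep sleep, microamps

def battery_life_hours(detector_ua):
    """Hours of battery life with an always-on detector drawing detector_ua microamps."""
    total_ma = (SYSTEM_IDLE_UA + detector_ua) / 1000.0  # convert uA to mA
    return BATTERY_MAH / total_ma

# A general-purpose MCU running a wake-word model continuously might draw a
# few milliamps; a dedicated low-power accelerator might draw well under one.
mcu_hours = battery_life_hours(5000.0)  # 5 mA: under a day
ndp_hours = battery_life_hours(150.0)   # 150 uA: roughly 500 hours
```

The point of the sketch is the ratio, not the absolute numbers: when the always-on feature dominates the power budget, cutting its draw by an order of magnitude translates almost directly into an order of magnitude more battery life.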


Having done that work, the result is that you get the power of modern ML in tiny devices with almost no impact on the battery life. And the possibilities, especially for voice interfaces, are very exciting. We’ve all grown accustomed to interacting with our phones by voice, and we’ve seen how often we want to do something but don’t have a free hand available for a tactile interface.

Syntiant’s technology is making it possible to bring that experience to smaller and cheaper devices with all of the processing happening locally. So many of the devices we use have useful information they can’t share with us because the interface would be too expensive. Imagine being able to say “TV remote, where are you?” or “Smoke alarm, why are you beeping?” and getting a clear and quick answer. We’ve forgotten that some annoying things we’ve gotten so used to can be fixed. And of course you don’t want all of the cost and the privacy concerns associated with sending all of that information to the cloud.

So we’re focused on putting that level of intelligence right in the device. To deliver that, we need all of these pieces to come together: the data pipeline, the models, and the hardware. Conventional general-purpose processors don’t have the efficiency to run strong models within the constraints that edge devices have. With our new architecture, powerful machine learning can be deployed practically anywhere for the first time.
