Shrinking machine learning models for offline use

"Perfect hashing" is among the techniques that reduce the memory footprints of machine learning models by 94%.

Last week, the Alexa Auto team announced the release of its new Alexa Auto Software Development Kit (SDK), enabling developers to bring Alexa functionality to in-vehicle infotainment systems.

Image: SYNC 3 and Amazon Echo. Ford is working to link home automation devices like Amazon Echo and Wink with its vehicles through Ford SYNC®, so that consumers can control lights, thermostats, and other home systems from the car and interact with the vehicle, including starting and unlocking it, from home.

The initial release of the SDK assumes that automotive systems will have access to the cloud, where the machine-learning models that power Alexa currently reside. But in the future, we would like Alexa-enabled vehicles — and other mobile devices — to have recourse to some core functions even when they’re offline. That will mean drastically reducing the size of the underlying machine-learning models, so they can fit in local memory.

At the same time, third-party developers have created more than 45,000 Alexa skills, which expand on Alexa’s native capabilities, and that number is increasing daily. Even in the cloud, third-party skills are loaded into memory only when explicitly invoked by a customer request. Shrinking the underlying models would reduce load time, ensuring that Alexa customers continue to experience millisecond response times.

At this year’s Interspeech, my colleagues and I will present a new technique for compressing machine-learning models that reduces their memory footprints by 94% while leaving their performance almost unchanged. We report our results in a paper titled “Statistical model compression for small-footprint natural language understanding.”

Quantization

Alexa’s natural-language-understanding systems, which interpret free-form utterances, use several different types of machine-learning (ML) models, but they all share some common traits. One is that they learn to extract “features” — or strings of text with particular predictive value — from input utterances. An ML model trained to handle music requests, for instance, will probably become sensitized to text strings like “the Beatles”, “Elton John”, “Whitney Houston”, “Adele”, and so on. Alexa’s ML models frequently have millions of features.

Another common trait is that each feature has a set of associated “weights,” which determine how large a role it should play in different types of computation. The need to store multiple weights for millions of features is what makes ML models so memory intensive.
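To get a rough sense of scale (illustrative numbers, not figures from the paper): a model with 10 million features and eight 4-byte floating-point weights per feature would need about 10,000,000 × 8 × 4 bytes, or roughly 320 MB, for the weights alone.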

Our first technique for compressing an ML model is to quantize its weights. We take the total range of weights — say, -100 to 100 — and divide it into even intervals — say, -100 to -90, -90 to -80, and so on. Then we simply round each weight off to the nearest boundary value for its interval. In practice, we use 256 intervals, which allows us to represent every weight in the model with a single byte of data, with minimal effect on the model’s accuracy. This approach has the added benefit of automatically rounding low weights to zero, so they can be discarded.
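For illustration, here is a minimal sketch of this kind of uniform quantization in Python; the function name, the use of NumPy, and the exact placement of the intervals are our assumptions, not details from the paper.

```python
import numpy as np

def quantize_weights(weights, num_levels=256):
    """Uniformly quantize weights to num_levels values, one byte per weight.
    Illustrative sketch; the paper's exact quantization scheme may differ."""
    lo, hi = float(weights.min()), float(weights.max())
    step = (hi - lo) / (num_levels - 1)
    codes = np.round((weights - lo) / step).astype(np.uint8)  # 1-byte codes
    dequantized = lo + codes.astype(np.float32) * step        # values used at inference
    return codes, dequantized

weights = np.array([-97.3, -0.004, 0.002, 41.8, 99.9], dtype=np.float32)
codes, approx = quantize_weights(weights)
# Weights that fall in the same interval (e.g., -0.004 and 0.002) map to the
# same code; codes that dequantize to values near zero can be pruned entirely.
```

Because every code fits in a single unsigned byte, per-weight storage drops from four bytes to one.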

Perfect hashing

Our other compression technique is more elegant. If an Alexa customer says, “Alexa, play ‘Yesterday,’ by the Beatles,” we want our system to pull up the weights associated with the feature “the Beatles” — not the weights associated with “Adele”, “Elton John”, and the rest. This requires a means of mapping particular features to the memory locations of the corresponding weights.

The standard way to perform such mappings is through hashing. A hash function is a mathematical function that takes arbitrary inputs and scrambles them up — hashes them — in such a way that the outputs (1) are of fixed size and (2) bear no predictable relationship to the inputs. If the output size is fixed at 16 bits, for instance, there are 65,536 possible hash values, but “Hank Williams” might map to value 1, while “Hank Williams, Jr.” maps to value 65,000.
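As a toy illustration of fixed-size hashing (this is not the hash function the production system uses), truncating any standard hash to 16 bits yields one of 65,536 possible values:

```python
import hashlib

def hash16(s: str) -> int:
    """Map a string to one of 65,536 values by truncating a standard hash."""
    return int.from_bytes(hashlib.sha256(s.encode("utf-8")).digest()[:2], "big")

print(hash16("Hank Williams"))       # some value in [0, 65535]
print(hash16("Hank Williams, Jr."))  # an unrelated value, despite the similar input
```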

Nonetheless, traditional hash functions sometimes produce collisions: Hank Williams, Jr. may not map to the same location as Hank Williams, but something totally arbitrary — the Bay City Rollers, say — might. In terms of runtime performance, this usually isn’t a big problem. If you hash the name “Hank Williams” and find two different sets of weights at the corresponding memory location, it doesn’t take that long to consult a metadata tag to determine which set of weights belongs to which artist.

In terms of memory footprint, however, this approach to collision resolution makes a substantial difference. After quantization, the weights themselves require just a few bytes of data; the metadata used to distinguish sets of weights could end up requiring more space in memory than the data it’s tagging.

We address this problem by using a more advanced hashing technique called perfect hashing, which maps a specific number of data items to the same number of memory slots but guarantees there will be no collisions. With perfect hashing, the system can simply hash a string of characters and pull up the corresponding weights — no metadata required.

Perfect-hashing algorithm
Our perfect-hashing algorithm relies on a family of conventional hash functions (h1, h2, etc.). For every input whose hash is collision-free, we toggle the corresponding 0 in an array to a 1; inputs that collide move on to the next function and a smaller array. The process repeats until every input has a unique hash.

To produce a perfect hash, we assume that we have access to a family of conventional hash functions, all of which produce random hashes. That is, each function in the family might hash “Hank Williams” to a different value, but that value tells you nothing about how the same function will hash any other string. In practice, we use the hash function MurmurHash, which can be seeded with a succession of different values.

Suppose that we have N input strings that we want to hash. We begin with an array of N 0’s. Then we apply our first hash function — call it Hash1 — to all N inputs. For every string that yields a unique hash value — no collisions — we change the corresponding 0 in the array to a 1.

Then we build a new array of 0’s, with entries for only the input strings that yielded collisions under Hash1. To those strings, we now apply a different hash function — say, Hash2 — and we again toggle the 0’s corresponding to collision-free hashes.

We repeat this process until every input string has a corresponding 1 in some array. Then we combine all the arrays into one giant array. The position of a 1 in the giant array indicates the unique memory location assigned to the corresponding input string.

Now, when the trained model receives an input, it applies Hash1 to each of the input’s feature strings and, if it finds a 1 in the first array, it goes to the associated address. If it finds a 0, it applies Hash2 and repeats the process.
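Here is a minimal, self-contained Python sketch of the whole scheme, construction and lookup. The helper names are ours, and a seeded blake2b stands in for the seeded MurmurHash the paper uses, so the example runs without extra dependencies; it illustrates the idea rather than reproducing the paper’s implementation.

```python
import hashlib

def seeded_hash(key: str, seed: int, size: int) -> int:
    """Seed-dependent hash into [0, size). A stand-in for seeded MurmurHash."""
    salt = seed.to_bytes(8, "little")
    digest = hashlib.blake2b(key.encode("utf-8"), salt=salt, digest_size=8).digest()
    return int.from_bytes(digest, "little") % size

def build_perfect_hash(keys):
    """Build per-level bit arrays; each key ends up owning exactly one 1."""
    levels = []
    remaining = list(keys)
    seed = 0
    while remaining:
        size = len(remaining)
        counts = [0] * size
        for key in remaining:
            counts[seeded_hash(key, seed, size)] += 1
        # A position gets a 1 only if exactly one key hashed to it (no collision).
        bits = [1 if c == 1 else 0 for c in counts]
        levels.append((seed, bits))
        # Keys that collided try again at the next level, with a new hash function.
        remaining = [k for k in remaining if counts[seeded_hash(k, seed, size)] > 1]
        seed += 1
    return levels

def lookup(key: str, levels) -> int:
    """Return the key's slot: the position of its 1 in the concatenated arrays."""
    offset = 0
    for seed, bits in levels:
        pos = seeded_hash(key, seed, len(bits))
        if bits[pos]:
            return offset + pos
        offset += len(bits)
    raise KeyError(key)  # only keys from the construction set are guaranteed a slot

features = ["the Beatles", "Adele", "Elton John", "Hank Williams", "Hank Williams, Jr."]
levels = build_perfect_hash(features)
slots = {f: lookup(f, levels) for f in features}  # distinct slots, no metadata needed
```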

Calling successive hash functions for some inputs does incur a slight performance penalty. But it’s a penalty that’s paid only where a conventional hash function would yield a collision, anyway. In our paper, we include both a theoretical analysis and experimental results that demonstrate that this penalty is almost negligible. And it’s certainly a small price to pay for the drastic reduction in memory footprint that the method affords.

Acknowledgments: Kanthashree Mysore Sathyendra, Stanislav Peshterliev
