Optimizing neural networks for special-purpose hardware

Curating the neural-architecture search space and taking advantage of human intuition reduces latency on real-world applications by up to 55%.

As neural networks grow in size, deploying them on-device increasingly requires special-purpose hardware that parallelizes common operations. But for maximum efficiency, it’s not enough to optimize the hardware for the networks; the networks should be optimized for the hardware, too.

The first step in training a neural network to solve a problem is usually the selection of an architecture: a specification of the number of computational nodes in the network and the connections between them. Architectural decisions are generally based on historical precedent, intuition, and plenty of trial and error.

The standard way to optimize a neural network is through neural-architecture search (NAS), where the goal is to minimize both the size of the network and the number of floating-point operations (FLOPS) it performs. But this approach doesn’t work with neural chips, which can often execute easily parallelized but higher-FLOPS tasks more rapidly than they can harder-to-parallelize but lower-FLOPS tasks.

Minimizing latency is a more complicated optimization objective than minimizing FLOPS, so in the Amazon Devices Hardware group, we’ve developed a number of strategies for adapting NAS to the problem of optimizing network architectures for Amazon’s new Neural Engine family of accelerators. Those strategies involve curating the architecture search space to, for instance, reduce the chances of getting stuck in local minima. We’ve also found that combining a little human intuition with the results of NAS for particular tasks can help us generalize to new tasks more reliably and efficiently.

In experiments involving several different machine learning tasks, we’ve found that our NAS strategies can reduce latencies by as much as 55%.

Varieties of neural-architecture search

NAS needs three things: a definition of the search space, which specifies the building blocks available to construct a network; a cost model, which is a function of the network's accuracy, latency, and memory; and an optimization algorithm. We use a performance estimator to measure latency and memory footprint, but to measure accuracy, we must train the network. This is a major bottleneck, as training a single network can take days. Sampling thousands of architectures would take thousands of GPU days, which is clearly neither practical nor environmentally sustainable.
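To make the cost model concrete, here is a minimal sketch of the kind of scalarized cost a NAS framework might minimize; the weights and budget values are hypothetical placeholders, not the settings we use in production.

```python
# A minimal sketch of a NAS cost model. The weights and budgets below are
# illustrative placeholders; lower cost is better.

def nas_cost(accuracy, latency_ms, memory_mb,
             latency_budget_ms=10.0, memory_budget_mb=8.0,
             w_acc=1.0, w_lat=0.5, w_mem=0.5):
    """Reward accuracy and penalize architectures that exceed the latency
    or memory budget of the target accelerator."""
    cost = -w_acc * accuracy
    cost += w_lat * max(0.0, latency_ms / latency_budget_ms - 1.0)
    cost += w_mem * max(0.0, memory_mb / memory_budget_mb - 1.0)
    return cost
```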

There are three categories of NAS algorithm, which require networks to be trained different numbers of times: multishot, single-shot, and zero-shot.


Multishot methods sample a cohort of architectures in each iteration. Each network is trained and evaluated for accuracy and performance, and the next set of architectures is sampled based on their cost. Evolutionary or reinforcement-learning-based algorithms are generally used for multishot methods.
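As a sketch, one iteration of such an evolutionary search might look like the following; `cost_fn` and `mutate` are placeholders for the (expensive) train-and-evaluate cost and the architecture mutation rules, neither of which is spelled out in this post.

```python
import random

def evolution_step(population, cost_fn, mutate, sample_size=8, n_parents=4):
    """One multishot NAS iteration: sample a cohort, score each network
    with the cost function (which trains and evaluates it), keep the
    cheapest architectures, and mutate them to form the next cohort."""
    cohort = random.sample(population, min(sample_size, len(population)))
    ranked = sorted(cohort, key=cost_fn)            # lower cost is better
    parents = ranked[:n_parents]
    children = [mutate(arch) for arch in parents]   # next candidates
    return parents + children
```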

Single-shot methods start with a large network called the supernet, which has multiple possible subgraphs. During training, the subgraphs start converging to a single, small network. Single-shot methods are designed to be trained only once, but their training takes much longer than that of a single network in multishot methods.

Zero-shot methods work like multishot methods, with the key difference that the network is never trained. As a proxy for accuracy, we use the network's trainability score, which is computed from the network's topology, nonlinearity, and operations. Zero-shot methods are the fastest to converge, because calculating the score is computationally very cheap. The downside is that the trainability score may not correlate well with model accuracy.
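This post doesn't give the exact trainability score, but one well-known zero-shot proxy from the NAS literature counts the distinct ReLU activation patterns a random minibatch induces (the NASWOT score). The PyTorch sketch below illustrates that flavor of proxy; it is not our exact scoring function.

```python
import torch
import torch.nn as nn

def trainability_proxy(model: nn.Module, batch: torch.Tensor) -> int:
    """Zero-shot proxy in the spirit of NASWOT: count distinct ReLU
    activation sign patterns over one minibatch, with no training at all.
    More distinct patterns suggests a more expressive network."""
    patterns = set()

    def record(_module, _inputs, output):
        for row in (output.detach().flatten(1) > 0):  # one pattern per example
            patterns.add(tuple(row.tolist()))

    handles = [m.register_forward_hook(record)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(batch)
    for handle in handles:
        handle.remove()
    return len(patterns)
```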

Search space curation

The NAS cost function can be visualized as a landscape, with each point representing a potential architecture. A cost function based on FLOPS changes monotonically with factors such as layer sizes or channel counts: that is, if you find a direction across the terrain in which the cost is going down, you can be sure that continuing in that direction will not cause the cost to go up.

However, the inclusion of accelerator-aware constraints disrupts this monotonicity, introducing points at which the cost abruptly switches from going down to going up. The result is a more complex, rocky landscape.


To address this issue, we reduced the number of options in the search space. We were exploring convolutional architectures, meaning that the inputs are decomposed into several different components, each of which has its own channel through the network. The data in each channel, in turn, is filtered in several different ways; each filter involves a different data convolution.

Previously, we would have explored the number of channels (known as the channel size) at increments of one; instead, we considered only a handful of channel sizes, limiting the options to values favorable to the parallelism factor of the Neural Engine. The parallelism factor is a count of operations, such as dot products, that can be performed in parallel. In some cases, we even added a "depth multiplier" ratio to the search space, a single value that scales the number of channels across the entire model.
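In code, that curation might look something like the sketch below; the parallelism factor of 16 and the rounding rule are illustrative assumptions, not the Neural Engine's actual parameters.

```python
PARALLELISM_FACTOR = 16  # hypothetical; set by the accelerator design

def curated_channel_options(min_ch, max_ch, factor=PARALLELISM_FACTOR):
    """Instead of stepping channel sizes by one, offer only multiples of
    the hardware parallelism factor."""
    return [c for c in range(factor, max_ch + 1, factor) if c >= min_ch]

def apply_depth_multiplier(base_channels, multiplier, factor=PARALLELISM_FACTOR):
    """Scale every layer's channel count by one global ratio, rounding to
    the nearest multiple of the parallelism factor (never below one)."""
    return [max(factor, factor * round(c * multiplier / factor))
            for c in base_channels]

# e.g., curated_channel_options(32, 128) -> [32, 48, 64, 80, 96, 112, 128]
```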

These improvements can be visualized as taking fewer, larger steps across a smoother terrain, rather than trying to navigate the rocky landscape that resulted from the inclusion of accelerator-aware performance in the cost function. During the optimization process, they resulted in a faster convergence rate because of the reduced number of options and in improved stability and reliability thanks to the monotonic nature of the curated search space.

[Figure: Illustration of how the cost landscape (green) changes from smooth (left) to rocky (center and right) when a cost function based on Neural Engine performance replaces one based on FLOPS. Curation (right) reduces the discrete search space (black dots) and ensures that points are far apart. The trajectory of a search algorithm (blue arrows) shows how curation (right) ensures that with each step in a search, the cost is monotonically decreasing.]

One key detail in our implementation is the performance estimator. Instead of deploying an architecture on real hardware or an emulator to obtain performance metrics, we estimated them using a machine learning regression model trained on measurements of different operators or subgraphs.

At inference time, the estimator would decompose the queried architecture into subgraphs and use the regression model to estimate the performance of each. Then it would accumulate these estimates to give the model-level performance. This regressor-based design simplified our NAS framework, as it no longer required compilation, inference, or hardware. This technique enables us to test accelerators in the design phase, before we’ve developed custom compilers and hardware emulators for them.
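A minimal sketch of that estimator design, assuming a scikit-learn-style regression model and hypothetical `decompose` and `featurize` helpers:

```python
def estimate_performance(architecture, decompose, featurize, regressor):
    """Estimate model-level latency (or memory) with no hardware in the
    loop: split the architecture into subgraphs, featurize each one
    (operator type, tensor shapes, channel counts, ...), predict each
    subgraph's cost with a regressor trained on measured subgraphs, and
    accumulate the estimates."""
    subgraphs = decompose(architecture)
    features = [featurize(subgraph) for subgraph in subgraphs]
    return sum(float(estimate) for estimate in regressor.predict(features))
```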

Productizing NAS with expert-in-the-loop

Curating the search space improves convergence rate, stability, and reliability, but transferability to new use cases is not straightforward. NAS results for a detector model, for instance, may not be easy to transfer to a classification model. On the other hand, running NAS from scratch for each new dataset may not be feasible, due to time constraints. In these situations, we found that combining NAS results and human expertise was the fastest approach.

[Figure: The initial channel reduction step (1x1 conv.) in the inverted-bottleneck (IBN) block at left is fused with the channel expansion step (KxK depthwise conv.) in the fused IBN at right. This proved to be a common subgraph modification across datasets.]

When we performed NAS on different datasets, we saw common patterns, such as fusing convolution layers with the convolution layers that precede them, reducing the number of channels, and aligning channel counts with the hardware parallelism factor.

In particular, fusing convolution layers in inverted bottleneck (IBN) blocks contributed most to boosting efficiency. With just these modifications, we observed latency reductions of up to 50%, whereas a fully converged NAS model would yield a slightly better 53% reduction.
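To make the fusion concrete, here is a PyTorch sketch of a standard IBN block alongside a fused variant in which a single KxK standard convolution replaces the initial 1x1 convolution and the KxK depthwise convolution. Normalization layers and residual connections are omitted, and the expansion ratio is a typical value, not necessarily ours.

```python
import torch.nn as nn

def ibn_block(c_in, c_out, expand=4, k=3):
    """Standard inverted bottleneck: 1x1 conv -> KxK depthwise conv -> 1x1 conv."""
    c_mid = c_in * expand
    return nn.Sequential(
        nn.Conv2d(c_in, c_mid, 1), nn.ReLU(),
        nn.Conv2d(c_mid, c_mid, k, padding=k // 2, groups=c_mid), nn.ReLU(),
        nn.Conv2d(c_mid, c_out, 1),
    )

def fused_ibn_block(c_in, c_out, expand=4, k=3):
    """Fused variant: one KxK standard conv replaces the 1x1 + depthwise
    pair. More FLOPS, but friendlier to the accelerator's parallelism."""
    c_mid = c_in * expand
    return nn.Sequential(
        nn.Conv2d(c_in, c_mid, k, padding=k // 2), nn.ReLU(),
        nn.Conv2d(c_mid, c_out, 1),
    )
```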

In situations where running NAS from scratch is not feasible, a human expert can rely on mathematical intuition and observations of the results of NAS on similar datasets to build the required model architecture.

Results and product impact

We applied this technique to multiple products in the Amazon Devices portfolio, ranging from Echo Show and Blink home security products to the latest Astro, the in-home consumer robot.

1. Reduced detection latency by half on Echo Show

Echo Show runs a model to detect human presence and locate the detected person in a room. The original model used IBN blocks. We used accelerator-aware NAS to reduce the latency of this model by 53%.

[Figure: Schematic representation of human-presence detection.]

We performed a search for depth multipliers (ratios that scale the number of channels across the model) and for opportunities to replace IBN blocks with fused-IBN blocks. The requirement was to maintain the mean average precision (mAP) of the original model while improving the latency. Our V3 model improved the latency by more than 53% (i.e., 2.2x faster) while keeping the mAP score the same as the baseline's.

Latency results for the original model and three models found through NAS.

Model | Fused-IBN search | Depth multiplier search | Latency reduction (%)
Baseline | No | No | baseline
V1 | No | Yes | 14%
V2 | Yes | No | 35%
V3 | Yes | Yes | 53%

After performing NAS, we found that not every IBN fusion improves latency and accuracy. The later layers are larger, and replacing them with fused layers hurt performance. For the layers where fusion was selected, the FLOPS count, as expected, increased, but the latency did not.

2. Model fitting within the tight memory budget of the Blink Floodlight Camera

Blink cameras use a classification model for security assistance. Our goal was to fit the model parameters and peak activation memory within a tight memory budget. In this case, we combined NAS techniques with expert-in-the-loop fine-tuning. The NAS result on the classification dataset provided intuition about which operator and subgraph changes could extract benefits from the accelerator design.

[Figure: Schematic representation of the classification model output.]

The expert recommendations were to replace the depthwise convolutions with standard convolutions and to reduce the number of channels, making channel counts uniform across the model and, preferably, multiples of the parallelism factor. With these changes, model developers were able to reduce both the model size and the intermediate memory usage by 47% and fit the model within the required budget.
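A sketch of what those recommendations could look like in PyTorch; the parallelism factor and the decision to reinitialize the swapped layers are assumptions for illustration.

```python
import torch.nn as nn

PARALLELISM_FACTOR = 16  # hypothetical value for illustration

def round_up_channels(channels, factor=PARALLELISM_FACTOR):
    """Round a channel count up to the nearest multiple of the parallelism factor."""
    return ((channels + factor - 1) // factor) * factor

def replace_depthwise_convs(module):
    """Recursively swap depthwise convolutions (groups == in_channels) for
    standard convolutions, per the expert recommendation. The new layers
    are freshly initialized, so the model must be retrained afterward."""
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d) and child.groups == child.in_channels > 1:
            setattr(module, name, nn.Conv2d(
                child.in_channels, child.out_channels, child.kernel_size,
                stride=child.stride, padding=child.padding,
                bias=child.bias is not None))
        else:
            replace_depthwise_convs(child)
```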

3. Fast semantic segmentation for robotics

In the context of robotics, semantic segmentation is used to understand the objects and scenes the robot is interacting with. For example, it can enable the robot to identify chairs, tables, or other objects in the environment, allowing it to navigate and interact with its surroundings more effectively. Our goal for this model was to reduce latency by half. Our starting point was a semantic-segmentation model that was optimized to run on a CPU.

[Figure: Left: original image of a room at night; center: semantic-segmentation image; right: semantic segmentation overlaid on the original image.]

For this model, we searched over channel sizes, fusion opportunities, and output and input dimensions. We used the multishot method with an evolutionary search algorithm. NAS gave us multiple candidates with different performance characteristics; the best candidate reduced the latency by half.

Latency improvement for different architectures found through NAS.

Model | Latency reduction (%)
Original | baseline
Model A | 27%
Model B | 37%
Model C | 38%
Model D | 41%
Model E | 51%

4. User privacy with on-device inference

Amazon's Neural Engine supports large-model inference on-device, so we can process microphone and video feeds without sending data to the cloud. For example, the Amazon Neural Engine has enabled Alexa to perform automatic speech recognition on-device. On-device processing also provides a better user experience because the inference pipeline is not affected by intermittent connection issues. In our NAS work, we discovered that even larger, more accurate models can now fit on-device with no hit on latency.

Making edge AI sustainable

We mentioned earlier that multishot NAS with full training can take thousands of GPU days. However, with some of the techniques described in this blog, we were able to create efficient architectures in a substantially shorter amount of time, making NAS much more scalable and sustainable. But our sustainability efforts don't end there.


Because of its parallelism and mixed-precision features, the Neural Engine is more power efficient than a generic CPU. For a million average users, the difference is on the order of millions of kilowatt-hours per year, equivalent to the annual emissions of 200 gasoline-powered passenger vehicles or the energy consumption of a hundred average US households.

When we optimize models through NAS, we increase the device's capability to run more neural-network models simultaneously. This allows us to use smaller application processors and, in some cases, fewer of them. By reducing the hardware footprint in this way, we are further reducing the carbon footprint of our devices.

Future work

We have found that curation requires an expert who understands the hardware design well, an approach that may not scale to future generations of more complex hardware. We have also found that when time is tight, having an expert in the loop is still faster than running NAS from scratch. For these reasons, we are continuing to investigate how accelerator-aware NAS algorithms can handle large search spaces. We are also working to improve the search algorithm's efficiency and effectiveness by exploring how the three categories of algorithms can be combined, and we plan to explore model optimization that introduces sparsity through pruning and clustering. Stay tuned!

Acknowledgements: Manasa Manohara, Lingchuan Meng, Rahul Bakshi, Varada Gopalakrishnan, Lindo St. Angel
