Prime Video's work on 3-D scene reconstruction, image representation

CVPR papers examine recovering 3-D information from camera movement and learning general representations from weakly annotated data.

At this year’s Conference on Computer Vision and Pattern Recognition (CVPR), Prime Video is presenting a pair of papers that indicate the range of problems we work on.

In one paper, “Depth-guided sparse structure-from-motion for movies and TV shows”, we present a method for determining the camera movement and 3-D geometry of scenes depicted in videos. An important application of this work is to enable the accurate insertion of digital objects into already recorded videos. Our approach, which leverages off-the-shelf depth estimators to enhance the standard geometric-optimization approach, results in improvements of 10% to 30% on six different performance measures, relative to the best-performing prior technique.

The Prime Video structure-from-motion system at work. At top is the input video. At lower left is the video with keypoints (colored circles) added. The keypoints are tracked accurately from frame to frame, and their color indicates their depth, as estimated by a machine learning model. At lower right is the 3-D model of the keypoints (whose rotation, to demonstrate the 3-D structure, is not synchronized with the video).

In the other paper, “Robust cross-modal representation learning with progressive self-distillation,” we expand on the CLIP method of using paired images and texts found online to train a model that produces image and text representations useful for downstream tasks, such as image classification or text-based image retrieval.

Where CLIP enforces a hard alignment between Web-crawled images and their associated texts, our method is more flexible, allowing for partial correspondences between a given image and texts associated with other images. We also use a self-distillation technique, in which our model progressively creates some of its own training targets, to steadily refine its representations.

In two different image classification settings, our method outperforms CLIP across the board, by significant margins — 30% to 90% — on some datasets. Our method also consistently outperforms its CLIP counterpart on the tasks of image-based text retrieval and text-based image retrieval.

Structure-from-motion

Structure-from-motion is the problem of determining the 3-D structure of a scene from parallax — the relative displacement of objects in the scene as the camera moves. There are robust solutions for videos with large camera movements, but they don’t work as well for feature films and TV shows, where the camera movements tend to be more restrained.

The standard approach to determining structure from motion uses geometric optimization. First, the method estimates the location of a set of 3-D points in the scene, and then, based on that estimation, it re-projects them onto a 2-D image corresponding to each camera location. The optimization procedure minimizes the distance between points in the original 2-D image and the corresponding points of the 2-D projection.
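
In textbook notation (a standard formulation, not taken from the paper itself), this is the familiar bundle-adjustment objective: with 3-D points X_i, per-frame camera rotations R_j and translations t_j, intrinsics K, and observed keypoints x_ij, the optimizer solves

```latex
\min_{\{X_i\},\,\{R_j,\,t_j\}} \; \sum_{i,j} \rho\Big( \big\| \, x_{ij} - \pi\big( K (R_j X_i + t_j) \big) \big\|^{2} \Big)
```

where \pi denotes perspective projection (division by the point's depth) and \rho is an optional robust loss that damps the influence of outlier correspondences.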

We improve on this approach by introducing depth estimates performed by off-the-shelf, pretrained models. Instead of minimizing only the difference between the original and the projected 2-D points, our approach minimizes both the reprojection error of the 2-D points and the depth measurement error, relative to the output of the depth estimation model.
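
The sketch below is our own illustration of the idea, not the paper's code; the function and variable names are placeholders. The optimization residual simply gains a second term that compares each reconstructed point's depth in the camera frame with the depth sampled from the pretrained estimator:

```python
import numpy as np

def joint_residual(points_3d, R, t, K, keypoints_2d, est_depths, depth_weight=1.0):
    """Reprojection residual plus a depth-consistency residual.

    points_3d:    (N, 3) estimated 3-D points in world coordinates
    R, t:         camera rotation (3, 3) and translation (3,)
    K:            (3, 3) camera intrinsics
    keypoints_2d: (N, 2) observed keypoint locations in the image
    est_depths:   (N,) depths sampled from an off-the-shelf depth estimator
    """
    cam_points = points_3d @ R.T + t            # world -> camera frame
    z = cam_points[:, 2]                        # depth of each point
    proj = cam_points @ K.T                     # apply intrinsics
    proj_2d = proj[:, :2] / proj[:, 2:3]        # perspective division

    reproj_err = np.linalg.norm(proj_2d - keypoints_2d, axis=1)  # 2-D error
    depth_err = np.abs(z - est_depths)                           # depth error
    return reproj_err + depth_weight * depth_err
```

In the full method, residuals like these would be aggregated over all keypoints and frames inside the bundle-adjustment solver; the relative weighting of the two terms is a tuning choice.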

Our approach jointly minimizes 2-D reprojection error and depth estimate error.

Our approach begins by using a standard method to detect image keypoints — salient points in the image, usually at object corners and other edge intersections — and identify their correspondences across successive frames of video. Then, through bilinear interpolation, we use the depth map obtained from an off-the-shelf depth estimator to determine the ground-truth keypoint depths. We use the depth information not only during optimization but also during the initialization stage of the process, when we produce our initial estimates of 3-D scene structure and relative camera pose.
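
Keypoints land at sub-pixel coordinates, so the depth map has to be sampled between grid cells. Below is a minimal sketch of bilinear sampling (our own illustration, not the paper's code):

```python
import numpy as np

def sample_depth_bilinear(depth_map, x, y):
    """Bilinearly interpolate a dense depth map at sub-pixel location (x, y).

    depth_map: (H, W) array from an off-the-shelf depth estimator
    x, y:      keypoint coordinates (column, row), possibly fractional
    """
    h, w = depth_map.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0

    top = (1 - dx) * depth_map[y0, x0] + dx * depth_map[y0, x1]
    bottom = (1 - dx) * depth_map[y1, x0] + dx * depth_map[y1, x1]
    return (1 - dy) * top + dy * bottom
```

A keypoint detected at, say, (312.4, 87.9) thus receives a depth blended from its four neighboring pixels in the estimator's output.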

The Prime Video structure-from-motion technique identifies keypoints in input video, finds their correspondences across frames, and then estimates their depth using bilinear interpolation on a dense depth map.

We experimented with several different depth estimation models and found that the results of our approach were essentially the same with all of them. And, in all cases, our approach improved substantially on the state of the art.

Cross-modal representations

In natural-language processing, the best-performing models in recent years have been built on top of language models that learn generic linguistic representations from huge corpora of unannotated public texts. The language models can then be fine-tuned for specific tasks with minimal additional data.

CLIP (contrastive language-image pretraining) seeks to do something similar for computer vision, learning generic visual representations from images harvested from the Web and their associated texts.
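
To make the payoff concrete: once image and text encoders share a representation space, classification can be posed as matching an image against textual class descriptions, with no task-specific training. The sketch below assumes generic encode_image and encode_text functions, which are placeholders for illustration, not a specific API:

```python
import numpy as np

def zero_shot_classify(image, class_names, encode_image, encode_text):
    """Pick the class whose text embedding is closest to the image embedding."""
    prompts = [f"a photo of a {name}" for name in class_names]
    img = encode_image(image)                            # (D,) image embedding
    txt = np.stack([encode_text(p) for p in prompts])    # (C, D) text embeddings

    img = img / np.linalg.norm(img)                              # unit vectors, so the
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)       # dot product is cosine similarity
    scores = txt @ img                                   # (C,) similarities
    return class_names[int(np.argmax(scores))]
```

Image-based text retrieval and text-based image retrieval work the same way, with the roles of the two encoders swapped.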

Like many such weakly supervised models, CLIP is trained through contrastive learning. Intuitively, for each training image, the model is fed two texts: one, the positive training example, is the text associated with the image online; the other text, the negative example, is randomly chosen. CLIP learns a data representation that pulls the image and the positive text together in the representation space and pushes the image and the negative text apart.
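
In practice this intuition is implemented over a batch: each image's own caption is the positive, and the other captions in the batch serve as negatives, via a symmetric cross-entropy over an image-text similarity matrix. A minimal sketch (illustrative only, not the paper's code):

```python
import numpy as np

def clip_style_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired image/text embeddings.

    img_emb, txt_emb: (B, D) embeddings; row i of each comes from the same pair.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix

    def xent(l):
        # Cross-entropy with the matching pair (the diagonal) as the hard target.
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    return 0.5 * (xent(logits) + xent(logits.T))   # image->text and text->image
```

Minimizing this loss pulls each image toward its own caption in the representation space and pushes it away from every other caption in the batch.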

Although CLIP has yielded impressive results on downstream computer vision tasks, its training approach has two drawbacks. First, the web-harvested data is noisy: the text associated with an image may in fact be semantically unrelated to it. Second, the text randomly selected as a negative example may in fact be semantically related to the image. CLIP's training objective can thus steer the model toward erroneous associations and away from correct ones.

Our method addresses both of these problems. Rather than learn a hard alignment between image and text, we learn a soft alignment, which gives the resulting model more interpretive flexibility.

For example, in one of our experiments, both the CLIP baseline and our model were trained on datasets that included images of goldfish. When presented with an image of a stained-glass window depicting a goldfish — a type of image not included in the training data — CLIP guessed that it was a guinea pig or maybe a beer glass, while our model guessed that it was a goldfish or possibly a clown fish. That is, our model learned a representation general enough to accommodate the stained-glass artist's stylized rendering.

CLIP’s contrastive-learning procedure enforces connections between web-harvested images and their associated texts (green lines, at left) while dissociating them from other images’ texts (red lines). Our approach instead privileges associated texts but also learns softer, probabilistic alignments with other images’ texts (dotted blue lines).

Our model learns its soft alignments through a self-distillation process. First, the model learns an initial data representation through the same contrastive-loss function that CLIP uses.

Over the course of training, however, we use the model itself to make predictions about the training examples and use those predictions as additional training targets. At first, the loss function gives these self-predictions little weight, but it gradually increases the weight as training progresses.
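
A minimal sketch of that schedule follows (our own illustration; the paper's exact weighting and target construction differ in detail): the hard one-to-one targets are gradually mixed with the model's own softened predictions.

```python
import numpy as np

def blended_targets(logits, step, total_steps, max_alpha=0.5):
    """Mix hard diagonal targets with the model's own soft predictions.

    logits:      (B, B) image-text similarity scores from the current model
    step:        current training step
    total_steps: length of training, used to ramp the self-distillation weight
    """
    alpha = max_alpha * min(1.0, step / total_steps)    # grows as training progresses

    hard = np.eye(logits.shape[0])                      # one-to-one alignment
    l = logits - logits.max(axis=1, keepdims=True)
    soft = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)  # model's own guesses

    return (1 - alpha) * hard + alpha * soft            # soft alignment targets
```

These blended targets then stand in for the hard diagonal targets in the contrastive cross-entropy, so the penalty for matching an image to another image's text shrinks when the model itself judges the pair to be related.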

The idea is that, over time, the model learns more reliable correlations between training images and texts. Self-distillation reinforces those correlations, so the model isn’t encouraged to break semantic connections between images and texts that may very well be present in the data. Similarly, over time, the model learns to give less weight to spurious connections between images and the texts initially associated with them.

The great virtue of general representation models like ours and CLIP is that they can be applied to a wide variety of computer vision problems. So the accuracy improvements that our approach affords should pay dividends for Prime Video customers in a range of contexts over the next few years.

Amazon is the 4th most popular site in the US. Our product search engine, one of the most heavily used services in the world, indexes billions of products and serves hundreds of millions of customers world-wide. We are working on a new initiative to transform our search engine into a shopping engine that assists customers with their shopping missions. We look at all aspects of search CX, query understanding, Ranking, Indexing and ask how we can make big step improvements by applying advanced Machine Learning (ML) and Deep Learning (DL) techniques. We’re seeking a thought leader to direct science initiatives for the Search Relevance and Ranking at Amazon. This person will also be a deep learning practitioner/thinker and guide the research in these three areas. They’ll also have the ability to drive cutting edge, product oriented research and should have a notable publication record. This intellectual thought leader will help enhance the science in addition to developing the thinking of our team. This leader will direct and shape the science philosophy, planning and strategy for the team, as we explore multi-modal, multi lingual search through the use of deep learning . We’re seeking an individual that can enhance the science thinking of our team: The org is made of 60+ applied scientists, (2 Principal scientists and 5 Senior ASMs). This person will lead and shape the science philosophy, planning and strategy for the team, as we push into Deep Learning to solve problems like cold start, discovery and personalization in the Search domain. Joining this team, you’ll experience the benefits of working in a dynamic, entrepreneurial environment, while leveraging the resources of Amazon [Earth's most customer-centric internet company]. We provide a highly customer-centric, team-oriented environment in our offices located in Palo Alto, California.