Prime Video's work on 3-D scene reconstruction, image representation

{"value":"At this year’s Conference on Computer Vision and Pattern Recognition (++[CVPR](https://www.amazon.science/conferences-and-events/cvpr-2022)++), Prime Video is presenting a pair of papers that indicate the range of problems we work on.\n\nIn one paper, “++[Depth-guided sparse structure-from-motion for movies and TV shows](https://www.amazon.science/publications/depth-guided-sparse-structure-from-motion-for-movies-and-tv-shows)++”, we present a method for determining the camera movement and 3-D geometry of scenes depicted in videos. An important application of this work is to enable the accurate insertion of digital objects into already recorded videos. Our approach, which leverages off-the-shelf depth estimators to enhance the standard geometric-optimization approach, results in improvements of 10% to 30% on six different performance measures, relative to the best-performing prior technique.\n\n![下载.gif](https://dev-media.amazoncloud.cn/6c1dd8a9e5a44f82adc04e156a46172c_%E4%B8%8B%E8%BD%BD.gif)\n\nThe Prime Video structure-from-motion system at work. At top is the input video. At lower left is the video with keypoints (colored circles) added. The keypoints are tracked accurately from frame to frame, and their color indicates their depth, as estimated by a machine learning model. At lower right is the 3-D model of the keypoints (whose rotation, to demonstrate the 3-D structure, is not synchronized with the video).\n\nIn the other paper, “++[Robust cross-modal representation learning with progressive self-distillation](https://www.amazon.science/publications/robust-cross-modal-representation-learning-with-progressive-self-distillation)++,” we expand on the CLIP method of using paired images and texts found online to train a model that produces image and text representations useful for downstream tasks, such as image classification or text-based image retrieval.\n\nWhere CLIP enforces a hard alignment between Web-crawled images and their associated texts, our method is more flexible, allowing for partial correspondences between a given image and texts associated with other images. We also use a self-distillation technique, in which our model progressively creates some of its own training targets, to steadily refine its representations.\n\nIn two different image classification settings, our method outperforms CLIP across the board, by significant margins — 30% to 90% — on some datasets. Our method also consistently outperforms its CLIP counterpart on the tasks of image-based text retrieval and text-based image retrieval.\n\n#### **Structure-from-motion**\n\nStructure-from-motion is the problem of determining the 3-D structure of a scene from parallax — the relative displacement of objects in the scene as the camera moves. There are robust solutions for videos with large camera movements, but they don’t work as well for feature films and TV shows, where the camera movements tend to be more restrained.\n\nThe standard approach to determining structure from motion uses geometric optimization. First, the method estimates the location of a set of 3-D points in the scene, and then, based on that estimation, it re-projects them onto a 2-D image corresponding to each camera location. The optimization procedure minimizes the distance between points in the original 2-D image and the corresponding points of the 2-D projection.\n\nWe improve on this approach by introducing depth estimates performed by off-the-shelf, pretrained models. 
Our approach begins by using a standard method to detect image keypoints — salient points in the image, usually at object corners and other edge intersections — and identify their correspondences across successive frames of video. Then, through bilinear interpolation, we use the depth map obtained from an off-the-shelf depth estimator to determine the ground-truth keypoint depths. We use the depth information not only during optimization but also during the initialization stage of the process, when we produce our initial estimates of 3-D scene structure and relative camera pose.

![Keypoint detection, correspondence, and depth estimation](https://dev-media.amazoncloud.cn/15857c7d71ab49db86dfd5ef8d2ff897_%E4%B8%8B%E8%BD%BD%20%281%29.jpg)

The Prime Video structure-from-motion technique identifies keypoints in input video, finds their correspondences across frames, and then estimates their depth using bilinear interpolation on a dense depth map.

We experimented with several different depth estimation models and found that the results of our approach were essentially the same with all of them. And, in all cases, our approach improved substantially on the state of the art.

#### **Cross-modal representations**

In natural-language processing, the best-performing models in recent years have been built on top of language models that learn generic linguistic representations from huge corpora of unannotated public texts. The language models can then be fine-tuned for specific tasks with minimal additional data.

CLIP (contrastive language-image pretraining) seeks to do something similar for computer vision, learning generic visual representations from images harvested from the Web and their associated texts.

Like many such weakly supervised models, CLIP is trained through [contrastive learning](https://www.amazon.science/tag/contrastive-learning). Intuitively, for each training image, the model is fed two texts: one, the positive training example, is the text associated with the image online; the other text, the negative example, is randomly chosen. CLIP learns a data representation that pulls the image and the positive text together in the representation space and pushes the image and the negative text apart.
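In the standard recipe, the other texts in the same training batch play the role of those randomly chosen negatives. The sketch below shows a minimal CLIP-style contrastive loss over a batch of image and text embeddings; it illustrates the general recipe rather than Prime Video's exact implementation, and the temperature value is an arbitrary assumption.

```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired image/text embeddings.

    image_emb, text_emb : (B, D) tensors; row i of each comes from the same web pair.
    """
    # Cosine similarities between every image and every text in the batch.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature    # (B, B)

    # Hard targets: image i matches text i; every other text is treated as a negative.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)        # image-to-text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)    # text-to-image direction
    return 0.5 * (loss_i2t + loss_t2i)
```

The hard targets on the diagonal are exactly what makes the alignment rigid: every off-diagonal image-text pair is pushed apart, even when that text happens to describe the image well.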
Although CLIP has yielded impressive results on downstream computer vision tasks, its training approach has two drawbacks. First, the web-harvested data is noisy: the text associated with an image may in fact be semantically unrelated to it. Second, a text randomly selected as a negative example may in fact be semantically related to the image. CLIP can thus steer the model toward erroneous associations and away from correct ones.

Our method attempts to address this problem. Rather than learn a hard alignment between image and text, we learn a soft alignment, which gives the resulting model more interpretive flexibility.

For example, in one of our experiments, both the CLIP baseline and our model were trained on datasets that included images of goldfish. When presented with an image of a stained-glass window depicting a goldfish — a type of image not included in the training data — CLIP guessed that it was a guinea pig or maybe a beer glass, while our model guessed that it was a goldfish or possibly a clown fish. That is, our model learned a representation general enough to accommodate the stained-glass artist’s stylized rendering.

![Hard versus soft image-text alignments](https://dev-media.amazoncloud.cn/d6e5b868c6484045a14ae044272baedb_%E4%B8%8B%E8%BD%BD%20%282%29.jpg)

CLIP’s contrastive-learning procedure enforces connections between web-harvested images and their associated texts (green lines, at left) while dissociating them from other images’ texts (red lines). Our approach instead privileges associated texts but also learns softer, probabilistic alignments with other images’ texts (dotted blue lines).

Our model learns its soft alignments through a self-distillation process. First, the model learns an initial data representation through the same contrastive-loss function that CLIP uses.

Over the course of training, however, we use the model itself to make predictions about the training examples and use those predictions as additional training targets. At first, the loss function gives these self-predictions little weight, but it gradually increases the weight as training progresses.

The idea is that, over time, the model learns more reliable correlations between training images and texts. Self-distillation reinforces those correlations, so the model isn’t encouraged to break semantic connections between images and texts that may very well be present in the data. Similarly, over time, the model learns to give less weight to spurious connections between images and the texts initially associated with them.
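One way to picture this progressive self-distillation is as a gradual softening of the targets in the contrastive loss above. The sketch below blends the hard identity targets with the model's own predictions using a ramping weight `alpha`; the blending scheme and the schedule are simplified assumptions for illustration, not the exact procedure in the paper.

```python
import torch
import torch.nn.functional as F

def soft_target_contrastive_loss(image_emb, text_emb, alpha, temperature=0.07):
    """Contrastive loss whose targets are partly produced by the model itself.

    alpha : weight on the model's own (self-distilled) predictions; ramped up
            from near zero early in training toward a larger value later on.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature              # (B, B)

    # Hard targets assume each image matches only its own web text.
    hard_targets = torch.eye(logits.size(0), device=logits.device)

    # Soft targets come from the model's current similarity estimates,
    # treated as a teacher (no gradients flow through them).
    with torch.no_grad():
        soft_i2t = F.softmax(logits, dim=-1)
        soft_t2i = F.softmax(logits.t(), dim=-1)

    # Progressively trust the model's own predictions more as training proceeds.
    targets_i2t = (1.0 - alpha) * hard_targets + alpha * soft_i2t
    targets_t2i = (1.0 - alpha) * hard_targets + alpha * soft_t2i

    # Cross-entropy against the blended targets, in both directions.
    loss_i2t = -(targets_i2t * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    loss_t2i = -(targets_t2i * F.log_softmax(logits.t(), dim=-1)).sum(dim=-1).mean()
    return 0.5 * (loss_i2t + loss_t2i)
```

A training loop would compute `alpha` from the current step, for example ramping it linearly over the course of training, so that early optimization is dominated by the hard web labels while later optimization increasingly relies on the model's own, presumably cleaner, alignments.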
The great virtue of general representation models like ours and CLIP is that they can be applied to a wide variety of computer vision problems. So the accuracy improvements that our approach affords should pay dividends for Prime Video customers in a range of contexts over the next few years.

ABOUT THE AUTHOR

#### **[Raffay Hamid](https://www.amazon.science/author/raffay-hamid)**

Raffay Hamid is a principal scientist with Prime Video.