Automatically identifying scene boundaries in movies and TV shows

Scene boundary detection is the problem of localizing where scenes in a video begin and end. It’s an important step towards semantic understanding of video, with applications in scene classification, video retrieval and search, and video summarization, among other things.

In a [paper](https://www.amazon.science/publications/shot-contrastive-self-supervised-learning-for-scene-boundary-detection) we presented at this year’s Conference on Computer Vision and Pattern Recognition ([CVPR](https://www.amazon.science/conferences-and-events/cvpr-2021)), we described ShotCoL, a new self-supervised algorithm for scene boundary detection.

In terms of average precision, ShotCoL improves upon the previous state of the art in scene boundary detection on the MovieNet dataset by 13% while remaining data efficient and lightweight. It is 90% smaller and 84% faster than previous models and requires 75% less labeled data to match the previous state-of-the-art performance.

These improvements come largely from self-supervised learning, a learning method that can make use of large amounts of unlabeled data. In particular, we used contrastive learning, in which a model learns to distinguish similar and dissimilar examples via a pretext (or surrogate) task, which is related to the task at hand but not identical to it.

![image.png](https://dev-media.amazoncloud.cn/fd5f8b40045242fdbc443c3525a72230_image.png)

An overview of the ShotCoL method. The model learns to pull together a given shot — the query shot — and the most similar neighboring shot (the key shot), and to push apart the query shot and randomly selected shots. Any similarity metric will work, but we used cosine similarity. Our method is agnostic to the choice of model and can work with different modalities, such as visual and audio.
CREDIT: GUPTA MEDIA; STILLS FROM THE MARVELOUS MRS. MAISEL.

A scene in a film or TV show is a series of shots depicting a semantically cohesive part of a story, while a shot is a series of frames captured by the same camera over an uninterrupted period of time. Accurate scene boundary detection is challenging for existing models, while accurate shot detection is not. Thus, in our work, we formulate scene boundary detection as a binary classification task: every shot boundary either is also a scene boundary or it isn’t.

In the past, researchers using contrastive learning for image classification have generated examples of similar images through image augmentation: the query image might be flipped, for instance, and its color scheme altered to produce a positive key. ShotCoL, by contrast, leverages temporal relationships as well as visual similarity when finding positive examples.

In particular, for a given query shot, our pretext task defines the corresponding positive key as the most similar shot — measured by cosine similarity in the feature space — within a local neighborhood of shots. Over the course of training, our model learns an embedding that tends to group query shots and their corresponding positive key shots together, while separating dissimilar shots.

In our experiments, using nearest neighbors as positive keys proved more successful than other ways of choosing key shots, such as selecting the shot that immediately precedes or follows the query. Successive shots in a single scene can vary so greatly that, with adjacent shots as keys, the model can’t learn relationships that generalize well to new inputs.

![下载.gif](https://dev-media.amazoncloud.cn/54b11c8c798e42999eafea6986dbc210_%E4%B8%8B%E8%BD%BD.gif)

Past approaches to contrastive learning used different ways of choosing key shots, such as selecting the shot that immediately precedes the query shot (bottom). ShotCoL instead selects the most similar shot within a local neighborhood of shots (top).
CREDIT: ANIMATION BY GUPTA MEDIA; STILLS FROM THE MARVELOUS MRS. MAISEL.

The goal of self-supervised learning is to learn an embedding that can be useful for downstream tasks — in our case, scene boundary detection. Once we’ve used self-supervised learning to train the embedding network, we freeze its weights and use the encoder to produce embeddings for a sequence of shots. In a supervised fashion, the embeddings are then used to train a classification model that outputs a binary decision about whether the middle shot in a sequence of shots is the end of a scene.

As mentioned above, ShotCoL improves the state-of-the-art average precision on the MovieNet dataset by more than 13% while remaining lightweight and data efficient. For more results, please see our full paper.

We believe that the insights provided by our work will lead to further advances in long-form-video-representation learning and benefit other tasks that require higher-level understanding of content, such as action localization, movie question answering, and search and retrieval.
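To make the pretext task concrete, the nearest-neighbor key selection described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the feature vectors, the neighborhood size, and the function names are assumptions for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_positive_key(shot_features, query_idx, neighborhood=8):
    """Return the index of the shot most similar to the query shot,
    searched within a local neighborhood and excluding the query itself."""
    n = len(shot_features)
    lo = max(0, query_idx - neighborhood)
    hi = min(n, query_idx + neighborhood + 1)
    query = shot_features[query_idx]
    best_idx, best_sim = None, -2.0  # cosine similarity lies in [-1, 1]
    for i in range(lo, hi):
        if i == query_idx:
            continue
        sim = cosine_similarity(query, shot_features[i])
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```

Shots outside the neighborhood of the query can then serve as the randomly selected negatives.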
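The push-pull objective itself is typically an NCE-style contrastive loss: the query-key similarity is scored against the query's similarities to negative shots. A minimal NumPy sketch follows; the temperature value and single-query layout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def info_nce_loss(query, key, negatives, temperature=0.07):
    """NCE-style contrastive loss for one query: softmax cross-entropy
    over similarities, with the positive key in position 0."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(query, key)] + [cos(query, n) for n in negatives])
    logits /= temperature
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

The loss is small when the query is closer to its key than to the negatives, which is exactly the grouping behavior the learned embedding should exhibit.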
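For the downstream step, one simple way to feed a sequence of frozen shot embeddings to the boundary classifier is to concatenate a window of embeddings centered on the shot in question; the binary classifier (e.g., a small MLP) then consumes that vector. The window size and concatenation layout here are assumptions for illustration.

```python
import numpy as np

def boundary_features(shot_embeddings, center, window=2):
    """Concatenate frozen shot embeddings in a window around `center`
    into one feature vector for a binary boundary classifier."""
    feats = [shot_embeddings[i] for i in range(center - window, center + window + 1)]
    return np.concatenate(feats)
```

A classifier trained on such vectors, with labels indicating whether the center shot ends a scene, realizes the binary formulation described above.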
Moving forward, we will continue to strive to create the best viewing experience possible for Prime Video customers and push the envelope of research in multimodal video understanding.

ABOUT THE AUTHORS

#### **[Shixing Chen](https://www.amazon.science/author/shixing-chen)**

Shixing Chen is an applied scientist with Prime Video.

#### **[Xiaohan Nie](https://www.amazon.science/author/xiaohan-nie)**

Xiaohan Nie is an applied scientist with Prime Video.

#### **[David Fan](https://www.amazon.science/author/david-fan)**

David Fan is an applied scientist with Prime Video.

#### **[Dongqing Zhang](https://www.amazon.science/author/dongqing-zhang)**

Dongqing Zhang is a senior applied scientist with Amazon Web Services.

#### **[Vimal Bhat](https://www.amazon.science/author/vimal-bhat)**

Vimal Bhat is a senior manager of software development with Prime Video.

#### **[Raffay Hamid](https://www.amazon.science/author/raffay-hamid)**

Raffay Hamid is a principal scientist with Prime Video.