WACV: Transformers for video and contrastive learning

{"value":"Joe Tighe, senior manager for computer vision at Amazon Web Services, is a coauthor on two papers being presented at this year’s Winter Conference on Applications of Computer Vision (++[WACV](https://www.amazon.science/conferences-and-events/amazon-wacv-2021)++), and as he prepares to attend the conference, he sees two major trends in the field of computer vision.\n\n“One is Transformers and what they can do, and the other is self-supervised or unsupervised learning and how we can apply that,” Tighe says.\n\n![image.png](https://dev-media.amazoncloud.cn/53c5e39a56c246bd870497dd897ed74a_image.png)\n\nJoe Tighe, senior manager for computer vision at Amazon Web Services.\n\nThe Transformer is a neural-network architecture that uses attention mechanisms to improve performance on machine learning tasks. When processing part of a stream of input data, the Transformer attends to data from other parts of the stream, which influences its handling of the data at hand. Transformers have enabled state-of-the-art performance on natural-language-processing tasks because of their ability to model long-range correlations — recognizing, for instance, that the name at the start of a sentence might be the referent of a pronoun at the sentence’s end.\n\nIn visual data, on the other hand, locality tends to matter more: usually, the value of a pixel is more strongly correlated with those of the pixels around it than with pixels that are farther away. Computer vision has traditionally relied on convolutional neural networks (CNNs), which step through an image applying the same set of filters — or kernels — to each patch of an image. That way, the CNN can find the patterns it’s looking for — say, visual characteristics of dog ears — wherever in the image they occur.\n\n“We've been successful in basically achieving the same accuracy as convolutional networks with these Transformers,” Tighe says. “And we maintain that locality constraint by, for instance, feeding in patches of images, because with a patch, you have to be local. Or we start out with a CNN and then feed mid-level features from the CNN into the Transformer, and then you let the Transformer go and relate any patch to any other patch.\n\n“But I don't think what Transformers are going to bring to our field is higher accuracy for just embedding images. What they are incredibly good at — and we’re already seeing strong results — is processing structured data.”\n\n![image.png](https://dev-media.amazoncloud.cn/7bee1a2ee42d4b0ea02b1c2c7abd23fd_image.png)\n\nOne of the WACV papers on which Tighe is a coauthor describes a machine learning model that uses attention mechanisms to determine which frames of a video are most relevant to the task of action recognition. At left are video clips, at right heat maps that indicate where the model is attending. Where action is uniform, so is the model's attention (top). In other cases, the model attends only to the most informative parts of the clip (red boxes, center and bottom). From \"++[NUTA: Non-uniform temporal aggregation for action recognition](https://www.amazon.science/publications/nuta-non-uniform-temporal-aggregation-for-action-recognition)++\".\n\nFor instance, Tighe explains, Transformers can more naturally infer object permanence — determining that a collection of pixels in one frame of video designate the same object as a different collection of pixels in a different frame.\n\nThis is crucial to a number of video applications. 
![Attention heat maps for video clips](https://dev-media.amazoncloud.cn/7bee1a2ee42d4b0ea02b1c2c7abd23fd_image.png)

One of the WACV papers on which Tighe is a coauthor describes a machine learning model that uses attention mechanisms to determine which frames of a video are most relevant to the task of action recognition. At left are video clips; at right are heat maps that indicate where the model is attending. Where action is uniform, so is the model's attention (top). In other cases, the model attends only to the most informative parts of the clip (red boxes, center and bottom). From “[NUTA: Non-uniform temporal aggregation for action recognition](https://www.amazon.science/publications/nuta-non-uniform-temporal-aggregation-for-action-recognition)”.

For instance, Tighe explains, Transformers can more naturally infer object permanence — determining that a collection of pixels in one frame of video designates the same object as a different collection of pixels in a different frame.

This is crucial to a number of video applications. Determining the semantic content of a film or TV show, for instance, requires recognizing the same characters across different shots. Similarly, Amazon Go — the Amazon service that enables checkout-free shopping in physical stores — needs to recognize that the same customer who picked up canned peaches on aisle three also picked up raisin bran on aisle five.

“To understand a movie, we can't just send in frames,” Tighe says. “One of the things my group is doing — as well as a lot of different groups — is using Transformers to take in audio information, take in text, like subtitles, and take in the visual information, the movie content, into one framework. Because what you see is only half of it. What you hear is as, if not more, important for understanding what's going on in these movies. I see Transformers as a powerful tool to finally not have ad hoc ways to combine audio, text, and video together.”
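One generic way to realize that single framework is to project audio, subtitle-text, and frame features to a common token width, tag each token with its modality, and let one Transformer encoder attend across all of them. The sketch below illustrates only that general pattern, not any Amazon system; every dimension and the modality-tagging scheme are assumptions.

```python
# Generic sketch of one-framework multimodal fusion: project audio, subtitle-text,
# and video-frame features to a shared width, add a learned per-modality embedding,
# and let a single Transformer encoder attend across all tokens at once.
# Feature dimensions are hypothetical placeholders.
import torch
import torch.nn as nn


class MultimodalFusion(nn.Module):
    def __init__(self, audio_dim=128, text_dim=300, video_dim=512, dim=256, depth=4, heads=8):
        super().__init__()
        self.proj = nn.ModuleDict({
            "audio": nn.Linear(audio_dim, dim),
            "text": nn.Linear(text_dim, dim),
            "video": nn.Linear(video_dim, dim),
        })
        # One learned vector per modality so attention can tell the token types apart.
        self.modality_embed = nn.ParameterDict({
            name: nn.Parameter(torch.zeros(1, 1, dim)) for name in ("audio", "text", "video")
        })
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, audio, text, video):
        # Each input: (batch, sequence_length_for_that_modality, modality_dim).
        tokens = torch.cat(
            [self.proj[name](x) + self.modality_embed[name]
             for name, x in (("audio", audio), ("text", text), ("video", video))],
            dim=1,
        )
        return self.encoder(tokens)  # (batch, total_tokens, dim): fused representation


fused = MultimodalFusion()(torch.randn(2, 50, 128), torch.randn(2, 20, 300), torch.randn(2, 16, 512))
```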
#### **Contrastive learning**

On the topic of unsupervised and self-supervised learning, Tighe says, the most interesting recent development has been the exploration of [contrastive learning](https://www.amazon.science/tag/contrastive-learning). With contrastive learning, a neural network is fed pairs of inputs, some from the same class and some from different classes, and it learns to produce embeddings — vector representations — that cluster instances of the same class together and separate instances of different classes. The trick is to do this with unlabeled data.

“If you take an image, and then you augment it, you change its color, you take a really aggressive crop, you add a bunch of noise, then you have two examples,” Tighe explains. “You put those both through the network and you say, These two things are the same thing. You can be very aggressive with your augmentations. So when you get, say, a crop of a dog's head and a crop of a dog's tail, you're telling the network these are semantically the same object. And so it needs to learn high-level semantics of dog parts.

“But you also need to push them apart from something else. It’s easy to find examples that are far away already, but that doesn’t help the network learn. What we really need is to find the closest example and push away from that. So I think one of the key innovations here is that you have this large bank of image embeddings that you should push against. The network is going to pick out the really hard examples, the ones that it naturally is embedding very close together. It's going to try and push those apart, and that's how this embedding is learned very well.

“Then at the end, when you're going to test how well it does, you just train a single linear layer with all your labeled data. The idea is, if this works, we should be able to throw the world of images at one of these systems, train the ultimate embedding that can describe the entire world, and then, with our specific task in mind, just with a little bit of data, train that last layer and have very high performance.”
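A common way to formalize what Tighe describes is an InfoNCE-style contrastive loss: two augmentations of the same image should score high against each other and low against a large bank of other images' embeddings, and the closest (hardest) negatives naturally dominate the gradient. The sketch below is a generic version of that loss, not the specific method of any paper discussed here; the temperature and bank size are arbitrary.

```python
# Generic InfoNCE-style contrastive loss: pull two augmented views of the same
# image together and push them away from a large bank of other images' embeddings.
# The softmax makes the closest ("hardest") negatives contribute most to the gradient.
import torch
import torch.nn.functional as F


def contrastive_loss(view1, view2, bank, temperature=0.1):
    """view1, view2: (batch, dim) embeddings of two augmentations of the same images.
    bank: (bank_size, dim) embeddings of other images, acting as negatives."""
    view1 = F.normalize(view1, dim=1)
    view2 = F.normalize(view2, dim=1)
    bank = F.normalize(bank, dim=1)

    pos = (view1 * view2).sum(dim=1, keepdim=True)      # (batch, 1) similarity to the positive
    neg = view1 @ bank.t()                               # (batch, bank_size) similarity to negatives
    logits = torch.cat([pos, neg], dim=1) / temperature  # the positive sits at index 0
    labels = torch.zeros(view1.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)               # favor the positive over every negative


# Toy usage with random "embeddings"; in practice they come from the same encoder
# applied to two aggressive augmentations of each image, plus a queue of past batches.
loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(4096, 128))
```

The evaluation step Tighe mentions amounts to a linear probe: freeze the learned encoder and fit a single linear layer on the labeled data with an ordinary classification loss.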
#### **Action recognition**

In their own papers at WACV, Tighe and his colleagues are exploring both attention mechanisms and self-supervised learning — although not exactly Transformers and contrastive learning.

“One WACV paper is looking at how we [use the Transformer mechanism of self-attention to aggregate temporal information](https://www.amazon.science/publications/nuta-non-uniform-temporal-aggregation-for-action-recognition),” he explains. “It's actually a CNN, but then we use that self-attention mechanism to aggregate information across the whole video. So we get the ability to share information globally inside this network as well.

“The other one is looking at, if you have a dictionary of actions, [how can you predict the different actions that are occurring](https://www.amazon.science/publications/sscap-self-supervised-co-occurrence-action-parsing-for-unsupervised-temporal-action-segmentation) by looking at a bunch of events? One of the datasets we look at is gymnastics. So if we look at the floor plan for a gymnastics event, and you have a number of examples of that, we predict the fine-grain actions like a flip or turnover that happened without supervision of those fine-grain actions.”
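As a rough picture of the first of those two ideas (self-attention aggregating per-frame CNN features across a whole clip), consider the sketch below. It only illustrates temporal aggregation with attention in general, not the NUTA architecture; the backbone, clip length, and feature sizes are assumptions.

```python
# Illustrative temporal aggregation: a CNN embeds each frame, then multi-head
# self-attention lets every frame exchange information with every other frame
# before the clip is pooled into a single video-level representation.
# Not the NUTA model; sizes are hypothetical.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class TemporalAttentionAggregator(nn.Module):
    def __init__(self, dim=512, heads=8, num_classes=400):
        super().__init__()
        backbone = resnet18()
        self.frame_encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, clip):                           # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        frames = clip.flatten(0, 1)                    # (B*T, 3, H, W)
        feats = self.frame_encoder(frames).flatten(1)  # (B*T, dim) per-frame CNN features
        feats = feats.view(b, t, -1)                   # (B, T, dim)
        feats, _ = self.attn(feats, feats, feats)      # every frame attends across the whole clip
        return self.classifier(feats.mean(dim=1))      # average over time, then classify


scores = TemporalAttentionAggregator()(torch.randn(2, 8, 3, 224, 224))  # -> (2, 400)
```

The heat maps in the figure above come from a non-uniform version of this kind of aggregation; the sketch shows only the basic mechanism of attending across the clip.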
As for what the future may hold, “what's really missing from video research is around how you model the temporal dimension,” Tighe says. “And I'm not claiming to know what that means yet. But it's inherently a different signal; it can't just be treated like another space dimension.”

ABOUT THE AUTHOR

#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.