Teaching neural networks to compress images

{"value":"Virtually all the images flying over the Internet are compressed to save bandwidth, and usually, the codecs — short for coder-decoder — that do the compression, such as JPG, are hand crafted.\n\nIn theory, machine-learning-based codecs could provide better compression and higher image quality than hand-crafted codecs. But machine learning models are trained to minimize some loss metric, and existing loss metrics, such as PSNR and MS-SSIM, do not align well with human perception of similarity. \n\nIn January, at the IEEE Winter Conference on Applications of Computer Vision (++[WACV](https://www.amazon.science/conferences-and-events/amazon-wacv-2021)++), we ++[presented](https://www.amazon.science/publications/saliency-driven-perceptual-image-compression)++ a perceptual loss function for learned image compression that addresses this issue. \n\n![image.png](https://dev-media.amazoncloud.cn/30c4f0ebeb7d4b3db00532cd01094021_image.png)\n\nA comparison of the reconstructed images yielded by seven different compression schemes, both learned and hand crafted, at the same bit rate. Ours provides more faithful reconstruction of image details than the others and compares more favorably with the original (uncompressed) image.\n\nWe also describe how to incorporate saliency into a learned codec. Current image codecs, whether classical or learned, tend to compress all regions of an image equally. But most images have salient regions — say, faces and texts — where faithful reconstruction matters more than in other regions — say, sky and background. \n\nCompression codecs that assign more bits to salient regions than to low-importance regions tend to yield images that human viewers find more satisfying. Our model automatically learns from training data how to trade off the assignment of bits to salient and non-salient regions of an image.\n\n<video src=\"https://dev-media.amazoncloud.cn/e5d76e80af6c4f318b51c29a9a958678_Saliency%20Driven%20Perceptual%20Image%20Compression%2C%20WACV%202021..mp4\" class=\"manvaVedio\" controls=\"controls\" style=\"width:160px;height:160px\"></video>\n\n**Video of the researchers' conference presentation**\n\nIn our paper, we also report the results of two evaluation studies. One is a human-perception study in which subjects were asked to compare decompressed images from our codec to those of other codecs. The other study used compressed images in downstream tasks such as object detection and image segmentation.\n\nIn the first study, our method was the clear winner at bit rates below one bit per image pixel. In the second study, our method was the top performer across the board.\n\n\n#### **Model-derived losses**\n\n\nSeveral studies have shown that the loss functions used to train neural networks as compression codecs are inconsistent with human judgments of quality. For instance, of the four post-compression reconstructions in the image below, humans consistently pick the second from the right as the most faithful, even though it ranks only third according to the MS-SSIM loss metric.\n\n![image.png](https://dev-media.amazoncloud.cn/cd981eef15a64883a79f9e6f181f0c4e_image.png)\n\nA source image and four post-compression reconstructions of it, ranked, from left to right, in descending order by MS-SSIM values. 
First, we created a compression training set using the two-alternative forced-choice ([2AFC](https://en.wikipedia.org/wiki/Two-alternative_forced_choice)) methodology. Annotators are presented with two versions of the same image, reconstructed by different compression methods (both classical and learned codecs), with the original image between them. They are asked to pick the image that is closer to the original. On average, the annotators spent 56 seconds on each sample.

We split this data into training and test sets and trained a network to predict which of each pair of reconstructed images human annotators preferred. Then we extracted the encoder that produces the vector representation of the input images and used it as the basis for a system that computes a similarity score (above).

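A minimal sketch of how such a preference predictor could be trained on the 2AFC judgments, assuming the `DeepPerceptualLoss` module above as the distance function and a dataset yielding (original, reconstruction A, reconstruction B, human choice) tuples; the paper's exact architecture and objective may differ.

```python
import torch

def twoafc_loss(distance, x, y_a, y_b, choice):
    """x: originals; y_a, y_b: two reconstructions; choice: LongTensor of shape (N,),
    0 if annotators preferred y_a and 1 if they preferred y_b."""
    d_a = distance(x, y_a)  # per-image perceptual distance to reconstruction A
    d_b = distance(x, y_b)  # per-image perceptual distance to reconstruction B
    # The preferred reconstruction should sit at the smaller distance, so the
    # negated distances act as logits over the two alternatives.
    logits = torch.stack([-d_a, -d_b], dim=1)
    return torch.nn.functional.cross_entropy(logits, choice)
```

Minimizing this loss over the training split pushes the encoder's feature distances to agree with the annotators' choices.
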
In the table below, we can see that, compared with other metrics, our approach (LPIPS-Comp VGG PSNR) provides the closest approximation (81.9) to human judgment (82.06). (The human-judgment score is less than 100 because human annotators sometimes disagree about the relative quality of images.) Also note that MS-SSIM and PSNR are the lowest-scoring metrics.

![image.png](https://dev-media.amazoncloud.cn/ac638a70666e4de0bfbe9f2b50b59dda_image.png)

Our similarity measure approximates human judgment much better than its predecessors, with MS-SSIM and PSNR earning the lowest scores.

#### **The compression model**

Armed with a good perceptual-loss metric, we can train our neural codec. So that it can learn to exploit saliency judgments, our codec includes an off-the-shelf saliency model, trained on a 10,000-image data set in which salient regions have been annotated. The codec learns on its own, from the training data, how to employ the outputs of the saliency model.

![image.png](https://dev-media.amazoncloud.cn/24446bdc3a98438aadbe18f9fd0fdeb6_image.png)

The architecture of our neural compression codec. The shorter of the two modules labeled bit string is the compressed version of the input. During training, the input is both compressed and decompressed, so that we can evaluate the network according to the similarity between the original and reconstructed images, as measured by our new loss metric.

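As a rough sketch of what one such training step could look like, the snippet below combines an estimated bit rate with a saliency-weighted distortion term built on the deep perceptual loss. The codec and saliency-model interfaces (`encode`, `decode`, `estimate_bits`, `saliency_model`) and the weighting scheme are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def training_step(codec, saliency_model, perceptual_loss, x, lam=0.01):
    """One rate-distortion training step for the codec on a batch of images x."""
    latents = codec.encode(x)            # quantized latent representation (assumed API)
    x_hat = codec.decode(latents)        # reconstruction (assumed API)
    rate = codec.estimate_bits(latents)  # differentiable bit-rate estimate (assumed API)

    with torch.no_grad():
        sal = saliency_model(x)          # per-pixel saliency map in [0, 1]

    # Emphasize fidelity in salient regions: a saliency-weighted pixel term
    # plus the global deep perceptual loss.
    weighted_mse = (sal * (x - x_hat).pow(2)).mean()
    distortion = perceptual_loss(x, x_hat).mean() + weighted_mse

    loss = rate + lam * distortion       # an optimizer step would follow
    loss.backward()
    return loss.item()
```

Because the rate term competes with the saliency-weighted distortion term, the codec is pushed to spend its bit budget where the saliency model says fidelity matters most.
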
In our paper, we report an extensive human-evaluation study that compared our approach to five other compression approaches at four different bits-per-pixel values (0.23, 0.37, 0.67, 1.0). Subjects judged reconstructed images from our model as closest to the original at the three lowest bit rates. At a bit rate of 1.0 bits per pixel, the BPG method is the top performer.

We did another experiment in which we compressed images from the benchmark [COCO](https://cocodataset.org/#home) dataset using traditional and learned image compression approaches. We then used these compressed images for other tasks, such as instance segmentation (finding the boundaries of objects) and object recognition. The reconstructed images from our approach delivered superior performance across the board, since our approach better preserves salient aspects of an image.

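As a sketch of this downstream evaluation, the snippet below round-trips an image through a codec and runs an off-the-shelf instance-segmentation model on the reconstruction. JPEG (via Pillow) stands in for one of the codecs under comparison, and the helper name `detect_on_compressed` is hypothetical; in the full experiment, the predictions would be scored against COCO ground truth for each codec.

```python
import io
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

def detect_on_compressed(image_path, quality=25):
    """Round-trip an image through JPEG at the given quality, then run detection."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    original = Image.open(image_path).convert("RGB")

    # Compress and decompress the image with the codec under test.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    reconstructed = Image.open(io.BytesIO(buf.getvalue())).convert("RGB")

    with torch.no_grad():
        preds = model([to_tensor(reconstructed)])[0]
    # preds holds boxes, labels, scores, and masks; for the full experiment these
    # would be scored against COCO annotations (e.g., with pycocotools).
    return preds
```
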
A compression algorithm that preserves important aspects of an image at various compression rates benefits Amazon customers in several ways, such as reducing the cost of cloud storage and speeding the download of images stored with Amazon Photos. Delivering those types of concrete results to our customers was the motivation for this work.

ABOUT THE AUTHOR

#### **[Srikar Appalaraju](https://www.amazon.science/author/srikar-appalaraju)**

Srikar Appalaraju is a senior applied scientist in the Amazon Web Services Computer Vision group.