A little public data makes privacy-preserving AI models more accurate

{"value":"Many useful computer vision models are trained on large corpora of public data, such as ImageNet. But some applications — models that analyze medical images for indications of disease, for instance — need to be trained on data whose owners might like to keep it private. In such cases, we want to be sure that no one can infer anything about specific training examples from the output of the trained model.\n\n++[Differential privacy](https://www.amazon.science/tag/differential-privacy)++ offers a way to quantify both the amount of private information that a machine learning model might leak and the effectiveness of countermeasures. The standard way to prevent data leakage is to add noise during the model training process. This can obscure the inferential pathway leading from model output to specific training examples, but it also tends to compromise model accuracy.\n\n++[Natural-language-processing](https://www.amazon.science/tag/nlp)++ researchers have had success training models on a mixture of private and public training data, enforcing differential-privacy (DP) guarantees on the private data while compromising model accuracy very little. But attempts to generalize these methods to computer vision have fared badly. In fact, they fare so badly that training a model on public data and then doing zero-shot learning on the private-data task tends to work better than training mixed-data models.\n\n![下载.jpg](https://dev-media.amazoncloud.cn/041f179478e94c03b2254ecef922715c_%E4%B8%8B%E8%BD%BD.jpg)\n\nA differential-privacy guarantee means that it is statistically impossible to tell whether a given sample was or was not part of the dataset used to train a machine learning model.\n\nIn a ++[paper](https://www.amazon.science/publications/mixed-differential-privacy-in-computer-vision)++ we presented at this year’s Conference on Computer Vision and Pattern Recognition (++[CVPR](https://www.amazon.science/conferences-and-events/cvpr-2022)++), we address this problem, with an algorithm called AdaMix. We consider the case in which we have at least a little public data whose label set is the same as — or at least close to — that of the private data. In the medical-imaging example, we might have a small public dataset of images labeled to show evidence of the disease of interest, or something similar.\n\nAdaMix works in two phases. It first trains on the public data to identify the “ball park” of the desired model weights. Then it trains jointly on the public and private data to refine the solution, while being incentivized to stay near the ball park. The public data also helps to make various adaptive decisions in every training iteration so that we can meet our DP criteria with minimal overall perturbation of the model.\n\nAdaMix models outperform zero-shot models on private-data tasks, and relative to conventional mixed-data models, they reduce the error increase by 60% to 70%. That’s still a significant increase, but it’s mild enough that, in cases in which privacy protection is paramount, the resulting models may still be useful — which conventional mixed-data models often aren’t.\n\nIn addition, we obtained strong theoretical guarantees on the performance of AdaMix. Notably, we show that even a tiny public dataset will bring about substantial improvement in accuracy, with provable guarantees. 
In a [paper](https://www.amazon.science/publications/mixed-differential-privacy-in-computer-vision) we presented at this year’s Conference on Computer Vision and Pattern Recognition ([CVPR](https://www.amazon.science/conferences-and-events/cvpr-2022)), we address this problem with an algorithm called AdaMix. We consider the case in which we have at least a little public data whose label set is the same as — or at least close to — that of the private data. In the medical-imaging example, we might have a small public dataset of images labeled to show evidence of the disease of interest, or something similar.

AdaMix works in two phases. It first trains on the public data to identify the “ballpark” of the desired model weights. Then it trains jointly on the public and private data to refine the solution, while being incentivized to stay near that ballpark. The public data also helps to make various adaptive decisions in every training iteration, so that we can meet our DP criteria with minimal overall perturbation of the model.
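As a rough illustration of this kind of two-phase, mixed-data procedure (not the exact AdaMix algorithm: the loss, the adaptive choices, and the privacy accounting are all simplified here, and the function names are hypothetical), here is a DP-SGD-style sketch for a linear model:

```python
import numpy as np

def per_example_grad(w, x, y):
    """Gradient of the squared loss 0.5 * (w·x - y)^2 for one example."""
    return (w @ x - y) * x

def clip(g, max_norm):
    """Rescale a gradient so that its L2 norm is at most max_norm."""
    return g * min(1.0, max_norm / (np.linalg.norm(g) + 1e-12))

def two_phase_mixed_dp_training(public, private, lr=0.1, sigma=1.0,
                                lam=0.01, steps=200, seed=0):
    """Sketch of two-phase public/private training (simplified, not AdaMix itself).

    Phase 1: ordinary training on the public data to find a 'ballpark' solution.
    Phase 2: joint training in which only the private gradients are clipped and
    noised, with an L2 pull back toward the public solution.
    """
    rng = np.random.default_rng(seed)
    dim = public[0][0].shape[0]
    w = np.zeros(dim)

    # Phase 1: the public data carries no privacy constraint, so no noise is added.
    for _ in range(steps):
        for x, y in public:
            w -= lr * per_example_grad(w, x, y)
    w_public = w.copy()

    # Phase 2: noisy updates on the private data, guided by the public data.
    for _ in range(steps):
        # Adaptive decision informed by the public data: pick the clipping
        # threshold from the typical public gradient norm at the current weights.
        C = np.median([np.linalg.norm(per_example_grad(w, x, y)) for x, y in public])

        g_priv = sum(clip(per_example_grad(w, x, y), C) for x, y in private) / len(private)
        noise = rng.normal(0.0, sigma * C / len(private), size=dim)  # Gaussian mechanism
        g_pub = sum(per_example_grad(w, x, y) for x, y in public) / len(public)

        # Combine: noised private gradient + clean public gradient + pull toward
        # the public 'ballpark' solution.
        w -= lr * (g_priv + noise + g_pub + lam * (w - w_public))
    return w
```

In this sketch only the private gradients are clipped and noised, the noise scale is tied to the clipping threshold, and an L2 penalty keeps the weights near the publicly pretrained solution; a real implementation would also need a privacy accountant to track the cumulative (ε, δ) cost across iterations.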
AdaMix models outperform zero-shot models on private-data tasks, and relative to conventional mixed-data models, they reduce the error increase by 60% to 70%. That’s still a significant increase, but it’s mild enough that, in cases in which privacy protection is paramount, the resulting models may still be useful — which conventional mixed-data models often aren’t.

We also obtained strong theoretical guarantees on the performance of AdaMix: notably, we prove that even a tiny public dataset brings about a substantial improvement in accuracy. This is in addition to the formal differential-privacy guarantee that the algorithm enjoys.

#### **Information transfer and memorization**

Computer vision models learn to identify image features relevant to particular tasks. A cat recognizer, for instance, might learn to identify image features that denote pointy ears when viewed from various perspectives. Since most of the images in the training data feature cats with pointy ears, the recognizer will probably model pointy ears in a very general way, which is not traceable to any particular training example.

If, however, the training data contains only a few images of Scottish Fold cats, with their distinctive floppy ears, the model might learn features particular to just those images, a process we call memorization. And memorization does open the possibility that a canny adversary could identify individual images used in the training data.

Information theory provides a way to quantify the amount of information that the model-training process transfers from any given training example to the model parameters, and the obvious way to prevent memorization would be to cap that information transfer.

But as one of us (Alessandro) explained in an essay for Amazon Science, “[The importance of forgetting in artificial and animal intelligence](https://www.amazon.science/blog/the-importance-of-forgetting-in-artificial-and-animal-intelligence)”, during training, neural networks begin by memorizing a good deal of information about individual training examples before, over time, forgetting most of the memorized details. That is, they develop abstract models by gradually subtracting extraneous details from more particularized models. (This finding was unsurprising to biologists, as the development of the animal brain involves a constant shedding of useless information and a consolidation of useful information.)

DP provably prevents unintended memorization of individual training examples. But it does this by imposing a universal cap on the information transfer between training examples and model parameters, which can inhibit the learning process. The characteristics of specific training examples are often needed to map out the space of possibilities that the learning algorithm should explore as examples accumulate.

This is the insight that our CVPR paper exploits. Essentially, we allow the model to memorize features of the small public dataset, mapping out the space of exploration. Then, once the model has been pretrained on the public data, we cap the information transfer between the private data and the model parameters.

We tailor that cap, however, to the current values of the model parameters, and, more particularly, we update the cap after every iteration of the training procedure. This ensures that, for each sample in the private dataset, we don’t add more noise than is necessary to protect privacy.
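One standard way to see why a tighter, adaptively chosen cap translates into less noise is the calibration rule of the Gaussian mechanism (a general DP fact, quoted here for intuition rather than taken from the paper): if clipping bounds the L2 sensitivity of an update by C, then adding Gaussian noise with standard deviation

$$
\sigma \;\ge\; \frac{C \,\sqrt{2 \ln(1.25/\delta)}}{\varepsilon}
$$

suffices for an (ε, δ)-DP release of that update (for ε < 1). The noise added at each iteration thus scales linearly with the cap C, so keeping C as small as the current parameters allow directly reduces the perturbation.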
The improvement that our approach affords on test data suggests that it could enable more practical computer vision models that also meet privacy guarantees. But more important, we hope that the theoretical insight it incorporates — that DP schemes for computer vision have to be mindful of the importance of forgetting — will lead to still more effective methods of privacy protection.

**Acknowledgments**: Aaron Roth, Michael Kearns, Stefano Soatto

ABOUT THE AUTHORS

#### **[Alessandro Achille](https://www.amazon.science/author/alessandro-achille)**

Alessandro Achille is an applied scientist with Amazon Web Services.

#### **[Yu-Xiang Wang](https://www.amazon.science/author/yu-xiang-wang)**

Yu-Xiang Wang is an assistant professor of computer science at the University of California, Santa Barbara, and a visiting academic at Amazon.