Amazon wins best-paper award for protecting privacy of training data

{"value":"Differential privacy is a popular technique that provides a way to quantify the privacy risk of releasing aggregate statistics based on individual data. In the context of machine learning, differential privacy provides a way to protect privacy by adding noise to the data used to train a machine learning model. But the addition of noise can also reduce model performance.\n\nIn a pair of papers at the annual meeting of the [Florida Artificial Intelligence Research Society (FLAIRS)](https://www.flairs-34.info/), the Privacy Engineering for Alexa team is presenting a new way to calibrate the noise added to the textual data used to train natural-language-processing (NLP) models. The idea is to distinguish cases where a little noise is enough to protect privacy from cases where more noise is necessary. This helps minimize the impact on model accuracy while maintaining privacy guarantees, which aligns with the team’s mission to measurably preserve customer privacy across Alexa.\n\nOne of the papers, “[Density-aware differentially private textual perturbations using truncated Gumbel noise](https://www.amazon.science/publications/density-aware-differentially-private-textual-perturbations-using-truncated-gumbel-noise)”, has won the conference’s best-paper award.\n\n![下载.gif](https://dev-media.amazoncloud.cn/482e0922c12e4c9eac30080bdcb6762d_%E4%B8%8B%E8%BD%BD.gif)\n\nA simplified example of the method proposed in the researchers' award-winning paper. Noise is added to the three nearest neighbors of a source word, A, and to A itself. After noise addition, the word closest to A's original position — B — is chosen as a substitute for A.\nCREDIT: GLYNIS CONDON\n\nDifferential privacy says that, given an aggregate statistic, the probability that the underlying dataset does or does not contain a particular item should be virtually the same. The addition of noise to the data helps enforce that standard, but it can also obscure relationships in the data that the model is trying to learn.\n\nIn NLP applications, a standard way to add noise involves embedding the words of the training texts. An embedding represents words as vectors, such that vectors that are close in the space have related meanings. \n\nAdding noise to an embedding vector produces a new vector, which would correspond to a similar but different word. Ideally, substituting the new words for the old should disguise the original data while preserving the attributes that the NLP model is trying to learn. \n\nHowever, words in an embedding space tend to form clusters, united by semantic similarity, with sparsely populated regions between clusters. Intuitively, within a cluster, much less noise should be required to ensure enough semantic distance to preserve privacy. However, if the noise added to each word is based on the average distance between embeddings — factoring in the sparser regions — it may be more than is necessary for words in dense regions.\n\n![image.png](https://dev-media.amazoncloud.cn/7af21fd57f6f4e418d52cff3f7778ab8_image.png)\n\nA simplified representation of words (red dots) in an embedding space. Adding noise to a source vector (A) produces a new vector, and the nearest (green circle) embedded word (B) is chosen as a substitute. In the graph at left, adding a lot of noise to the source word produces an output word that is far away and hence semantically dissimilar. In the middle graph, however, a lot of noise is needed to produce a semantically different output. 
However, words in an embedding space tend to form clusters, united by semantic similarity, with sparsely populated regions between clusters. Intuitively, within a cluster, much less noise should be required to ensure enough semantic distance to preserve privacy. But if the noise added to each word is based on the average distance between embeddings — factoring in the sparser regions — it may be more than is necessary for words in dense regions.

![image.png](https://dev-media.amazoncloud.cn/7af21fd57f6f4e418d52cff3f7778ab8_image.png)

A simplified representation of words (red dots) in an embedding space. Adding noise to a source vector (A) produces a new vector, and the nearest (green circle) embedded word (B) is chosen as a substitute. In the graph at left, adding a lot of noise to the source word produces an output word that is far away and hence semantically dissimilar. In the middle graph, however, a lot of noise is needed to produce a semantically different output. In the graph at right, the amount of noise is calibrated to the density of the vectors around the source word.

This leads us to pose the following question in our FLAIRS papers: Can we recalibrate the noise added such that it varies for every word depending on the density of the surrounding space, rather than resorting to a single global sensitivity?

#### **Calibration techniques**

We study this question from two different perspectives. In the paper titled “[Research challenges in designing differentially private text generation mechanisms](https://www.amazon.science/publications/research-challenges-in-designing-differentially-private-text-generation-mechanisms)”, my Alexa colleagues Oluwaseyi Feyisetan, Zekun Xu, Nathanael Teissier, and I discuss general techniques to enhance the privacy of text mechanisms by exploiting features such as local density in the embedding space.

For example, one technique deduces a probability distribution (a prior) that assigns high probability to dense areas of the embedding and low probability to sparse areas. This prior can be produced using kernel density estimation, which is a popular technique for estimating distributions from limited data samples.

However, these distributions are often highly nonlinear, which makes them difficult to sample from. In this case, we can either opt for an approximation to the distribution or adopt indirect sampling strategies such as the Metropolis–Hastings algorithm, which is based on well-known Markov chain Monte Carlo (MCMC) techniques.
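As a rough illustration of this idea, the sketch below fits a kernel density estimate over the embeddings and draws a sample from it with a Metropolis–Hastings random walk. It is only a schematic under assumed inputs (an `embeddings` array of shape `(n, d)`), not the procedure from the paper; the step size and chain length would need tuning in practice.

```python
import numpy as np
from scipy.stats import gaussian_kde

def sample_from_density_prior(embeddings, n_steps=1000, step_size=0.1, rng=None):
    """Sample a vector from a KDE prior over the embedding space via
    Metropolis-Hastings: the chain favors dense (cluster) regions and
    rarely visits the sparse regions between clusters."""
    rng = rng or np.random.default_rng()
    kde = gaussian_kde(embeddings.T)                      # scipy expects (d, n) data
    x = embeddings[rng.integers(len(embeddings))].copy()  # start at a real word
    density_x = kde(x)[0]
    for _ in range(n_steps):
        proposal = x + step_size * rng.normal(size=x.shape)  # symmetric proposal
        density_prop = kde(proposal)[0]
        # Accept with probability min(1, p(proposal) / p(current)).
        if rng.uniform() < density_prop / max(density_x, 1e-300):
            x, density_x = proposal, density_prop
    return x
```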
Another technique we discuss is to impose a limit on how far away a noisy embedding may be from its source. We explore two ways to do this: distance-based truncation and k-nearest-neighbor-based truncation.

Distance-based truncation simply caps the distance between the noisy embedding and its source, according to some measure of distance in the space. This prevents the addition of a large amount of noise, which is useful in the dense regions of the embedding. But in the sparse regions, this can effectively mean zero perturbation, since there may not be another word within the distance limit.

To avoid this drawback, we consider the alternate approach of k-nearest-neighbor-based truncation. In this approach, the k words closest to the source delineate the acceptable search area. We then execute a selection procedure to choose the new word from these k candidates (plus the source word itself). This is the approach we adopt in our second paper.
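The two truncation strategies amount to different rules for forming the candidate set, as in the sketch below. The helper names and `embeddings` indexing are illustrative assumptions; the selection procedure applied to the candidates is the subject of the second paper.

```python
import numpy as np

def distance_truncated_candidates(source_vec, embeddings, radius):
    """Distance-based truncation: only words within `radius` of the source
    qualify. In sparse regions this may leave just the source word itself,
    i.e., effectively zero perturbation."""
    dists = np.linalg.norm(embeddings - source_vec, axis=1)
    return np.flatnonzero(dists <= radius)

def knn_truncated_candidates(source_vec, embeddings, k):
    """k-nearest-neighbor truncation: the k closest words plus the source
    itself delineate the search area, however sparse the neighborhood is."""
    dists = np.linalg.norm(embeddings - source_vec, axis=1)
    return np.argsort(dists)[: k + 1]   # index 0 is the source (distance 0)
```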
![image.png](https://dev-media.amazoncloud.cn/592db102e7264c788681710f834dcf55_image.png)

A schematic of distance-based (left and middle graphs) and nearest-neighbor-based (right graph) truncation techniques. In the first graph, the blue circle represents a limit on the distance from the source word, A. Randomly adding noise produces a vector within this limit, and the output word B is selected. In the middle graph, a large amount of noise has been randomly added, but it’s truncated at the boundary of the blue circle. The right graph shows k-nearest-neighbor truncation, where a random number of neighbors (in this case, three) are selected around the source word, A. Noise is added to each of these neighbors independently, and the nearest word after noise addition — B — is chosen (see animation, above).

In “Density-aware differentially private textual perturbations using truncated Gumbel noise”, Nan Xu, a summer intern with our group in 2020 and currently a PhD student in computer science at the University of Southern California, joins us to discuss a particular algorithm in detail.

This algorithm calibrates noise by selecting a few neighbors of the source word and perturbing the distance to these neighbors using samples from the Gumbel distribution (the rightmost graph, above). We chose the Gumbel distribution because it is more computationally efficient than existing mechanisms for differentially private selection (e.g., the exponential mechanism). The number of neighbors is chosen randomly using Poisson samples.

Together, these two techniques, when calibrated appropriately, provide the required amount of differential privacy while enhancing utility. We call the resulting algorithm the truncated Gumbel mechanism, and it better preserves semantic meaning than multivariate Laplace mechanisms, a widely used method for adding noise to textual data. (The left and middle graphs of the top figure above depict the use of Laplace mechanisms.)
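Putting the pieces together, here is a minimal sketch of how such a mechanism could look, with a Poisson-sampled neighborhood size and Gumbel-perturbed distance scores. It is a simplification under assumed inputs and scaling, not the exact algorithm from the paper, which truncates the Gumbel samples and calibrates their scale to obtain the formal privacy guarantee.

```python
import numpy as np

def gumbel_perturb_word(word, vocab, embeddings, epsilon, mean_k=5, rng=None):
    """Density-aware word perturbation, sketched:
    1. Draw the neighborhood size k from a Poisson distribution.
    2. Score the source word and its k nearest neighbors by negative
       distance to the source, scaled by the privacy parameter epsilon.
    3. Add independent Gumbel noise to each score and return the argmax,
       i.e., the word nearest the source after noise addition.
    """
    rng = rng or np.random.default_rng()
    v = embeddings[vocab.index(word)]
    k = max(1, rng.poisson(mean_k))                  # random number of neighbors
    dists = np.linalg.norm(embeddings - v, axis=1)
    candidates = np.argsort(dists)[: k + 1]          # source word + k neighbors
    scores = -epsilon * dists[candidates]            # nearer words score higher
    # Gumbel-max trick: taking the argmax of Gumbel-noised scores performs a
    # softmax-weighted (exponential-mechanism-style) selection without
    # explicitly computing selection probabilities.
    noisy = scores + rng.gumbel(size=scores.shape)
    return vocab[int(candidates[np.argmax(noisy)])]
```

In dense regions the k nearest neighbors are all close by, so the selection stays semantically local; in sparse regions the same rule automatically reaches farther, which is the density awareness the mechanism is after.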
In tests, we found that this new algorithm provided improvements in accuracy of up to 9.9% for text classification tasks on two different datasets. Our paper also includes a formal proof of the privacy guarantees offered by this mechanism and analyzes relevant privacy statistics.

Our ongoing research efforts continue to improve upon the techniques described above and enable Alexa to continue introducing new features and inventions that make customers’ lives easier while keeping their data private.

ABOUT THE AUTHOR

#### **Abhinav Aggarwal**

Abhinav Aggarwal is an applied scientist in the Alexa organization.