Privacy challenges in extreme gradient boosting

{"value":"(Editor’s note: This is the fourth in a series of articles Amazon Science is publishing related to the science behind products and services from companies in which the Amazon Alexa Fund has invested. The Alexa Fund [completed a strategic investment in Inpher, Inc](https://inpher.io/amazon-alexa-fund-investment/)., earlier this year; the New York and Swiss-based company develops privacy-preserving machine learning and analytics solutions that help organizations unlock the value of sensitive, siloed data to enable secure collaboration across organizations. This article is co-authored by Dimitar Jetchev, the cofounder and chief technology officer of Inpher, and Joan Feigenbaum, an Amazon Scholar and the Grace Murray Hopper professor of computer science at Yale University.)\n\nMachine learning (ML) is increasingly important in a wide range of applications, including market forecasting, service personalization, voice and facial recognition, autonomous driving, health diagnostics, education, and security analytics. Because ML touches so many aspects of our lives, it’s of vital concern that ML systems protect the privacy of the data used to train them, the confidential queries submitted to them, and the confidential predictions they return.\n\nPrivacy protection — and the protection of organizations’ intellectual property — motivates the study of privacy-preserving machine learning (PPML). In essence, the goal of PPML is to perform machine learning in a manner that does not reveal any unnecessary information about training-data sets, queries, and predictions.\n\n![image.png](https://dev-media.amazoncloud.cn/26a94163afcf4f46aa52e027e8038d28_image.png)\n\nDimitar Jetchev (left), the cofounder and chief technology officer of Inpher, and Joan Feigenbaum, the Grace Murray Hopper professor of computer science at Yale University, and an Amazon Scholar, describe the use of privacy-preserving machine learning to address privacy challenges in XGBoost training and prediction.\nCREDIT: GLYNIS CONDON\n\nPrivacy protection — and the protection of organizations’ intellectual property — motivates the study of privacy-preserving machine learning (PPML). In essence, the goal of PPML is to perform machine learning in a manner that does not reveal any unnecessary information about training-data sets, queries, and predictions.\n\nSuppose, for example, that schools supplied encrypted student records to educational researchers who used them to train ML models. Suppose further that students, parents, teachers, and other researchers could feed encrypted queries to the models and receive encrypted predictions in return. By taking advantage of PPML techniques in this manner, all of the participants could mine the knowledge contained in educational-record databases without compromising the privacy of the data subjects or the data users.\n\nPPML is a very active area, with an [eponymous annual workshop](https://ppml-workshop.github.io/) and many strong papers in general-ML and security venues. Techniques have been developed for privacy-preserving training and prediction on a wide range of ML model types, e.g., neural nets, decision trees, and logistic-regression formulae.\n\nIn the sections below, we describe PPML methods for training and prediction in extreme gradient boosting.\n\n#### **Training**\n\nGradient boosting is an ML method for regression and classification problems that yields a set of prediction trees, typically classification and regression trees (CARTs), which together constitute a model. 
[Extreme gradient boosting (XGBoost)](https://github.com/dmlc/xgboost) is an optimized, distributed gradient-boosting framework that is efficient, portable, and flexible. In this section, we consider confidentiality of training data in the creation of XGBoost models for disease prediction — specifically, for prediction of multiple sclerosis (MS).

Early diagnosis and treatment of MS are crucial to preventing degenerative progression of the disease and patient disabilities. [A recent paper](https://www.msard-journal.com/article/S2211-0348(20)30706-9/abstract) proposes an early-diagnosis method that applies XGBoost to electronic health records and uses three types of features: diagnostic, epidemiologic, and laboratory.

#### **How cryptographic computing can accelerate the adoption of cloud computing**

In a [previous Amazon Science article](https://www.amazon.science/academic-engagements/cryptographic-computing-can-accelerate-the-adoption-of-cloud-computing), Joan Feigenbaum reviewed secure multiparty computation and privacy-preserving machine learning – two cryptographic techniques employed to address cloud-computing privacy concerns and accelerate enterprise cloud adoption.

The presence of another neurological disease (e.g., acute disseminated encephalomyelitis, or ADEM) is an example of a diagnostic feature. Epidemiologic features include age, gender, and total number of visits to a hospital. Two further features, discovered by lab tests and referred to as laboratory features, are used in the model: hyperlipidemia (abnormally elevated levels of any or all lipids) and hyperglycemia (elevated blood sugar). The proposed XGBoost model significantly outperforms other ML techniques that have been proposed for early diagnosis of MS, including naïve Bayes methods, k-nearest neighbors, and support vector machines.

Collecting a sufficient number of high-quality data samples and features to train such a diagnostic model is quite challenging, because the data reside in different private locations. The training data can be split among these locations in different ways: horizontally, vertically, or both.

If the private data sources contain samples with the same feature set (as would be the case if, say, the same features are extracted from health records residing in different hospitals), the dataset is said to be horizontally split. The other extreme — vertically split data — occurs when a private data source contributes a new feature for all of the training samples. For example, a health-insurance company could supply reimbursement receipts for past medication (the new feature) to complement the features in clinical health records. In these scenarios, aggregating the training data on a central server would violate data-protection rules such as the GDPR.
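As a toy illustration of the two kinds of split (with hypothetical column names, not the actual features of the MS study), the following sketch shows horizontally split records held by two hospitals and a vertically split feature contributed by an insurer. In the privacy-preserving setting, the concatenation and join below are exactly the operations that must never be performed in the clear:

```python
import pandas as pd

# Horizontally split data: two hospitals hold different patients
# but extract the same features from their health records.
hospital_a = pd.DataFrame({"patient_id": [1, 2], "age": [34, 51], "adem": [0, 1]})
hospital_b = pd.DataFrame({"patient_id": [3, 4], "age": [29, 62], "adem": [0, 0]})
horizontally_split = pd.concat([hospital_a, hospital_b], ignore_index=True)   # more rows

# Vertically split data: an insurer contributes a brand-new feature
# (medication reimbursement counts) for the same patients.
insurer = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                        "reimbursement_count": [2, 5, 0, 1]})
vertically_split = horizontally_split.merge(insurer, on="patient_id")         # more columns

# In the clear, one would simply train XGBoost on `vertically_split`; the point
# of the privacy-preserving protocols below is that this merged table must
# never exist on any single server.
print(vertically_split)
```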
The figure below illustrates one possible CART in the trained model. The weights at the leaves might indicate the probabilities of MS resulting from the various paths from root to leaf.

![image.png](https://dev-media.amazoncloud.cn/42d5ffd33e254773a1a1364da0de3f93_image.png)

Research on privacy-preserving training of XGBoost models for prediction of MS uses two distinct techniques: secure multiparty computation (SMPC) and privacy-preserving federated learning (PPFL). We briefly describe both of them here.

An SMPC protocol enables several parties, each of whom holds a private input, to jointly evaluate a publicly known function on these inputs without revealing anything about the inputs except what is implied by the output of the function. Private inputs are secret shared among the parties, e.g., via additive secret sharing, in which the owner of a private input v generates random “shares” that add up to v.

For instance, suppose that Alice’s private input is v = 5. She can secret share it among herself, Bob, and Charlie by generating two random integers, S_Bob = 125621 and S_Charlie = 56872, sending Bob and Charlie their respective shares, and keeping S_Alice = v - S_Bob - S_Charlie = -182488. Unless an adversary controls all three parties, it cannot learn anything about Alice’s private input v.

In an execution of an SMPC protocol, the inputs to each elementary operation (addition or multiplication) are secret shared, and the output of the operation is a set of secret shares of the result. We say that a secret-shared value y (which may be the final output of the computation) is revealed to party P if all the parties send their shares to P, thus enabling P to reconstruct y. Further discussion of SMPC and its relevance to cloud computing can be found [here](https://www.amazon.science/academic-engagements/cryptographic-computing-can-accelerate-the-adoption-of-cloud-computing) and [in Inpher’s Secret Computing Explainer Series](https://inpher.io/learn/resources/).
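Here is a minimal Python sketch of additive secret sharing and reconstruction. It works modulo a fixed public modulus (a common convention that keeps shares uniformly distributed; the integer example above omits the modulus), and it also shows why adding two secret-shared values requires no communication at all:

```python
import secrets

MODULUS = 2**61 - 1  # public modulus; every share lives in [0, MODULUS)

def share(value, n_parties):
    """Split `value` into n_parties additive shares that sum to `value` mod MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def reveal(shares):
    """Reconstruct the secret; requires every party's share."""
    return sum(shares) % MODULUS

alice_shares = share(5, 3)            # one share each for Alice, Bob, and Charlie
assert reveal(alice_shares) == 5

# Adding two secret-shared values is purely local:
# each party adds the shares it already holds.
other_shares = share(12, 3)
sum_shares = [(a + b) % MODULUS for a, b in zip(alice_shares, other_shares)]
assert reveal(sum_shares) == 17       # 5 + 12, with no party ever seeing 5 or 12
```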
[A recent paper](https://eprint.iacr.org/2021/432.pdf) by researchers at Inpher proposes an SMPC protocol, called XORBoost, for privacy-preserving training of XGBoost models. It improves the state of the art by several orders of magnitude and ensures that:

- The CARTs computed by the protocol are secret shared among the training-data owners and revealed only to a designated party, namely the data analyst.
- The training algorithm not only protects the input data but also reveals no information about the paths in the CARTs taken by any of the training samples.
- XORBoost supports both numerical and categorical features, thus providing enough flexibility and generality to support the model described above.

XORBoost works well for training datasets of reasonable size — hundreds of thousands of samples and hundreds of features. However, many real-world applications require training on more than a million samples. To achieve that kind of scale, one can use [federated learning](https://en.wikipedia.org/wiki/Federated_learning) (FL), an ML technique that trains a model on data samples held locally by multiple, decentralized edge devices without requiring the devices to exchange the samples.

FL differs from XORBoost mainly in that FL does not perform the entire training exercise on secret-shared values. Rather, each device trains a local model on its local data samples and sends its local model to one or more servers for aggregation. The aggregation protocol typically uses simple operations such as sum, average, and oblivious comparisons, but no complex optimization.

If the server received the plaintext local-model updates from all of the devices, it could, in principle, recover the local training-data samples using [model-inversion attacks](https://proceedings.neurips.cc/paper/2020/file/c4ede56bbd98819ae6112b20ac6bf145-Paper.pdf). SMPC and other privacy-preserving computational techniques [can be applied](https://arxiv.org/pdf/1611.04482v1.pdf) to aggregate the local models without revealing them to the server. The diagram below shows the overall architecture.

![image.png](https://dev-media.amazoncloud.cn/1e9f1e11c0b54fc392453a8e8f544cc3_image.png)

#### **Prediction**

[PPXGBoost](https://www.amazon.science/publications/privacy-preserving-xgboost-inference) is a privacy-preserving version of XGBoost prediction. More precisely, it is a system that supports encrypted queries to encrypted XGBoost models. PPXGBoost is designed for applications that start by training a plaintext model Ω on a suitable training-data set and then create, for each user U, a personalized, encrypted version Ω_U of the model, to which U will submit encrypted queries and from which she will receive encrypted results.

![image.png](https://dev-media.amazoncloud.cn/118e5b56b49e41d9a12dc0d3a4d5a05a_image.png)

The PPXGBoost system architecture is shown in the figure above. On the client side, there is an app with which a user encrypts queries and decrypts results. On the server side, there are a module called Proxy, which runs in a trusted environment and is responsible for setup (i.e., creating, for each authorized user, a personalized, encrypted model and a set of cryptographic keys), and an ML module that executes the encrypted queries. PPXGBoost uses two specialized encryption schemes (symmetric-key, order-preserving encryption and public-key, additively homomorphic encryption) to encrypt models and evaluate encrypted queries. Each user is issued keys for both schemes during the setup phase.
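To give a flavor of why an order-preserving scheme is enough for routing a query through a CART, here is a toy Python sketch. The “encryption” below is just a secret, strictly increasing affine map, not a real or secure OPE scheme; the feature name and thresholds are invented for illustration; and the additively homomorphic aggregation of leaf scores used by PPXGBoost is omitted:

```python
# Toy stand-in for order-preserving encryption: a secret, strictly increasing
# affine map. NOT secure; it only illustrates that comparisons on ciphertexts
# can mirror comparisons on plaintexts.
SECRET_SLOPE, SECRET_OFFSET = 1_000, 271_828      # client-held secret key

def toy_ope_encrypt(x):
    return SECRET_SLOPE * x + SECRET_OFFSET

# One hypothetical CART whose threshold is encrypted at setup time.
# (Leaf scores are left in the clear here to keep the sketch short.)
encrypted_tree = {
    "feature": "hyperglycemia_level",             # hypothetical feature name
    "threshold": toy_ope_encrypt(6.5),
    "left": {"leaf": 0.12},                       # taken when feature <= threshold
    "right": {"leaf": 0.87},                      # taken when feature > threshold
}

def evaluate(node, encrypted_query):
    """Server-side traversal: only ciphertexts are ever compared."""
    if "leaf" in node:
        return node["leaf"]
    go_left = encrypted_query[node["feature"]] <= node["threshold"]
    return evaluate(node["left"] if go_left else node["right"], encrypted_query)

# The client encrypts its query; the server never sees the plaintext value 7.1.
encrypted_query = {"hyperglycemia_level": toy_ope_encrypt(7.1)}
print(evaluate(encrypted_tree, encrypted_query))  # 0.87
```

Because the map is strictly increasing, the server’s comparison on ciphertexts agrees with the comparison it would have made on plaintexts, which is all that tree traversal requires.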
Note that PPXGBoost is a natural choice for researchers, clinicians, and patients who wish to make disease predictions repeatedly as the patients’ circumstances change. Potentially relevant changes include exposure to new environmental factors, experimental treatment for another condition, or simply aging. An individual patient can create a personalized, encrypted version of a disease-prediction model and store it on a server owned by the medical center at which he is receiving treatment. Patient and physician can then use it to monitor, in a privacy-preserving manner, changes in the patient’s likelihood of contracting the disease.

#### **Conclusion**

We have described the use of PPML to address privacy challenges in XGBoost training and prediction. In a future post, we will elaborate on how privacy-preserving federated learning enables researchers to train more-complex ML models on millions of samples stored on hundreds of thousands of devices.

ABOUT THE AUTHORS

#### **[Dimitar Jetchev](https://www.amazon.science/author/dimitar-jetchev)**

Dimitar Jetchev is the cofounder and chief technology officer of Inpher.

#### **Joan Feigenbaum**

Joan Feigenbaum is an Amazon Scholar and the Grace Murray Hopper professor of computer science at Yale University.