Amazon releases code to help reduce bias in machine learning models

{"value":"In recent years, algorithmic bias has become an important research topic in machine learning. Sometimes, because of imbalances in training data or other factors, machine learning models will yield different results for different populations of users, when we want them to treat all populations the same.\n\nAt this year’s AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES), with our colleagues we are presenting a [paper](https://www.amazon.science/publications/fair-bayesian-optimization) demonstrating how to help mitigate bias simply by tuning a model’s hyperparameters.\n\n![image.png](https://dev-media.amazoncloud.cn/7b05f415550a4456ab008241927fdd4e_image.png)\n\nThis graph illustrates the intuition behind our work by plotting the accuracy against unfairness for gradient-boosted tree ensembles (XGBoost), random forests (RF), and fully connected feed-forward neural networks (NN) with different hyperparameter settings.\n\nOur method is a variation on Bayesian optimization (BO), which is a technique for sampling input-output pairs to efficiently estimate an unknown function. Our method, which we call fair Bayesian optimization, simultaneously models two functions, one that correlates hyperparameters with model accuracy and one that correlates them with a fairness measure. The approach is agnostic as to choice of fairness measure.\n\nAmazon Science [wrote about our approach](https://www.amazon.science/blog/amazon-researchers-win-best-paper-award-for-helping-promote-fairness-in-ai) when an earlier version of our paper won a best-paper award at an ICML workshop. But with the AIES paper, we have released our code using Amazon’s AutoML framework [AutoGluon](https://auto.gluon.ai/stable/index.html). In this post, we’d like to demonstrate how to apply constrained BO to mitigate unfairness while optimizing the accuracy of a machine learning model, using our code.\n\n#### **How to use fair BO**\n\nAs a running example, we are going to use the German Credit Data dataset from the UCI Machine Learning Repository. The dataset is annotated for a binary classification task, predicting whether a person is a “good” or “bad” credit risk. The dataset is unbalanced, with more than twice as many positive examples as negative ones. The unbalance is ever higher if we focus our attention to two subgroups: foreign and local workers.\n\nFirst, we need to choose a base model, whose hyperparameters we will tune. In this example, we select a random forest and tune three hyperparameters: min_samples_split, max_depth, and criterion. \n\nWe also need to select a fairness measure. We use the notion of statistical parity, which holds that the probability of a positive classification should be the same across subgroups. More precisely, we use difference in statistical parity (DSP), which requires that for two subgroups, A and B, the difference between their probabilities of positive classification should fall below some threshold, ϵ.\n\n![image.png](https://dev-media.amazoncloud.cn/edae794a61eb44fc9739882875dc59b4_image.png)\n\nWe next create the black box to optimize and set a fairness constraint on the DSP between local and foreign workers, with a value of ϵ equal to 0.01. 
We are now ready to create the scheduler and searcher and run a hyperparameter-tuning experiment through constrained Bayesian optimization:

![image.png](https://dev-media.amazoncloud.cn/5af05ba9d44946a7b71677dbf87384cf_image.png)
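The screenshot shows the AutoGluon scheduler and searcher used in the experiment; the exact calls are in the tutorial linked at the end of this post. To make the search loop concrete without reproducing that API, here is a deliberately naive stand-in: it samples 50 candidate configurations at random, evaluates them with the black box sketched above, and keeps the most accurate one that satisfies DSP ≤ 0.01. Constrained BO replaces the random sampling with a model-based choice of the next configuration, using one surrogate for accuracy and one for the constraint.

```python
import random

random.seed(0)
EPSILON = 0.01        # fairness threshold on DSP
NUM_ITERATIONS = 50   # same budget as the comparison below

# Hyperparameter ranges chosen for illustration only.
def sample_config():
    return {
        "min_samples_split": random.randint(2, 20),
        "max_depth": random.randint(2, 15),
        "criterion": random.choice(["gini", "entropy"]),
    }

best_config, best_accuracy = None, float("-inf")
for _ in range(NUM_ITERATIONS):
    config = sample_config()
    accuracy, dsp = black_box(**config)   # black box from the sketch above
    if dsp <= EPSILON and accuracy > best_accuracy:
        best_config, best_accuracy = config, accuracy

print("Best feasible configuration:", best_config, "accuracy:", best_accuracy)
```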
Let’s compare the models obtained by using standard BO and constrained BO (CBO) after 50 iterations:

![image.png](https://dev-media.amazoncloud.cn/818183eac9fa4e7aa84530ec89766d58_image.png)

In the plots above, the horizontal line is the fairness constraint, set to DSP ≤ 0.01, and darker dots correspond to later BO iterations. Standard BO (left) can get stuck in high-performing yet unfair regions, failing to return a well-performing, feasible solution. Our CBO approach (right) is able to focus exploration on the fair region of the hyperparameter space and finds a more accurate, fair solution.

Feel free to check out our code in [AutoGluon](https://auto.gluon.ai/). We have also published [a full tutorial](https://auto.gluon.ai/dev/tutorials/course/fairbo.html) on using our code.

ABOUT THE AUTHORS

#### **Valerio Perrone**

Valerio Perrone is a machine learning scientist with Amazon Web Services.

#### **Michele Donini**

Michele Donini is a senior applied scientist with Amazon Web Services.