Causal inference when treatments are continuous variables

{"value":"In scientific and business endeavors, we are often interested in the causal effect of a treatment — say, ++[changing the font](https://www.amazon.science/blog/machine-learning-tools-increase-power-of-hypothesis-testing)++ of a web page — on a response variable — say, how long visitors spend on the page. Often, the treatment is binary: the page is in one font or the other. But sometimes it’s continuous. For instance, a soft drink manufacturer might want to test a range of possibilities for adding lemon flavoring to a new drink.\n\nTypically, confounding factors exist that influence both the treatment and the response variable, and causal estimation has to account for them. While methods for handling confounders have been well studied when treatments are binary, causal inference with continuous treatments is far more challenging and largely understudied.\n\nAt this year’s International Conference on Machine Learning (++[ICML](https://www.amazon.science/conferences-and-events/icml-2022)++), my colleagues and I presented a paper proposing ++[a new way to estimate the effects](https://www.amazon.science/publications/end-to-end-balancing-for-causal-continuous-treatment-effect-estimation)++ of continuously varied treatment, one that uses an end-to-end machine learning model in combination with the concepts of propensity score weighting and entropy balancing.\n\nWe compare our method to four predecessors — including conventional entropy balancing — on two different synthetic datasets, one in which the relationship between the intervention and the response variable is linear, and one in which it is nonlinear. On the linear dataset, our method reduces the root-mean-square error by 27% versus the best-performing predecessor, while on the nonlinear dataset, the improvement is 38%.\n\n#### **Propensity scores**\n\n![下载 2.jpg](https://dev-media.amazoncloud.cn/d5c46b116fe843369dee6556ad118ac4_%E4%B8%8B%E8%BD%BD%20%282%29.jpg)\n\nIn this causal graph, x is a confounder that exerts a causal influence on both a and y.\n\nContinuous treatments make causal inference more difficult primarily because they induce uncountably many potential outcomes per unit (e.g., per subject), only one of which is observed for each unit and across units. For example, there is an infinite number of lemon-flavoring volumes between one milliliter and two, with a corresponding infinity of possible customer preferences. In the continuous-treatment setting, a causal-inference model maps a continuous input to a continuous output, or response curve.\n\nIf two variables are influenced by a third — a confounder — it can be difficult to determine the causal relationship between them. Consider a simple causal graph, involving a treatment, a, a response variable, y, and a confounder, x, which influences both a and y.\n\nIn the context of continuous treatments, the standard way to account for confounders is through propensity score weighting. Essentially, propensity score weighting discounts the effect of one variable on another if they are both influenced by a confounder.\n\nIn our example graph, for instance, we would weight the edge between a and y according to the inverse probability of agiven x. That is, the greater the likelihood of a given x, the less influence we take a to have on y.\n\nHowever, propensity scores can be very large for some units, causing data imbalances and leading to unstable estimation and uncertain inference. 
Entropy balancing is a way to rectify this problem: it selects weights so as to minimize the differences among them — that is, to maximize their entropy — while still satisfying the balance requirements that the weights are meant to enforce.
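The following is a minimal sketch of one common entropy-balancing formulation for a continuous treatment, not the formulation used in the paper: find weights as close to uniform as possible (maximum entropy) subject to constraints that preserve the means of the treatment and the confounder and drive their weighted covariance to zero. The data, constraint set, and the use of SciPy's SLSQP solver are all illustrative assumptions.

```python
# Minimal sketch of entropy balancing for a continuous treatment (illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
a = 0.8 * x + rng.normal(size=n)   # treatment correlated with the confounder

# Centered variables; the balance constraints below are expressed through them.
xc, ac = x - x.mean(), a - a.mean()

def neg_entropy(w):
    # KL divergence from the uniform weights 1/n (minimized = maximum entropy).
    return np.sum(w * np.log(w * n))

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},   # weights sum to one
    {"type": "eq", "fun": lambda w: w @ xc},          # weighted mean of x unchanged
    {"type": "eq", "fun": lambda w: w @ ac},          # weighted mean of a unchanged
    {"type": "eq", "fun": lambda w: w @ (xc * ac)},   # weighted cov(a, x) = 0
]

res = minimize(neg_entropy, x0=np.full(n, 1.0 / n),
               bounds=[(1e-10, 1.0)] * n, constraints=constraints,
               method="SLSQP")
w = res.x
print("unweighted cov(a, x):          ", np.mean(xc * ac))
print("weighted cov(a, x) (balanced): ", w @ (xc * ac))   # approximately 0
```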
#### **End-to-end balancing**

Our new algorithm is based on entropy balancing but learns the weights so as to directly maximize causal-inference accuracy through end-to-end optimization. We call it end-to-end balancing, or E2B.

The figure below illustrates our approach. The variables {x_i, a_i} are pairs of confounders and treatments in the dataset, and l_θ is a neural network that learns to generate a set of entropy-balanced weights, {w_i}, given a confounder-treatment pair. The function µ-bar (µ with a line over it) is a randomly selected response function — that is, a function that computes a value for the response variable (ȳ) given a treatment (a).

The triplets {x_i, a_i, ȳ_i} thus constitute a synthetic dataset: real x’s and a’s but synthetically generated y’s. During training, the neural network learns to produce entropy-balancing weights that re-create the known response function µ-bar. Then, once the network is trained, we apply it to the true dataset — with the real y’s — to estimate the true response function, µ-hat.

![Framework for end-to-end balancing](https://dev-media.amazoncloud.cn/3d9d5b89e9324a3bafb421ebfe38c3a9_%E4%B8%8B%E8%BD%BD%20%283%29.jpg)

Framework for end-to-end balancing.
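The sketch below is a highly simplified, hypothetical rendering of this training scheme, not the paper's implementation. A small PyTorch network maps each (x, a) pair to a weight; a quadratic function with random coefficients stands in for µ-bar; and the loss measures how well a weighted regression on the synthetic responses recovers µ-bar's known coefficients. The architecture, loss, and synthetic-response model are all assumptions made for illustration.

```python
# Simplified sketch of the end-to-end balancing idea (illustrative assumptions only).
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 2000
x = torch.randn(n, 1)
a = 0.8 * x + torch.randn(n, 1)          # confounded continuous treatment

# Randomly chosen synthetic response function standing in for mu-bar.
coef = torch.randn(2)
def mu_bar(a):
    return coef[0] * a + coef[1] * a ** 2

# Synthetic responses, confounded by x so the weights have something to correct.
y_syn = mu_bar(a) + 3.0 * x + 0.1 * torch.randn(n, 1)

weight_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(weight_net.parameters(), lr=1e-2)

for step in range(500):
    # The network maps each (x, a) pair to a weight; softmax keeps them positive
    # and summing to one.
    w = torch.softmax(weight_net(torch.cat([x, a], dim=1)).squeeze(), dim=0)
    # Weighted least squares of the synthetic responses on (a, a^2), solved via
    # the normal equations so gradients flow back through the weights.
    A = torch.cat([a, a ** 2], dim=1)
    Aw = A * w.unsqueeze(1)
    beta = torch.linalg.solve(A.T @ Aw, Aw.T @ y_syn)
    # Train the weights so the weighted fit recovers mu-bar's known coefficients.
    loss = ((beta.squeeze() - coef) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Once trained, weight_net would be applied to the real (x, a, y) data and the
# same weighted fit would be used to estimate the true response curve, mu-hat.
```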
In our paper, we provide a theoretical analysis demonstrating the consistency of our approach. We also study the impact of mis-specification in the synthetic-data-generation process. That is, we show that even the initial selection of a highly inaccurate random response function — µ-bar — does not prevent the model from converging on a good estimate of the real response function, µ-hat.

ABOUT THE AUTHOR

#### **[Mohammad Taha Bahadori](https://www.amazon.science/author/mohammed-taha-bahadori)**

Taha Bahadori is a senior machine learning scientist in the Amazon Devices organization.