Bringing the power of deep learning to data in tables

{"value":"In recent years, deep neural networks have been responsible for most top-performing AI systems. In particular, natural-language processing (NLP) applications are generally built atop Transformer-based language models such as BERT.\n\nOne exception to the deep-learning revolution has been applications that rely on data stored in tables, where machine learning approaches based on ++[decision trees](https://www.amazon.science/tag/gradient-boosted-decision-trees)++ have tended to work better.\n\nAt Amazon Web Services, we have been working to extend Transformers from NLP to table data with TabTransformer, a novel, deep, tabular, data-modeling architecture for supervised and semi-supervised learning.\n\nStarting today, TabTransformer is available through [Amazon SageMaker JumpStart](https://aws.amazon.com/cn/sagemaker/jumpstart/?trk=cndc-detail), where it can be used for both classification and regression tasks. TabTransformer can be accessed through the ++[SageMaker JumpStart UI](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html)++ inside of SageMaker Studio or through Python code using ++[SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/overview.html#use-sagemaker-jumpstart-algorithms-with-pretrained-models)++. To get started with TabTransformer on SageMaker JumpStart, please refer to the ++[program documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html)++.\n\nWe are also thrilled to see that TabTransformer has gained attention from people across industries: it has been ++[incorporated into the official repository of Keras](https://keras.io/examples/structured_data/tabtransformer/)++, a popular open-source software library for working with deep neural networks, and it has featured in posts on ++[Towards Data Science](https://towardsdatascience.com/pytorch-widedeep-deep-learning-for-tabular-data-9cd1c48eb40d)++ and ++[Medium](https://blog.ml6.eu/transformers-for-tabular-data-hot-or-not-e3000df3ed46)++. We also ++[presented a paper](https://weasul.github.io/papers/7.pdf)++ on the work at the ++[ICLR 2021](https://www.amazon.science/conferences-and-events/iclr-2021)++ Workshop on Weakly Supervised Learning.\n\n#### **The TabTransformer solution**\n\nTabTransformer uses Transformers to generate robust data representations — embeddings — for categorical variables, or variables that take on a finite set of discrete values, such as months of the year. Continuous variables (such as numerical values) are processed in a parallel stream.\n\nWe exploit a successful methodology from NLP in which a model is pretrained on unlabeled data, to learn a general embedding scheme, then fine-tuned on labeled data, to learn a particular task. We find that this approach increases the accuracy of TabTransformer, too.\n\nIn experiments on 15 publicly available datasets, we show that TabTransformer outperforms the state-of-the-art deep-learning methods for tabular data by at least 1. 0% on mean AUC, the area under the receiver-operating curve that plots false-positive rate against false-negative rate. We also show that it matches the performance of tree-based ensemble models.\n\nIn the semi-supervised setting, when labeled data is scarce, DNNs generally outperform decision-tree-based models, because they are better able to take advantage of unlabeled data. In our semi-supervised experiments, all of the DNNs outperformed decision trees, but with our novel unsupervised pre-training procedure, TabTransformer demonstrated an average 2. 
We are also thrilled to see that TabTransformer has gained attention from people across industries: it has been [incorporated into the official repository of Keras](https://keras.io/examples/structured_data/tabtransformer/), a popular open-source software library for working with deep neural networks, and it has been featured in posts on [Towards Data Science](https://towardsdatascience.com/pytorch-widedeep-deep-learning-for-tabular-data-9cd1c48eb40d) and [Medium](https://blog.ml6.eu/transformers-for-tabular-data-hot-or-not-e3000df3ed46). We also [presented a paper](https://weasul.github.io/papers/7.pdf) on the work at the [ICLR 2021](https://www.amazon.science/conferences-and-events/iclr-2021) Workshop on Weakly Supervised Learning.

#### **The TabTransformer solution**

TabTransformer uses Transformers to generate robust data representations — embeddings — for categorical variables, or variables that take on a finite set of discrete values, such as months of the year. Continuous variables (such as numerical values) are processed in a parallel stream.

We exploit a successful methodology from NLP in which a model is pretrained on unlabeled data to learn a general embedding scheme and then fine-tuned on labeled data to learn a particular task. We find that this approach increases the accuracy of TabTransformer, too.

In experiments on 15 publicly available datasets, we show that TabTransformer outperforms the state-of-the-art deep-learning methods for tabular data by at least 1.0% on mean AUC, the area under the receiver operating characteristic curve, which plots true-positive rate against false-positive rate. We also show that it matches the performance of tree-based ensemble models.

In the semi-supervised setting, when labeled data is scarce, DNNs generally outperform decision-tree-based models, because they are better able to take advantage of unlabeled data. In our semi-supervised experiments, all of the DNNs outperformed decision trees, but with our novel unsupervised pre-training procedure, TabTransformer demonstrated an average 2.1% AUC lift over the strongest DNN benchmark.

Finally, we also demonstrate that the contextual embeddings learned from TabTransformer are highly robust against both missing and noisy data features and provide better interpretability.

#### **Tabular data**

To get a sense of the problem our method addresses, consider a table where the rows represent different samples and the columns represent both sample features (predictor variables) and the sample label (the target variable). TabTransformer takes the features of each sample as input and generates an output to best approximate the corresponding label.

In a practical industry setting, where labels are only partially available (i.e., semi-supervised learning scenarios), TabTransformer can be pre-trained on all the samples without any labels and fine-tuned on the labeled samples.

Additionally, companies usually have one large table (e.g., describing customers or products) that contains multiple target variables, and they are interested in analyzing this data in multiple ways. TabTransformer can be pre-trained on the large number of unlabeled samples once and fine-tuned multiple times for multiple target variables.

The architecture of TabTransformer is shown below. In our experiments, we use standard feature-engineering techniques to transform data types such as text, zip codes, and IP addresses into either numeric or categorical features.

![The architecture of TabTransformer](https://dev-media.amazoncloud.cn/7a411a95941e4a578dfe208facac0510_%E4%B8%8B%E8%BD%BD.jpg)

The architecture of TabTransformer.
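For readers who prefer code to diagrams, the following is a simplified PyTorch sketch of the idea: each categorical column gets its own embedding, the column embeddings are contextualized by a Transformer encoder, and the result is concatenated with layer-normalized continuous features before a final MLP. It illustrates the shape of the architecture, not the reference implementation; the layer sizes and names are arbitrary.

```python
# A simplified TabTransformer-style model (illustrative only; hyperparameters
# and layer sizes are arbitrary, not the reference implementation).
import torch
import torch.nn as nn

class TabTransformerSketch(nn.Module):
    def __init__(self, cardinalities, n_continuous, d_model=32, n_heads=8, n_layers=6, n_classes=2):
        super().__init__()
        # One embedding table per categorical column.
        self.embeddings = nn.ModuleList(
            [nn.Embedding(card, d_model) for card in cardinalities]
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.cont_norm = nn.LayerNorm(n_continuous)  # continuous features bypass the Transformer
        mlp_in = d_model * len(cardinalities) + n_continuous
        self.mlp = nn.Sequential(
            nn.Linear(mlp_in, 4 * mlp_in), nn.ReLU(), nn.Linear(4 * mlp_in, n_classes)
        )

    def forward(self, x_cat, x_cont):
        # x_cat: (batch, n_categorical) integer codes; x_cont: (batch, n_continuous) floats.
        col_embs = torch.stack(
            [emb(x_cat[:, i]) for i, emb in enumerate(self.embeddings)], dim=1
        )                                        # (batch, n_categorical, d_model)
        contextual = self.transformer(col_embs)  # contextual embeddings of the categorical columns
        flat = contextual.flatten(start_dim=1)   # (batch, n_categorical * d_model)
        return self.mlp(torch.cat([flat, self.cont_norm(x_cont)], dim=1))

# Example: 3 categorical columns with 12, 7, and 4 levels, plus 5 continuous features.
model = TabTransformerSketch(cardinalities=[12, 7, 4], n_continuous=5)
logits = model(torch.randint(0, 4, (8, 3)), torch.randn(8, 5))
```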
#### **Pretraining procedures**

We explore two different types of pre-training procedures: masked language modeling (MLM) and replaced-token detection (RTD). In MLM, for each sample, we randomly select a certain portion of features to be masked and use the embeddings of the other features to reconstruct the masked features. In RTD, for each sample, instead of masking features, we replace them with random values chosen from the same columns.
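To make the RTD objective concrete, here is a hypothetical corruption step for tabular data: a random subset of each column's values is swapped for values drawn from the same column, and per-cell labels record which values were actually replaced, so a binary head can then be trained to detect the replacements. The column names and corruption rate below are made up for the example.

```python
# Hypothetical RTD-style corruption for tabular pretraining: replace a random
# fraction of cells with values sampled from the same column, and keep a per-cell
# label marking which values were actually changed.
import numpy as np
import pandas as pd

def rtd_corrupt(df: pd.DataFrame, replace_prob: float = 0.3, seed: int = 0):
    rng = np.random.default_rng(seed)
    corrupted = df.copy()
    replaced = pd.DataFrame(False, index=df.index, columns=df.columns)
    for col in df.columns:
        mask = rng.random(len(df)) < replace_prob                        # cells to corrupt in this column
        random_values = rng.choice(df[col].to_numpy(), size=int(mask.sum()))  # draws from the same column
        corrupted.loc[mask, col] = random_values
        # A swap that happens to land on the original value does not count as replaced.
        replaced[col] = mask & (corrupted[col].to_numpy() != df[col].to_numpy())
    return corrupted, replaced

# Toy example with made-up columns.
df = pd.DataFrame({"job": ["admin", "student", "retired", "admin"],
                   "education": ["primary", "tertiary", "secondary", "tertiary"]})
x_corrupted, y_replaced = rtd_corrupt(df)
```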
In addition to comparing TabTransformer to baseline models, we conducted a study to demonstrate the interpretability of the embeddings produced by our contextual-embedding component.

In that study, we took contextual embeddings from different layers of the Transformer and computed a t-distributed stochastic neighbor embedding (t-SNE) to visualize their similarity in function space. More precisely, after training TabTransformer, we pass the categorical features in the test data through our trained model and extract all contextual embeddings (across all columns) from a certain layer of the Transformer. The t-SNE algorithm is then used to reduce each embedding to a 2-D point in the t-SNE plot.
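A minimal sketch of that visualization step is shown below, assuming you already have a matrix of contextual embeddings (one row per categorical value) and a parallel list of human-readable labels; the file names and plot settings are placeholders, not the authors' plotting code.

```python
# Reduce contextual embeddings to 2-D with t-SNE and plot them (generic sketch).
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

embeddings = np.load("contextual_embeddings.npy")          # shape (n_values, d_model); placeholder file
labels = np.load("value_labels.npy", allow_pickle=True)    # e.g., "job=student", "month=may"; placeholder

points = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(embeddings)

plt.figure(figsize=(8, 8))
plt.scatter(points[:, 0], points[:, 1], s=5)
for (x, y), name in zip(points, labels):
    plt.annotate(name, (x, y), fontsize=6)   # label each categorical value in the 2-D map
plt.title("t-SNE of TabTransformer contextual embeddings")
plt.show()
```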
![t-SNE plots of learned embeddings for categorical features in the BankMarketing dataset](https://dev-media.amazoncloud.cn/c28f131a7a164d2e8dae453bf804dd35_%E4%B8%8B%E8%BD%BD%20%281%29.jpg)

t-SNE plots of learned embeddings for categorical features in the BankMarketing dataset. **Left**: The embeddings generated from the last layer of the Transformer. **Center**: The embeddings before being passed into the Transformer. **Right**: The embeddings learned by the model without the Transformer layers.

The figure above shows the 2-D visualization of embeddings from the last layer of the Transformer for the BankMarketing dataset. We can see that semantically similar classes are close to each other and form clusters (annotated by a set of labels) in the embedding space.

For example, all of the client-based features (colored markers), such as job, education level, and marital status, stay close to the center, while non-client-based features (gray markers), such as month (last contact month of the year) and day (last contact day of the week), lie outside the central area. In the bottom cluster, the embedding of having a housing loan stays close to that of having defaulted, while the embeddings of being a student, single marital status, not having a housing loan, and tertiary education level are close to each other.

The center figure is the t-SNE plot of the embeddings before they are passed through the Transformer (i.e., from layer 0). The right figure is the t-SNE plot of the embeddings the model produces when the Transformer layers are removed, converting it into an ordinary multilayer perceptron (MLP). In those plots, we do not observe the types of patterns seen in the left plot.

Finally, we conduct extensive experiments on 15 publicly available datasets, using both supervised and semi-supervised learning. In the supervised-learning experiment, TabTransformer matched the performance of the state-of-the-art gradient-boosted decision-tree (GBDT) model and significantly outperformed the prior DNN models TabNet and Deep VIB.

![Model performance with supervised learning](https://dev-media.amazoncloud.cn/bd8d9e403a524f26a92d8e22b97dcfa7_0f1434a516a5afa173a6a760db8d074.png)

Model performance with supervised learning. The evaluation metric is the mean ± standard deviation of the AUC score over the 15 datasets for each model. The larger the number, the better the result. The top two numbers are in bold.

In the semi-supervised-learning experiment, we pretrain two TabTransformer models on the entire unlabeled set of training data, using the MLM and RTD methods, respectively; then we fine-tune both models on labeled data.

As baselines, we use the semi-supervised-learning methods pseudo-labeling and entropy regularization to train both a TabTransformer network and an ordinary MLP. We also train a gradient-boosted-decision-tree model using pseudo-labeling and an MLP using a pretraining method called the swap-noise denoising autoencoder.
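Pseudo-labeling itself is simple enough to sketch: train on the labeled rows, predict on the unlabeled rows, keep only the confident predictions as extra labels, and retrain. The snippet below is a generic, illustrative version using a scikit-learn gradient-boosting classifier and made-up data, not the exact baseline configuration from our experiments.

```python
# A generic pseudo-labeling loop (illustrative; not the exact baseline setup).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def pseudo_label_fit(x_labeled, y_labeled, x_unlabeled, threshold=0.9, rounds=3):
    model = GradientBoostingClassifier()
    x_train, y_train = x_labeled.copy(), y_labeled.copy()
    for _ in range(rounds):
        model.fit(x_train, y_train)
        proba = model.predict_proba(x_unlabeled)
        confident = proba.max(axis=1) >= threshold          # keep only confident predictions
        if not confident.any():
            break
        # Retrain on the labeled rows plus the confidently pseudo-labeled rows.
        x_train = np.vstack([x_labeled, x_unlabeled[confident]])
        y_train = np.concatenate([y_labeled, proba[confident].argmax(axis=1)])
    return model

# Toy usage with random placeholder arrays.
rng = np.random.default_rng(0)
model = pseudo_label_fit(rng.normal(size=(50, 4)), rng.integers(0, 2, 50), rng.normal(size=(500, 4)))
```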
![Semi-supervised-learning results on six larger datasets](https://dev-media.amazoncloud.cn/c72215acb3e94f75aef4e61a9cc87206_e92b6e9bc3cf11cfba0066ac6442e05.png)

Semi-supervised-learning results on six datasets, each with more than 30,000 unlabeled data points, and different numbers of labeled data points. The evaluation metric is mean AUC in percentage.

![Semi-supervised-learning results on nine smaller datasets](https://dev-media.amazoncloud.cn/b5ef0b9313464f84b7f7d3f1c1a34930_471b180813745fded2a73758587dbee.jpg)

Semi-supervised-learning results on nine datasets, each with fewer than 30,000 data points, and different numbers of labeled data points. The evaluation metric is mean AUC in percentage.

To gauge relative performance with different amounts of unlabeled data, we split the set of 15 datasets into two subsets. The first set consists of the six datasets containing more than 30,000 data points. The second set includes the remaining nine datasets.

When the amount of unlabeled data is large, TabTransformer-RTD and TabTransformer-MLM significantly outperform all the other competitors. In particular, the TabTransformer-RTD/MLM improvements are at least 1.2%, 2.0%, and 2.1% on mean AUC for the scenarios of 50, 200, and 500 labeled data points, respectively. When the amount of unlabeled data is smaller, as shown in the second table above, TabTransformer-RTD still outperforms most of its competitors, but with a marginal improvement.
Acknowledgments: [Ashish Khetan](https://www.amazon.science/author/ashish-khetan), Milan Cvitkovic, [Zohar Karnin](https://www.amazon.science/author/zohar-karnin)

ABOUT THE AUTHOR

#### **[Xin Huang](https://www.amazon.science/author/xin-huang)**

Xin Huang is an applied scientist with Amazon Web Services.