{"value":"[Time series forecasting](https://www.amazon.science/tag/time-series) and **graph** representations of data are both major topics of research at Amazon: time series forecasting is crucial to both supply chain optimization and product recommendation, and graph representations help make sense of the large datasets that are common at Amazon’s scale, such as the Amazon product catalogue.\n\nSo it’s no surprise that both topics are well represented among the Amazon papers at the 2022 International Conference on Learning Representations ([ICLR](https://www.amazon.science/conferences-and-events/iclr-2022)), which takes place this week. Another paper also touches on one of Amazon’s core scientific interests, [natural-language processing](https://www.amazon.science/tag/nlp), or computation involving free-form text inputs.\n\nThe remaining Amazon papers discuss more general machine learning techniques, such as **data augmentation**, or automatically selecting or generating training examples that can improve the performance of machine learning models. Another paper looks at **dataset optimization** more generally, proposing a technique that could be used to evaluate individual examples for inclusion in a dataset or exclusion from it. And two papers from Amazon Web Services’ [Causal-Representation Learning](https://www.amazon.science/tag/causal-inference) team, which includes Amazon vice president and distinguished scientist Bernhard Schölkopf, examine the limitations of existing approaches to machine learning.\n\n\n#### **Graphs**\n\n\nGraphs represent data as nodes, usually depicted as circles, and edges, usually depicted as line segments connecting nodes. Graph-structured data can make machine learning more efficient, because the graph explicitly encodes relationships that a machine learning model would otherwise have to infer from data correlations.\n\n[Graph neural networks](https://www.amazon.science/tag/graph-neural-networks) (GNNs) are a powerful tool for working with graph-structured data. Like most neural networks, GNNs produce embeddings, or fixed-length vector representations of input data, that are useful for particular computational tasks. In the case of GNNs, the embeddings capture information about both the object associated with a given node and the structure of the graph.\n\nIn real-world applications — say, a graph indicating which products tend to be purchased together — some nodes may not be connected to any others, and some connections may be spurious inferences from sparse data. In “[Cold Brew: Distilling graph node representations with incomplete or missing neighborhoods](https://www.amazon.science/publications/cold-brew-distilling-graph-node-representations-with-incomplete-or-missing-neighborhoods)”, Amazon scientists present a method for handling nodes whose edge data is absent or erroneous.\n\n![下载.jpg](https://dev-media.amazoncloud.cn/bf29e49c008b4ce4b054da01dc48d499_%E4%B8%8B%E8%BD%BD.jpg)\n\nCold Brew addresses the real-world problem in which graph representations of data feature potentially spurious connections (tail nodes) or absent connections (cold start). 
Figure from \"[Cold Brew: Distilling graph node representations with incomplete or missing neighborhoods](https://www.amazon.science/publications/cold-brew-distilling-graph-node-representations-with-incomplete-or-missing-neighborhoods)\".\n\nIn a variation on [knowledge distillation](https://www.amazon.science/tag/knowledge-distillation), they use a conventional GNN, which requires that each input node be connected to the rest of the graph, to train a teacher network that can produce embeddings for connected nodes. Then they train a standard multilayer perceptron — a student network — to mimic the teacher’s outputs. Unlike a conventional GNN, the student network doesn’t explicitly use structural data to produce embeddings, so it can also handle unconnected nodes. The method demonstrates significant improvements over existing methods of inferring graph structure on several benchmark datasets.\n\nAcross disciplines, AI research has recently seen a surge in the popularity of [self-supervised learning](https://www.amazon.science/tag/self-supervised-learning), in which a machine learning model is first trained on a “proxy task”, which is related to but not identical to the target task, using unlabeled or automatically labeled data. Then the model is fine-tuned on labeled data for the target task.\n\nWith GNNs, the proxy tasks generally teach the network only how to represent node data. But in “[Node feature extraction by self-supervised multi-scale neighborhood prediction](https://www.amazon.science/publications/node-feature-extraction-by-self-supervised-multi-scale-neighborhood-prediction)”, Amazon researchers and their colleagues at the University of Illinois and UCLA present a proxy task that teaches the GNN how to represent information about graph structure as well. Their approach is highly scalable, working with graphs with hundreds of millions of nodes, and in experiments, they show that it improves GNN performance on three benchmark datasets, by almost 30% on one of them.\n\n![下载.jpg](https://dev-media.amazoncloud.cn/9de6d5a7202444d38865e108370fb712_%E4%B8%8B%E8%BD%BD.jpg)\n\nXR-Transformer creates a hierarchical tree that sorts data into finer- and finer-grained clusters. In the context of graph neural networks, the clusters represent graph neighborhoods. Figure from \"[Node feature extraction by self-supervised multi-scale neighborhood prediction](https://www.amazon.science/publications/node-feature-extraction-by-self-supervised-multi-scale-neighborhood-prediction)\".\n\nThe approach, which builds on Amazon’s [XR-Transformer model](https://www.amazon.science/blog/neurips-2021-amazon-pushes-the-boundaries-of-extreme-multilabel-classification) and is known as GIANT-XRT, has already been widely adopted and is used by the leading teams in several of the public Open Graph Benchmark competitions hosted by Stanford University ([leaderboard 1](https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-products) | [leaderboard 2](https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-papers100M) | [leaderboard 3](https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv)).\n\nA third paper, “[Graph-relational domain adaptation](https://www.amazon.science/publications/graph-relational-domain-adaptation)”, applies graphs to the problem of domain adaptation, or optimizing a machine learning model to work on data with a different distribution than the data it was trained on. 
![XR-Transformer's hierarchical clustering tree](https://dev-media.amazoncloud.cn/9de6d5a7202444d38865e108370fb712_%E4%B8%8B%E8%BD%BD.jpg)

XR-Transformer creates a hierarchical tree that sorts data into finer- and finer-grained clusters. In the context of graph neural networks, the clusters represent graph neighborhoods. Figure from “[Node feature extraction by self-supervised multi-scale neighborhood prediction](https://www.amazon.science/publications/node-feature-extraction-by-self-supervised-multi-scale-neighborhood-prediction)”.

The approach, which builds on Amazon’s [XR-Transformer model](https://www.amazon.science/blog/neurips-2021-amazon-pushes-the-boundaries-of-extreme-multilabel-classification) and is known as GIANT-XRT, has already been widely adopted and is used by the leading teams in several of the public Open Graph Benchmark competitions hosted by Stanford University ([leaderboard 1](https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-products) | [leaderboard 2](https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-papers100M) | [leaderboard 3](https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv)).

A third paper, “[Graph-relational domain adaptation](https://www.amazon.science/publications/graph-relational-domain-adaptation)”, applies graphs to the problem of domain adaptation, or optimizing a machine learning model to work on data with a different distribution than the data it was trained on. Conventional domain adaptation techniques treat all target domains the same, but the Amazon researchers and their colleagues at Rutgers and MIT instead use graphs to represent relationships among all source and target domains. For instance, weather patterns in adjacent U.S. states tend to be more similar than the weather patterns in states distant from each other. In experiments, the researchers show that their method improves on existing domain adaptation methods on both synthetic and real-world datasets.

![Graph-relational domain adaptation](https://dev-media.amazoncloud.cn/f202d34d53b64e28960f23b2f34d9ba9_%E4%B8%8B%E8%BD%BD.jpg)

Where traditional domain adaptation (left) treats all target domains the same, a new method (right) uses graphs to represent relationships between source and target domains. For instance, weather patterns in adjacent U.S. states tend to be more similar than the weather patterns in states distant from each other. Figure from “[Graph-relational domain adaptation](https://www.amazon.science/publications/graph-relational-domain-adaptation)”.

#### **Time series**

Time series forecasting is essential to demand prediction, which Amazon uses to manage inventory, and it’s also useful for recommendation, which can be interpreted as continuing a sequence of product (say, music or movie) selections.

In “[Bridging recommendation and marketing via recurrent intensity modeling](https://www.amazon.science/publications/bridging-recommendation-and-marketing-via-recurrent-intensity-modeling)”, Amazon scientists adapt existing mechanisms for making personal recommendations on the basis of time series data (purchase histories) to the problem of identifying the target audience for a new product.

![Recommendation as time series forecasting](https://dev-media.amazoncloud.cn/b32610f060b446fdb064b18a6d51c43a_%E4%B8%8B%E8%BD%BD.jpg)

Product recommendation can be interpreted as a time-series-forecasting problem, in which a product is recommended according to its likelihood of continuing a sequence of purchases. Figure from “[Bridging recommendation and marketing via recurrent intensity modeling](https://www.amazon.science/publications/bridging-recommendation-and-marketing-via-recurrent-intensity-modeling)”.

Where methods for identifying a product’s potential customers tend to treat customers as atemporal collections of purchase decisions, the Amazon researchers instead frame the problem as optimizing both the product’s relevance to the customer and the customer’s activity level, or likelihood of buying any product in a given time span. In experiments, this improved the accuracy of a prediction model on several datasets.

One obstacle to the development of machine learning models that base predictions on time series data is the availability of training examples. In “[PSA-GAN: Progressive self attention GANs for synthetic time series](https://www.amazon.science/publications/psa-gan-progressive-self-attention-gans-for-synthetic-time-series)”, Amazon researchers propose a method for using generative adversarial networks ([GANs](https://www.amazon.science/tag/generative-adversarial-networks)) to artificially produce time series training data.

GANs pit generators, which produce synthetic data, against discriminators, which try to distinguish synthetic data from real. The two are trained together, each improving the performance of the other.

The Amazon researchers show how to synthesize plausible time series data by progressively growing — or adding network layers to — both the generator and the discriminator. This enables the generator to first learn general characteristics that the time series as a whole should have, then learn how to produce series that exhibit those characteristics.
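The progressive-growing pattern can be illustrated with a toy PyTorch sketch. This is our simplification of the general idea, not the paper’s self-attention architecture: training starts shallow, and the networks are deepened in stages.

```python
import torch
import torch.nn as nn

# Toy progressive growing for a time series generator: new residual
# blocks are appended between training stages, so early training fixes
# coarse behavior and later blocks refine finer temporal detail.

class ProgressiveGenerator(nn.Module):
    def __init__(self, noise_dim=16, width=64, series_len=128):
        super().__init__()
        self.stem = nn.Linear(noise_dim, width)
        self.blocks = nn.ModuleList()  # grown as training progresses
        self.head = nn.Linear(width, series_len)
        self.width = width

    def grow(self):
        """Append a residual block to model finer temporal detail."""
        self.blocks.append(nn.Sequential(
            nn.Linear(self.width, self.width), nn.ReLU(),
        ))

    def forward(self, z):
        h = torch.relu(self.stem(z))
        for block in self.blocks:
            h = h + block(h)  # residual: coarse features are preserved
        return self.head(h)  # one synthetic series per noise vector

gen = ProgressiveGenerator()
for stage in range(4):
    # ... adversarial training at the current depth, with a
    # discriminator grown in lockstep (not shown) ...
    gen.grow()  # deepen, register the new parameters, keep training
```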
#### **Data augmentation**

In addition to the paper on synthetic time series, one of Amazon’s other papers at ICLR, “[Deep AutoAugment](https://www.amazon.science/publications/deep-autoaugment)”, focuses on data augmentation.

It’s become standard practice to augment the datasets used to train machine learning models by subjecting real data to sequences of transformations. For instance, a training image for a computer vision task might be flipped, stretched, rotated, or cropped, or its color or contrast might be modified. Typically, the first few transformations are selected automatically, based on experiments in which a model is trained and retrained, and then domain experts add a few additional transformations to try to make the modified data look like real data.

In “Deep AutoAugment”, former Amazon senior applied scientist Zhi Zhang and colleagues at Michigan State University propose a method for fully automating the construction of a data augmentation pipeline. The goal is to continuously add transformations that steer the feature distribution of the synthetic data toward that of the real data. To do that, the researchers use gradient matching, or identifying training data whose sequential updates to the model parameters look like those of the real data. In tests, this approach improved on 10 other data augmentation techniques across four sets of real data.
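A rough PyTorch sketch of gradient matching, under assumptions of ours rather than the paper’s exact objective: a candidate augmentation scores well when the parameter gradient it induces points in roughly the same direction as the gradient computed on real data.

```python
import torch

# Score an augmentation by the alignment of its induced gradient with
# the gradient of real data (cosine similarity over all parameters).

def flat_grad(model, loss_fn, x, y):
    """Flattened parameter gradient for one batch."""
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def augmentation_score(model, loss_fn, augmented_batch, real_batch):
    """Cosine similarity between augmented and real gradients."""
    g_aug = flat_grad(model, loss_fn, *augmented_batch)
    g_real = flat_grad(model, loss_fn, *real_batch)
    return torch.nn.functional.cosine_similarity(g_aug, g_real, dim=0)

# A search procedure can then keep stacking transformations whose
# batches score highly, fully automating the augmentation pipeline.
```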
#### **Natural-language processing**

Many natural-language-processing tasks involve pairwise comparison of sentences. Cross-encoders, which map pairs of sentences against each other, yield the most accurate comparison, but they’re computationally intensive, as they need to compute new mappings for every sentence pair. Moreover, converting a pretrained language model into a cross-encoder requires fine-tuning it on labeled data, which is resource intensive to acquire.

Bi-encoders, on the other hand, embed sentences in a common representational space and measure the distances between them. This is efficient but less accurate.

In “[Trans-encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations](https://www.amazon.science/publications/trans-encoder-unsupervised-sentence-pair-modelling-through-self-and-mutual-distillations)”, Amazon researchers, together with a former intern, propose a model that is trained in an entirely unsupervised way — that is, without labeled examples — and captures advantages of both approaches.

The researchers begin with a pretrained language model, fine-tune it in an unsupervised manner using bi-encoding, then use the fine-tuned model to generate training targets for cross-encoding. They then use the outputs of the cross-encoding model to fine-tune the bi-encoder, iterating back and forth between the two approaches until training converges. In experiments, their model outperformed multiple state-of-the-art unsupervised sentence encoders on several benchmark tasks, with improvements of up to 5% over the best-performing prior models.
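The alternation can be sketched in a few lines of PyTorch. The encoders below are deliberately toy stand-ins (a real implementation would initialize both from a pretrained language model), but the mutual-distillation loop has the same shape.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a bag-of-embeddings "bi-encoder" backbone and a small
# "cross-encoder" head. Sentences are batches of token IDs.
VOCAB, DIM = 1000, 64
embed = nn.EmbeddingBag(VOCAB, DIM)
cross = nn.Sequential(nn.Linear(2 * DIM, DIM), nn.ReLU(), nn.Linear(DIM, 1))

def bi_score(a, b):
    """Bi-encoder: encode each sentence separately, compare vectors."""
    return nn.functional.cosine_similarity(embed(a), embed(b), dim=-1)

def cross_score(a, b):
    """Cross-encoder: score the pair jointly from both encodings."""
    return cross(torch.cat([embed(a), embed(b)], dim=-1)).squeeze(-1)

def distill_step(student, teacher, optimizer):
    """Regress the student's scores onto the (frozen) teacher's."""
    loss = nn.functional.mse_loss(student, teacher.detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

a = torch.randint(0, VOCAB, (8, 12))  # toy batch of sentence pairs
b = torch.randint(0, VOCAB, (8, 12))
opt_cross = torch.optim.Adam(cross.parameters(), lr=1e-3)
opt_bi = torch.optim.Adam(embed.parameters(), lr=1e-3)

for _ in range(10):  # alternate until scores stop changing
    distill_step(cross_score(a, b), bi_score(a, b), opt_cross)  # bi teaches cross
    distill_step(bi_score(a, b), cross_score(a, b), opt_bi)     # cross teaches bi
```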
![The trans-encoder training process](https://dev-media.amazoncloud.cn/fbd139b87b424cbbbdb08817ee97b3c5_%E4%B8%8B%E8%BD%BD.jpg)

The trans-encoder training process, in which a bi-encoder trained in an unsupervised fashion creates training targets for a cross-encoder, which in turn outputs training targets for the bi-encoder.

#### **Dataset optimization**

Weeding errors out of a dataset, selecting new training examples to augment a dataset, and determining how to weight the data in a dataset to better match a target distribution are all examples of dataset optimization. Assessing individual training examples’ contribution to the accuracy of a model, however, is difficult: retraining the model on a dataset with and without every single example is hardly practical.

In “[DIVA: Dataset derivative of a learning task](https://www.amazon.science/publications/diva-dataset-derivative-of-a-learning-task)”, Amazon researchers show how to compute the dataset derivative: a function that can be used to assess a given training example’s utility relative to a particular neural-network model. During training, the model learns not only the weights of network parameters but also weights for individual training examples. The researchers show that, using a linearization technique, they can derive a closed-form equation for the dataset derivative, allowing them to assess the utility of a given training example without retraining the network.

![Training examples weighted by DIVA](https://dev-media.amazoncloud.cn/4dede92136e943a4a30b3ed6efad6d3d_%E4%B8%8B%E8%BD%BD.jpg)

Training examples that DIVA assigns high weights (left) and low weights (right) for the task of classifying aircraft. Figure from “[DIVA: Dataset derivative of a learning task](https://www.amazon.science/publications/diva-dataset-derivative-of-a-learning-task)”.

#### **Limitations**

“Machine learning ultimately is based on statistical dependencies,” Bernhard Schölkopf [recently told Amazon Science](https://www.amazon.science/videos-webinars/neurips-luminaries-on-the-future-of-ai). “Oftentimes, it’s enough if we work at the surface and just learn from these dependencies. But it turns out that it’s only enough as long as we’re in this setting where nothing changes.”

The two ICLR papers from the Causal Representation Learning team explore contexts in which learning statistical dependencies is not enough. “[Visual representation learning does not generalize strongly within the same domain](https://www.amazon.science/publications/visual-representation-learning-does-not-generalize-strongly-within-the-same-domain)” describes experiments with image datasets in which each image is defined by specific values of a set of variables — say, different shapes of different sizes and colors, or faces that are either smiling or not and differ in hair color or age.

The researchers test 17 machine learning models and show that, if certain combinations of variables or specific variable values are held out of the training data, all 17 have trouble recognizing them in the test data. For instance, a model trained to recognize small hearts and large squares has trouble recognizing large hearts and small squares. This suggests that we need revised training techniques or model designs to ensure that machine learning systems are really learning what they’re supposed to.

![Four methods of separating training and test data](https://dev-media.amazoncloud.cn/ee0f66d76ed7499e865814f7392902a9_%E4%B8%8B%E8%BD%BD.jpg)

An illustration of the four methods of separating training data (black dots) and test data (red dots) in “[Visual representation learning does not generalize strongly within the same domain](https://www.amazon.science/publications/visual-representation-learning-does-not-generalize-strongly-within-the-same-domain)”.

Similarly, in “[You mostly walk alone: Analyzing feature attribution in trajectory prediction](https://www.amazon.science/publications/you-mostly-walk-alone-analyzing-feature-attribution-in-trajectory-prediction)”, members of the team consider the problem of predicting the trajectories of moving objects as they interact with other objects, an essential capacity for self-driving cars and other AI systems. For instance, if a person is walking down the street, and a ball bounces into her path, it could be useful to know that the person might deviate from her trajectory to retrieve the ball.

Adapting the game-theoretical concept of Shapley values, which enable the isolation of different variables’ contributions to an outcome, the researchers examine the best-performing recent models for predicting trajectories in interactive contexts and show that, for the most part, their predictions are based on past trajectories; the models pay little attention to the influence of interactions.
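To see how Shapley values isolate a feature’s contribution, here is a small, self-contained sketch using the standard permutation definition; the paper’s estimator for trajectory models is more sophisticated, and `evaluate_predictor_with` is a hypothetical stand-in.

```python
import itertools

# Exact Shapley values over a small set of named input features.
# value_fn(subset) should return the model's prediction quality when
# only those features are available, with the rest masked out.

def shapley_values(features, value_fn):
    """Average each feature's marginal contribution over all orderings."""
    contributions = {f: 0.0 for f in features}
    perms = list(itertools.permutations(features))
    for perm in perms:
        present = set()
        for f in perm:
            before = value_fn(frozenset(present))
            present.add(f)
            contributions[f] += value_fn(frozenset(present)) - before
    return {f: c / len(perms) for f, c in contributions.items()}

# Hypothetical usage: if "interactions" barely moves the score, the
# model is extrapolating from past motion alone.
# scores = shapley_values(
#     ["past_trajectory", "interactions"],
#     value_fn=lambda subset: evaluate_predictor_with(subset),  # stand-in
# )
```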
![Comparing trajectory prediction models by interaction use](https://dev-media.amazoncloud.cn/e5b6529dd86c4075aa6bbf48df07d8d0_%E4%B8%8B%E8%BD%BD.jpg)

A new method enables the comparison of different trajectory prediction models according to the extent to which they use social interactions for making predictions (left: none; middle: weak; right: strong). The target agent, whose future trajectory is to be predicted, is shown in red, and modeled interactions are represented by arrows whose width indicates interaction strength. From “[You mostly walk alone: Analyzing feature attribution in trajectory prediction](https://www.amazon.science/publications/you-mostly-walk-alone-analyzing-feature-attribution-in-trajectory-prediction)”.

The one exception is models trained on a dataset of basketball video, where all the players’ movements are constantly coordinated. There, existing models do indeed learn to recognize the influence of interaction. This suggests that careful curation of training data could enable existing models to account for interactions when predicting trajectories.

ABOUT THE AUTHOR

#### [**Larry Hardesty**](https://www.amazon.science/author/larry-hardesty)

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.