The 10 most read research papers of 2021

{"value":"![image.png](https://dev-media.amazoncloud.cn/c5a4737e11054791951c76b28dc270b6_image.png)\n\nAmazon scientists published more research papers in 2021 than in any previous year in the company's history. Below is the list of the most downloaded papers from our site in 2021.\n\n#### 1. [Using lightweight formal methods to validate a key-value storage node in Amazon S3](https://www.amazon.science/publications/using-lightweight-formal-methods-to-validate-a-key-value-storage-node-in-amazon-s3)\n\n\"This paper reports our experience applying lightweight formal methods to validate the correctness of ShardStore, a new key-value storage node implementation for the [Amazon S3](https://aws.amazon.com/cn/s3/?trk=cndc-detail) cloud object storage service.\n\nBy “lightweight formal methods\" we mean a pragmatic approach to verifying the correctness of a production storage node that is under ongoing feature development by a full-time engineering team. We do not aim to achieve full formal verification, but instead emphasize automation, usability, and the ability to continually ensure correctness as both software and its specification evolve over time.\"\n\n#### Read our blog post about this paper\n\nAt the ACM Symposium on Operating Systems Principles, the authors won a best-paper award. [James Bornholt writes about how the paper describes lightweight formal methods](https://www.amazon.science/blog/aws-team-wins-best-paper-award-for-work-on-automated-reasoning) for validating new S3 data storage service.\n\n#### 2. [A map of bandits for e-commerce](https://www.amazon.science/publications/a-map-of-bandits-for-e-commerce)\n\n\"The rich body of Bandit literature not only offers a diverse toolbox of algorithms, but also makes it hard for a practitioner to find the right solution to solve the problem at hand. Typical textbooks on Bandits focus on designing and analyzing algorithms, and surveys on applications often present a list of individual applications. While these are valuable resources, there exists a gap in mapping applications to appropriate Bandit algorithms. In this paper, we aim to reduce this gap with a structured map of Bandits to help practitioners navigate to find relevant and practical Bandit algorithms.\"\n\n#### 3. [Necessary and sufficient conditions for causal feature selection in time series with latent common causes](https://www.amazon.science/publications/necessary-and-sufficent-conditions-for-causal-feature-selection-in-time-series-with-latent-common-causes)\n\n\"We study the identification of direct and indirect causes on time series with latent variables, and provide a constrained-based causal feature selection method, which we prove tha\n\n#### Read our blog post about this paper\n\nAuthors Atalanti Mastakouri and Dominik Janzing wrote about the paper they co-authored with Bernhard Schölkopf. [Read their post about \\"a new technique for detecting all the direct causal features of a target time series.](https://www.amazon.science/blog/determining-causality-in-correlated-time-series)\"\n\nOur theory and estimation algorithm require only two conditional independence tests for each observed candidate time series to determine whether or not it is a cause of an observed target time series. Furthermore, our selection of the conditioning set is such that it improves signal to noise ratio. We apply our method on real data, and on a wide range of simulated experiments, which yield very low false positive and relatively low false negative rates.\"\n\n#### 4. 

#### 2. [A map of bandits for e-commerce](https://www.amazon.science/publications/a-map-of-bandits-for-e-commerce)

"The rich body of Bandit literature not only offers a diverse toolbox of algorithms, but also makes it hard for a practitioner to find the right solution to solve the problem at hand. Typical textbooks on Bandits focus on designing and analyzing algorithms, and surveys on applications often present a list of individual applications. While these are valuable resources, there exists a gap in mapping applications to appropriate Bandit algorithms. In this paper, we aim to reduce this gap with a structured map of Bandits to help practitioners navigate to find relevant and practical Bandit algorithms."
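
For readers new to the setting the map organizes, here is a minimal multi-armed bandit loop using epsilon-greedy exploration. It is a generic textbook baseline under assumed Bernoulli rewards (e.g. hypothetical click-through rates), not an algorithm or result from the paper.

```python
# Minimal epsilon-greedy multi-armed bandit loop (textbook illustration only).
# Rewards are assumed Bernoulli with unknown per-arm success probabilities.
import random


def epsilon_greedy(true_probs, steps=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    n_arms = len(true_probs)
    pulls = [0] * n_arms      # how often each arm was played
    values = [0.0] * n_arms   # running mean reward estimate per arm

    total_reward = 0
    for _ in range(steps):
        if rng.random() < epsilon:                      # explore
            arm = rng.randrange(n_arms)
        else:                                           # exploit the best estimate
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1 if rng.random() < true_probs[arm] else 0
        pulls[arm] += 1
        values[arm] += (reward - values[arm]) / pulls[arm]  # incremental mean
        total_reward += reward
    return values, total_reward


if __name__ == "__main__":
    estimates, reward = epsilon_greedy([0.02, 0.05, 0.11])  # hypothetical arm CTRs
    print("estimated arm values:", [round(v, 3) for v in estimates])
    print("total reward collected:", reward)
```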

#### 3. [Necessary and sufficient conditions for causal feature selection in time series with latent common causes](https://www.amazon.science/publications/necessary-and-sufficent-conditions-for-causal-feature-selection-in-time-series-with-latent-common-causes)

"We study the identification of direct and indirect causes on time series with latent variables, and provide a constraint-based causal feature selection method [...] Our theory and estimation algorithm require only two conditional independence tests for each observed candidate time series to determine whether or not it is a cause of an observed target time series. Furthermore, our selection of the conditioning set is such that it improves signal to noise ratio. We apply our method on real data, and on a wide range of simulated experiments, which yield very low false positive and relatively low false negative rates."

#### Read our blog post about this paper

Authors Atalanti Mastakouri and Dominik Janzing wrote about the paper they co-authored with Bernhard Schölkopf. [Read their post](https://www.amazon.science/blog/determining-causality-in-correlated-time-series) about "a new technique for detecting all the direct causal features of a target time series."

#### 4. [SiamMOT: Siamese multi-object tracking](https://www.amazon.science/publications/siammot-siamese-multi-object-tracking)

"In this paper, we focus on improving online multi-object tracking (MOT). In particular, we introduce a region-based Siamese Multi-Object Tracking network, which we name SiamMOT. SiamMOT includes a motion model that estimates the instance's movement between two frames such that detected instances are associated. To explore how the motion modelling affects its tracking capability, we present two variants of Siamese tracker, one that implicitly models motion and one that models it explicitly. We carry out extensive quantitative experiments on three different MOT datasets: MOT17, TAO-person and Caltech Roadside Pedestrians, showing the importance of motion modelling for MOT and the ability of SiamMOT to substantially outperform the state-of-the-art."

#### 5. [Getting your package to the right place: Supervised machine learning for geolocation](https://www.amazon.science/publications/getting-your-package-to-the-right-place-supervised-machine-learning-for-geolocation)

"Amazon Last Mile strives to learn an accurate delivery point for each address by using the noisy GPS locations reported from past deliveries. Centroids and other center-finding methods do not serve well, because the noise is consistently biased.

The problem calls for supervised machine learning, but how? We addressed it with a novel adaptation of learning to rank from the information retrieval domain. This also enabled information fusion from map layers. Offline experiments show outstanding reduction in error distance, and online experiments estimated millions in annualized savings."

#### Read our blog post about this paper

George Forman wrote about the paper he presented at the European Conference on Machine Learning. [Learn more](https://www.amazon.science/blog/using-learning-to-rank-to-precisely-locate-where-to-deliver-packages) about how he adapted "an idea from information retrieval — learning-to-rank — to the problem of predicting the coordinates of a delivery location from past GPS data."
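
To make the learning-to-rank framing concrete, the sketch below trains a pairwise preference model over candidate delivery points: for pairs of candidates at the same address, the model learns to prefer the one closer to the true delivery location, and candidates for a new address are then ranked by the learned score. All data and features here are synthetic and hypothetical; this shows only the general reduction, not the paper's actual features, model, or results.

```python
# Pairwise learning-to-rank sketch for picking a delivery point among noisy GPS
# candidates. Data and features are synthetic; illustrative of the idea only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)


def make_address(n_pings=20):
    """Simulate one address: a true door location plus biased, noisy GPS pings."""
    true_point = rng.uniform(0, 100, size=2)
    bias = rng.normal(0, 15, size=2)  # consistent per-address bias
    pings = true_point + bias + rng.normal(0, 5, size=(n_pings, 2))
    return true_point, pings


def candidate_features(pings):
    """Treat each past ping as a candidate delivery point and featurize it."""
    centroid = pings.mean(axis=0)
    feats = []
    for c in pings:
        dist_to_centroid = np.linalg.norm(c - centroid)
        mean_dist_to_pings = np.linalg.norm(pings - c, axis=1).mean()
        feats.append([dist_to_centroid, mean_dist_to_pings])
    return np.array(feats)


# Build pairwise training data: prefer candidates closer to the true point.
X_pairs, y_pairs = [], []
for true_point, pings in (make_address() for _ in range(200)):
    feats = candidate_features(pings)
    err = np.linalg.norm(pings - true_point, axis=1)
    for _ in range(10):  # sample a few candidate pairs per address
        i, j = rng.choice(len(pings), size=2, replace=False)
        X_pairs.append(feats[i] - feats[j])          # pairwise feature difference
        y_pairs.append(1 if err[i] < err[j] else 0)  # 1 = first candidate is better

ranker = LogisticRegression().fit(np.array(X_pairs), np.array(y_pairs))

# Inference: rank a new address's candidates by the linear model's score.
true_point, pings = make_address()
scores = ranker.decision_function(candidate_features(pings))
chosen = pings[np.argmax(scores)]
print("error of ranked choice:", round(float(np.linalg.norm(chosen - true_point)), 2))
print("error of plain centroid:", round(float(np.linalg.norm(pings.mean(axis=0) - true_point)), 2))
```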

#### 6. [Seasonal relevance in e-commerce search](https://www.amazon.science/publications/seasonal-relevance-in-e-commerce-search)

"Seasonality is an important dimension for relevance in e-commerce search. For example, a query *jacket* has a different set of relevant documents in winter than summer. For an optimal user experience, e-commerce search engines should incorporate seasonality in product search. In this paper, we formally introduce the concept of seasonal relevance, define it, and quantify it using data from a major e-commerce store. In our analyses, we find 39% of queries are highly seasonally relevant to the time of search and would benefit from handling seasonality in ranking. We propose LogSR and VelSR features to capture product seasonality using state-of-the-art neural models based on self-attention. Comprehensive offline and online experiments over large datasets show the efficacy of our methods to model seasonal relevance. The online A/B test on 784 MM queries shows the treatment with seasonal relevance features results in 2.20% higher purchases and better customer experience overall."

#### 7. [Position paper: Reducing Amazon's packaging waste using multimodal deep learning](https://www.amazon.science/publications/position-paper-reducing-amazons-packaging-wasteusing-multimodal-deep-learning)

"Since 2015, Amazon has reduced the weight of its outbound packaging by 36%, eliminating over 1,000,000 tons of packaging material worldwide, or the equivalent of over 2 billion shipping boxes, thereby reducing carbon footprint throughout its fulfillment supply chain. In this position paper, we share insights on using deep learning to identify the optimal packaging type best suited to ship each item in a diverse product catalog at scale so that it arrives undamaged, delights customers, and reduces packaging waste. Incorporating multimodal data on products, including product images, and class-imbalance handling techniques are important to improving model performance."

#### 8. [CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models](https://www.amazon.science/publications/ctr-bert-cost-effective-knowledge-distillation-for-billion-parameter-teacher-models)

"While pre-trained large language models (LLMs) like BERT have achieved state-of-the-art results in several NLP tasks, their performance on tasks with additional grounding, e.g. with numeric and categorical features, is less studied. In this paper, we study the application of pre-trained LLMs for click-through-rate (CTR) prediction for product advertisement in e-commerce. This is challenging because the model needs to a) learn from language as well as tabular data features, b) maintain low latency (<5 ms) at inference time, and c) adapt to a constantly changing advertisement distribution. We first show that scaling the pre-trained language model to 1.5 billion parameters significantly improves performance over conventional CTR baselines. We then present CTR-BERT, a novel lightweight, cache-friendly factorized model for CTR prediction that consists of twin-structured BERT-like encoders for text with a mechanism for late fusion of text and tabular features."

#### 9. [Probabilistic forecasting: A level-set approach](https://www.amazon.science/publications/probabilistic-forecasting-a-level-set-approach)

"Large-scale time series panels have become ubiquitous over the last years in areas such as retail, operational metrics, IoT, and the medical domain (to name only a few). This has resulted in a need for forecasting techniques that effectively leverage all available data by learning across all time series in each panel. Among the desirable properties of forecasting techniques, being able to generate probabilistic predictions ranks among the top. In this paper, we therefore present Level Set Forecaster (LSF), a simple yet effective general approach to transform a point estimator into a probabilistic one. By recognizing the connection of our algorithm to random forests (RFs) and quantile regression forests (QRFs), we are able to prove consistency guarantees of our approach under mild assumptions on the underlying point estimator. As a byproduct, we prove the first consistency results for QRFs under the CART-splitting criterion. Empirical experiments show that our approach, equipped with tree-based models as the point estimator, rivals state-of-the-art deep learning models in terms of forecasting accuracy."
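
A rough intuition for the level-set idea is to pool training observations whose point forecasts are similar and use the empirical distribution of their actual outcomes as the predictive distribution for a new, similar forecast. The sketch below renders that intuition with simple binning of predicted values; it is an illustration only, not the paper's exact algorithm or its consistency construction.

```python
# Simplified level-set-style wrapper: turn a point estimator into quantile
# predictions by pooling training targets whose point forecasts fall in the
# same bin as the new prediction. Illustrative only; not the paper's algorithm.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor


class SimpleLevelSetForecaster:
    def __init__(self, point_estimator, n_bins=50):
        self.model = point_estimator
        self.n_bins = n_bins

    def fit(self, X, y):
        self.model.fit(X, y)
        preds = self.model.predict(X)
        # Bin edges over the range of in-sample point forecasts.
        self.edges_ = np.quantile(preds, np.linspace(0, 1, self.n_bins + 1))
        bins = np.clip(np.searchsorted(self.edges_, preds) - 1, 0, self.n_bins - 1)
        # Pool the observed targets that share a forecast bin ("level set").
        self.pools_ = {b: y[bins == b] for b in np.unique(bins)}
        return self

    def predict_quantiles(self, X, qs=(0.1, 0.5, 0.9)):
        preds = self.model.predict(X)
        bins = np.clip(np.searchsorted(self.edges_, preds) - 1, 0, self.n_bins - 1)
        out = []
        for p, b in zip(preds, bins):
            pool = self.pools_.get(b)
            if pool is None or len(pool) == 0:
                pool = np.array([p])  # fall back to the point forecast itself
            out.append([np.quantile(pool, q) for q in qs])
        return np.array(out)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(2000, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.3 + 0.05 * X[:, 0])  # heteroscedastic noise
    lsf = SimpleLevelSetForecaster(GradientBoostingRegressor()).fit(X, y)
    print(lsf.predict_quantiles(X[:3]))  # 10%/50%/90% bands for three inputs
```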

#### 10. [Contextual rephrase detection for reducing friction in dialogue system](https://www.amazon.science/publications/contextual-rephrase-detection-for-reducing-friction-in-dialogue-system)

"For voice assistants like Alexa, Google Assistant, and Siri, correctly interpreting users' intentions is of utmost importance. However, users sometimes experience friction with these assistants, caused by errors from different system components or user errors such as slips of the tongue. Users tend to rephrase their query until they get a satisfactory response. Rephrase detection is used to identify the rephrases and has long been treated as a task with pairwise input, which does not fully utilize the contextual information (e.g. users' implicit feedback). To this end, we propose a contextual rephrase detection model ContReph to automatically identify rephrases from multi-turn dialogues. We showcase how to leverage the dialogue context and user-agent interaction signals, including user's implicit feedback and the time gap between different turns, which can help significantly outperform the pairwise rephrase detection models."

ABOUT THE AUTHOR

#### Staff writer