The top Amazon Science blog posts of 2021

![image.png](https://dev-media.amazoncloud.cn/057219f5ef23441cbc1bd099a4d3a36e_image.png)

These are images from some of the top blog posts published on Amazon Science in 2021.

#### 1. Building machine learning models with encrypted data

[![image.png](https://dev-media.amazoncloud.cn/cc2e5565023447bf80084f811164924e_image.png)](https://www.amazon.science/blog/building-machine-learning-models-with-encrypted-data)

At the Workshop on Encrypted Computing and Applied Homomorphic Cryptography, Amazon researchers presented a paper exploring the application of homomorphic encryption to logistic regression, a statistical model used for myriad machine learning applications, from genomics to tax compliance. [Learn how this new approach to homomorphic encryption speeds up the training of encrypted machine learning models sixfold](https://www.amazon.science/blog/building-machine-learning-models-with-encrypted-data).

#### 2. Improving explainable AI’s explanations

[![image.png](https://dev-media.amazoncloud.cn/53da9f654d394407b4d658a6acabd646_image.png)](https://www.amazon.science/blog/improving-explainable-ais-explanations)

A causal graph of a concept-based explanatory model, with a confounding variable (u) and a debiased concept variable (d).

Mohammad Taha Bahadori and David Heckerman presented a paper at the International Conference on Learning Representations, where they “adapt a technique for removing confounders from causal models, called instrumental-variable analysis, to the problem of concept-based explanation.” Learn more about [how causal analysis improves both the classification accuracy and the relevance of the concepts](https://www.amazon.science/blog/improving-explainable-ais-explanations) identified by popular concept-based explanatory models.
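The instrumental-variable idea the paper adapts can be illustrated with a standard two-stage least squares estimate on simulated data (a minimal sketch of classical IV analysis, not the authors' concept-based method; all variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = rng.normal(size=n)            # unobserved confounder
z = rng.normal(size=n)            # instrument: affects x, but not y directly
x = z + u + rng.normal(size=n)    # treatment, confounded by u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)  # true causal effect of x is 2.0

# Naive least squares is biased because u drives both x and y.
ols = (x @ y) / (x @ x)

# Two-stage least squares: project x onto the instrument, then regress y on
# the projection; the confounded component of x is removed.
x_hat = z * ((z @ x) / (z @ z))
iv = (x_hat @ y) / (x_hat @ x_hat)

print(f"OLS estimate: {ols:.2f}")  # biased away from 2.0
print(f"IV  estimate: {iv:.2f}")   # close to the true effect, 2.0
```

The same deconfounding logic, applied to concepts rather than treatments, is what lets the authors debias concept-based explanations.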
#### 3. Alexa enters the “age of self”

[![image.png](https://dev-media.amazoncloud.cn/a18fd9fc94424cec8875d76931d7cfa8_image.png)](https://www.amazon.science/blog/alexa-enters-the-age-of-self)

Prem Natarajan, Alexa AI vice president of natural understanding, at a conference in 2018.

“Some of the technologies we’ve begun to introduce, together with others we’re now investigating, are harbingers of a step change in Alexa’s development — and in the field of AI itself,” wrote Prem Natarajan, Alexa AI vice president of natural understanding. [Read his post explaining why more-autonomous machine learning systems will make Alexa more self-aware, self-learning, and self-service.](https://www.amazon.science/blog/alexa-enters-the-age-of-self)

#### 4. New take on hierarchical time series forecasting improves accuracy

[![下载.gif](https://dev-media.amazoncloud.cn/e857bc3576114783b220649897967e9f_%E4%B8%8B%E8%BD%BD.gif)](https://www.amazon.science/blog/new-take-on-hierarchical-time-series-forecasting-improves-accuracy)

The researchers’ method enforces coherence, or agreement among different levels of a hierarchical time series, through projection.
The plane (S) is the subspace of coherent samples; yt+h is a sample from the standard distribution (which is always coherent); ŷt+h is the transformation of the sample into a sample from a learned distribution; and ỹt+h is the projection of ŷt+h back into the coherent subspace.

In a paper presented at the International Conference on Machine Learning, Amazon scientists “describe a new approach to hierarchical time series forecasting that uses a single machine learning model, trained end to end, to simultaneously predict outputs at every level of the hierarchy and to reconcile them.” Read more about [how this method enforces “coherence” of hierarchical time series](https://www.amazon.science/blog/new-take-on-hierarchical-time-series-forecasting-improves-accuracy), in which the values at each level of the hierarchy are sums of the values at the level below.

#### 5. Determining causality in correlated time series

[![下载.gif](https://dev-media.amazoncloud.cn/00007b4b61ec47f2a056f27f83c2f82d_%E4%B8%8B%E8%BD%BD.gif)](https://www.amazon.science/blog/determining-causality-in-correlated-time-series)

The researchers’ new method constructs a conditioning set — a set of variables that must be controlled for — that enables tests for conditional dependence and independence in a causal graph.

In a paper presented at the International Conference on Machine Learning, coauthored by Bernhard Schölkopf, Amazon researchers “described a new technique for detecting all the direct causal features of a target time series — and only the direct or indirect causal features — given some graph constraints.” [Learn how the proposed method goes beyond Granger causality and “yielded false-positive rates of detected causes close to zero”](https://www.amazon.science/blog/determining-causality-in-correlated-time-series).
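For contrast with the Granger baseline the paper improves on, here is a minimal bivariate Granger-style test on simulated series (illustrative only; the data and threshold are hypothetical, and this is not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2_000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    # y depends on its own past and on lagged x, plus small noise
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

# Restricted model predicts y[t] from y[t-1] alone;
# the full model also uses the lagged candidate cause x[t-1].
A_restricted = np.stack([y[:-1]], axis=1)
A_full = np.stack([y[:-1], x[:-1]], axis=1)
target = y[1:]

def rss(A, b):
    """Residual sum of squares of a least-squares fit."""
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = b - A @ coef
    return float(r @ r)

# x "Granger-causes" y if adding lagged x sharply improves prediction.
improvement = rss(A_restricted, target) / rss(A_full, target)
print(f"adding lagged x reduces error by a factor of {improvement:.0f}")
```

Granger tests like this one can be fooled by hidden confounders and indirect paths, which is exactly the gap the conditioning-set construction described above addresses.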
#### 6. How to train large graph neural networks efficiently

[![下载.gif](https://dev-media.amazoncloud.cn/08fdcc9c38944783aa3c9f7ea2312dd2_%E4%B8%8B%E8%BD%BD.gif)](https://www.amazon.science/blog/how-to-train-large-graph-neural-networks-efficiently)

By caching data about graph nodes in GPU memory, global neighbor sampling dramatically reduces the amount of data transferred from the CPU to the GPU during the training of large graph neural networks.

In a paper presented at KDD, Amazon scientists “describe a new sampling strategy for training graph neural network models with a combination of CPUs and GPUs.” [Learn how their method enables two- to 14-fold speedups over its best-performing predecessors.](https://www.amazon.science/blog/how-to-train-large-graph-neural-networks-efficiently)

#### 7. How to make on-device speech recognition practical

[![下载.gif](https://dev-media.amazoncloud.cn/db3471794a9b4621888fef94bf4a6406_%E4%B8%8B%E8%BD%BD.gif)](https://www.amazon.science/blog/how-to-make-on-device-speech-recognition-practical)

An advantage of our diffing approach is that we can target a different set of weights with each model update, which gives us more flexibility in adapting to a changing data landscape.

At this year’s Interspeech, Amazon scientists presented two papers describing some of the innovations that will make it practical to run Alexa at the edge. [Learn how branching encoder networks make operation more efficient, while “neural diffing” reduces bandwidth requirements for model updates.](https://www.amazon.science/blog/how-to-make-on-device-speech-recognition-practical)
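The core bandwidth-saving idea behind diffing model updates can be sketched in a few lines (a simplified illustration, not Amazon's neural-diffing implementation; the weight counts are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
old = rng.normal(size=100_000).astype(np.float32)   # weights already on device
new = old.copy()
touched = rng.choice(old.size, size=1_000, replace=False)
new[touched] += 0.05                                # an update touched ~1% of weights

# Ship only (index, value) pairs for weights that changed,
# instead of pushing the full weight vector to every device.
idx = np.nonzero(new != old)[0]
diff = new[idx]

# On-device: apply the sparse patch to the deployed weights.
patched = old.copy()
patched[idx] = diff
assert np.array_equal(patched, new)

ratio = (idx.nbytes + diff.nbytes) / new.nbytes
print(f"patch is {ratio:.1%} the size of a full model push")
```

In practice the indices and values would also be compressed, but even this naive sparse patch is a small fraction of a full model download.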
#### 8. Using learning-to-rank to precisely locate where to deliver packages

[![image.png](https://dev-media.amazoncloud.cn/8a18f454761647bf86ef363f6defccc4_image.png)](https://www.amazon.science/blog/using-learning-to-rank-to-precisely-locate-where-to-deliver-packages)

In this figure, the dark-blue circles represent the GPS coordinates recorded for deliveries to the same address. The red circle is the actual location of the customer’s doorstep. Taking the average (centroid) value of the measurements yields a location (light-blue circle) in the middle of the street, leaving the driver uncertain and causing delays.

In a paper presented at the European Conference on Machine Learning, a principal applied scientist in the Amazon Last Mile organization adapts “an idea from information retrieval — learning-to-rank — to the problem of predicting the coordinates of a delivery location from past GPS data.” Learn more [about how models adapted from information retrieval deal well with noisy GPS input](https://www.amazon.science/blog/using-learning-to-rank-to-precisely-locate-where-to-deliver-packages) and can leverage map information.

#### 9. 3Q: Making silicon-vacancy centers practical for quantum networking

[![image.png](https://dev-media.amazoncloud.cn/c6e597dbcaac4abeaedcb6deba465b7d_image.png)](https://www.amazon.science/blog/3q-making-silicon-vacancy-centers-practical-for-quantum-networking)

In the researchers’ setup, if a photon reaches the detector, it conveys information about the quantum state of one silicon-vacancy qubit (SiV B), even though it interacted only with the other qubit (SiV A).

Synthetic-diamond chips with so-called silicon-vacancy centers are a promising technology for quantum networking because they’re natural light emitters, and they’re small, solid state, and relatively easy to manufacture at scale.
But they’ve had one severe drawback: they tend to emit light at a range of different frequencies, which makes exchanging quantum information difficult.

Members of Amazon’s AWS Center for Quantum Computing, together with colleagues at Harvard University, the University of Hamburg, the Hamburg Centre for Ultrafast Imaging, and the Hebrew University of Jerusalem, demonstrated a technique that promises to overcome that drawback. The first author on the paper, [David Levonian, a graduate student at Harvard and a quantum research scientist at Amazon, answered three questions about the research for Amazon Science](https://www.amazon.science/blog/3q-making-silicon-vacancy-centers-practical-for-quantum-networking).

#### 10. AWS team wins best-paper award for work on automated reasoning

[![下载.gif](https://dev-media.amazoncloud.cn/1fb17b0be76944bea37e8716b98e0316_%E4%B8%8B%E8%BD%BD.gif)](https://www.amazon.science/blog/aws-team-wins-best-paper-award-for-work-on-automated-reasoning)

An example of the ShardStore deletion procedure. Deleting the second data chunk in extent 18 (grey box) requires copying the other three chunks to different extents (extents 19 and 20) and resetting the write pointer for extent 18. The log-structured merge-tree itself is also stored on disk (in this case, in extent 17). See below for details.

At the ACM Symposium on Operating Systems Principles, researchers at Amazon Web Services won a best-paper award for their work using automated reasoning to validate that ShardStore — Amazon’s new S3 storage node microservice — will do what it’s supposed to.
[Learn more about lightweight formal methods for validating the new S3 data storage service](https://www.amazon.science/blog/aws-team-wins-best-paper-award-for-work-on-automated-reasoning).

ABOUT THE AUTHOR

#### Staff writer