TheWebConf: Where communities converge on questions of scale

Xin Luna Dong, a senior principal scientist at Amazon, leads research on Amazon’s product graph, a huge [graphical](https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)) representation of the products in the Amazon Store and their properties, such as brand, color, and flavor. The product graph can help organize product descriptions and suggest substitutes for out-of-stock items and complements to selected products, among other things.

As [knowledge graph](https://www.amazon.science/tag/knowledge-graphs) researchers, Dong and her team frequently publish at conferences [on knowledge management](https://www.amazon.science/conferences-and-events/cikm-2020) and [knowledge discovery](https://www.amazon.science/conferences-and-events/kdd-2020). This year, they also have two papers at the Web Conference.

![image.png](https://dev-media.amazoncloud.cn/c8e0c939329447e2a287db581747bba6_image.png)

Senior principal scientist Xin Luna Dong.

“It has a huge audience, and it has an audience from different communities,” Dong says. “It has people from the data mining community; it has people from the NLP [natural-language processing] and the information retrieval community; it has people from the Web community. It has a diverse audience, and we can learn a lot of different things from them. We can get different opinions. I also like the web perspective. Once you talk about the web, you have to scale; you won’t survive with a small-scale solution.” And scalability is essential to a project that seeks to encode information about millions of products, which can be related to each other in millions of different ways.

The diversity of the audience at the Web Conference, Dong adds, reflects the diversity of techniques necessary to build the Amazon product graph.

“At the very beginning,” she says, “we need to collect knowledge from text, semi-structured data, et cetera, and this requires knowledge extraction. That comes from the NLP community and, because of this recent idea of multimodal knowledge extraction, it also draws on techniques from computer vision.

“Once we get knowledge from different sources, we want to integrate it. We want to align the knowledge — for example, to understand how different sources might use different names for the same entity. And that part is from the database data integration and the data-mining communities.

“Finally, after we get everything cleaned up and put in a knowledge graph, we want to use it to serve different applications. We want to use it to support IR [information retrieval]: that’s the IR community. We want to use it to support [question answering](https://www.amazon.science/tag/question-answering): that’s the NLP and IR communities. We want to use it to support recommendation. That’s the data mining and recommendation community. And so on and so forth. The whole process of building a knowledge graph and using that to help improve customer experience, it really is a cross-community project.”

#### **Unstructured data**

At first glance, it may not be obvious why building the Amazon product graph requires knowledge extraction: after all, the Amazon catalogue has dedicated fields for attributes like price, size, color, style, brand, and the like.

But the data in the Amazon catalogue is usually contributed by third-party retailers or manufacturers, who may not make use of those dedicated fields.

“As an example, they just put everything into the product title, into the product descriptions, and give us a bunch of bullets,” Dong says. “You often see that on the Amazon Detail Page. And then when you look at that big chunk of text, you try to understand, ‘What flavor is this?’ ‘What is the scent?’, et cetera. You don’t immediately get it.”

Similarly, Dong explains, similar products may be sold in different volumes, or with different numbers of items per package. “When you try to compare the price per unit for similar products, again, you can’t easily get it without the structured data,” she explains.

Much of the research conducted by Dong’s group involves trying to impose structure on that unstructured data, by extracting structured values from it and learning classification hierarchies. The group’s techniques often combine NLP with analysis of click behaviors.

“If people search for tea and then end up buying green-tea products, we infer that green tea is a subtype of tea,” she explains. “What is the hierarchy-subtype relationship? We [published that last year](https://www.amazon.science/blog/building-product-graphs-automatically) in KDD. And then we want to assign the products to those product types. That’s [the publication this year](https://www.amazon.science/publications/minimally-supervised-structure-rich-text-categorization-via-learning-on-text-rich-networks) for the WebConf.”

#### **Graph neural networks**

Dong first attended the Web Conference in 2014, and at the time, “there was definitely not that much of deep learning,” she says. “And now, everything is kind of deep-learning enabled.”

Around 2018, she says, she began to notice a new emphasis on graph neural networks (GNNs), deep-learning models that generalize the concept of embedding to graphs. Embeddings represent data as points in a multidimensional space, such that spatial relationships between the points carry information about the data. In a graph neural network, the embedding of each node is based on the node itself and on its immediate neighborhood — the nodes it’s directly connected to and the nature of those connections.

“You can do one-hop neighborhoods, or two-hop neighborhoods, but typically, you don’t want to do too many hops, because then the information is diluted,” Dong explains. “For example, you can understand me by looking at my neighbors — the companies I’ve worked for, my fields, the people I’m interacting with.
For every entity in the graph, you decide a representation according to the neighbors. And then according to that representation, you can make predictions.

“We use GNNs for [information extraction](https://www.amazon.science/publications/tcn-table-convolutional-network-for-web-table-interpretation), multimodal extraction. We use it for integration and data linkage. We use it for cleaning. That’s definitely one of the most powerful tools we are using at this time.”

ABOUT THE AUTHOR

#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.
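The one-hop aggregation Dong describes above can be sketched in a few lines. This is a minimal, generic message-passing layer, assuming a toy graph with made-up features and a mean-then-project update; it is illustrative only and does not reflect Amazon’s models.

```python
import numpy as np

# Toy graph: 4 nodes, adjacency list mapping each node to its one-hop neighbors.
# The graph, feature dimension, and update rule are all illustrative choices.
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 8))  # initial node embeddings (4 nodes, dim 8)
weight = rng.normal(size=(8, 8))    # learnable projection (held fixed here)

def gnn_layer(h, adj, w):
    """One message-passing step: each node averages its own embedding with
    its one-hop neighbors', then applies a linear map and a ReLU."""
    out = np.empty_like(h)
    for node, nbrs in adj.items():
        agg = h[[node] + nbrs].mean(axis=0)   # self + neighbor average
        out[node] = np.maximum(agg @ w, 0.0)  # projection + nonlinearity
    return out

h1 = gnn_layer(features, neighbors, weight)  # each node sees one hop
h2 = gnn_layer(h1, neighbors, weight)        # stacking reaches two hops
print(h2.shape)  # (4, 8)
```

Each application of the layer widens a node’s receptive field by one hop, which is also why stacking many layers dilutes the signal, as Dong notes: after enough hops, every node’s representation mixes in most of the graph.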