How to reduce communication overhead for database queries by up to 97%

{"value":"Relational databases are typically composed of many different tables: one table might store contact data for a company’s customers; another might store data about all the company’s retail stores; another might store individual customers’ purchase histories; another might log details about customer service calls; and so on.\n\nCustomers who use the Amazon Redshift cloud data warehouse service from Amazon Web Services often have databases that consist of thousands of tables, which are constantly being updated and expanded. These tables naturally have to be distributed across multiple servers in Amazon Web Services data centers.\n\nAt the 46th International Conference on Very Large Databases ([VLDB](https://www.amazon.science/conferences-and-events/vldb-2020)), my colleagues — [Yonatan Naamad](https://www.amazon.science/author/yonatan-naamad), [Peter Van Bouwel](https://www.amazon.science/author/peter-van-bouwel), [Christos Faloutsos](https://www.amazon.science/author/christos-faloutsos), and Michalis Petropoulos — and I [presented](https://assets.amazon.science/92/a6/0805c07140a393c4569dc79676af/fast-and-effective-distribution-key-recommendation-for-amazon-redshift.pdf) a new method for allocating data across servers. In experiments involving queries that retrieved data from multiple tables, our method reduced communications overhead by as much as 97% relative to the original, unoptimized configuration.\n\nFor the last year, [Amazon Redshift Advisor](https://aws.amazon.com/about-aws/whats-new/2019/08/amazon-redshift-now-recommends-distribution-keys-for-improved-query-performance/) has used this method to recommend data storage configurations to our customers, enabling them to perform more-efficient database queries.\n\n![image.png](https://dev-media.amazoncloud.cn/28a32316cfbb46cfa12f26f403489426_image.png)\n\nAn example of a real join multigraph. The thickness of the lines indicates the data transfer required by joins on particular attributes.\n\nTo get a sense of the problem our method addresses, consider a company that wishes to inform customers about sales at their local stores. That requires a database query that pulls customer data from the customer table and sale data from a store table.\n\nTo find the right store for each customer, the query matches entries from both tables by city. The query thus performs a join operation using the attribute “city”.\n\nOne standard way to distribute database tables across servers is to use distribution keys. For each data entry in a table (that is, each row of the table), a hash function is applied to one value of the entry — the distribution key. The hash function maps that value to the address of a server on the network, which is where the table row is stored.\n\nIn our example of a join operation, if the distribution key for both the customer table and the store table is the attribute “city”, then all customer entries and store entries that share a city will be stored on the same server. Each server then contains enough information to perform the join independently and in parallel with the other servers, without the need for data reshuffling at query time.\n\nThis is the basis of our method. Essentially, we analyze the query data for a particular database and identify the attributes whose joins involve the largest data transfers; then we use those attributes as the distribution keys for the associated tables.\n\n#### **The join multigraph**\n\nThe first step in this process is to create what we call a join multigraph. 
This is the basis of our method. Essentially, we analyze the query data for a particular database and identify the attributes whose joins involve the largest data transfers; then we use those attributes as the distribution keys for the associated tables.

#### **The join multigraph**

The first step in this process is to create what we call a join multigraph. This is a graph in the graph-theoretical sense: a data structure consisting of vertices — often depicted as circles — and edges — usually depicted as line segments connecting vertices. The edges may also have numbers associated with them, known as weights.

In the join multigraph, the vertices are the tables of a database. The edges connect attributes of separate tables on which join operations have been performed, and the edge weights indicate the data transfer required by joins between those attributes. (It is a multigraph, rather than a simple graph, because the same pair of tables can be joined on several different attribute pairs, each of which gets its own edge.)
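To make the structure concrete, here is a short Python sketch that aggregates a toy join log into a join multigraph. The log format, table names, and byte counts are invented for illustration; in practice, the weights would come from the system’s own query statistics.

```python
from collections import defaultdict

# Hypothetical join log: one record per executed join, giving the two
# (table, attribute) endpoints and the bytes that a distributed
# execution had to transfer for that join.
join_log = [
    (("customer", "city"), ("store", "city"), 40_000_000),
    (("customer", "city"), ("store", "city"), 50_000_000),
    (("customer", "id"), ("orders", "cust_id"), 40_000_000),
    (("store", "id"), ("orders", "store_id"), 55_000_000),
]

# The multigraph: vertices are tables; each distinct attribute pair
# between two tables is its own edge, weighted by total transfer.
edges = defaultdict(int)
for end1, end2, transferred in join_log:
    key = tuple(sorted([end1, end2]))  # join direction doesn't matter
    edges[key] += transferred

for (t1, a1), (t2, a2) in sorted(edges):
    weight = edges[((t1, a1), (t2, a2))]
    print(f"{t1}.{a1} -- {t2}.{a2}  weight={weight:,}")
```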
Our goal is now to partition the graph into pairs of vertices, each pair connected by a single edge (a single attribute pair), such that we maximize the cumulative weight of the selected edges. Unfortunately, in our paper, we show that this problem is NP-complete, meaning that solving it exactly isn’t computationally practical.

![image.png](https://dev-media.amazoncloud.cn/d2435bdf1a2c43b29ceab13833795741_image.png)

An example of a simple join multigraph (left) and two different partitions of it, using different distribution keys. The nodes (circles) are tables, and the smaller letters indicate the attributes on which join operations have been performed. In the first graph, the thickness of the lines indicates the data transfer required by joins between the associated attributes. The distribution keys selected for the first partition (red circles) yield greater savings in communication overhead than those selected in the second partition (green circles).

We also show, however, that the optimization technique known as *integer linear programming* may, for any given instance of the problem, yield an optimal solution in a reasonable amount of time. So the first step in our method is to try to partition the graph using integer linear programming, with a limit on how much time the linear-programming solver can spend on the problem.
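Here is one plausible way such a formulation could look, sketched with the open-source PuLP solver library (my choice for illustration; the paper does not tie the method to a particular solver, and its exact formulation may differ). A binary variable x[t, a] means “distribute table t on attribute a”; each table gets at most one key, and an edge’s weight is earned only if both of its endpoint keys are chosen. The toy edge list mirrors the multigraph sketch above.

```python
import pulp

# Toy join multigraph: (table1, attr1, table2, attr2, weight), where
# weight is the transfer a non-collocated join would incur (invented).
edges = [
    ("customer", "city", "store", "city", 90),
    ("customer", "id", "orders", "cust_id", 40),
    ("store", "id", "orders", "store_id", 55),
]

prob = pulp.LpProblem("distribution_keys", pulp.LpMaximize)

# x[(t, a)] = 1 if table t is distributed on attribute a.
attrs = {(t, a) for t1, a1, t2, a2, _ in edges for t, a in [(t1, a1), (t2, a2)]}
x = {ta: pulp.LpVariable(f"x_{ta[0]}_{ta[1]}", cat="Binary") for ta in attrs}
# y[i] = 1 if edge i is collocated, i.e. both endpoint keys are chosen.
y = [pulp.LpVariable(f"y_{i}", cat="Binary") for i in range(len(edges))]

# Maximize the total weight of collocated edges.
prob += pulp.lpSum(w * y[i] for i, (_, _, _, _, w) in enumerate(edges))
for t in {t for t, _ in attrs}:  # at most one distribution key per table
    prob += pulp.lpSum(x[ta] for ta in attrs if ta[0] == t) <= 1
for i, (t1, a1, t2, a2, _) in enumerate(edges):  # edge needs both keys
    prob += y[i] <= x[(t1, a1)]
    prob += y[i] <= x[(t2, a2)]

# Cap the solver's time, as in the method's first phase; if it cannot
# finish in time, fall back to the heuristics described below.
prob.solve(pulp.PULP_CBC_CMD(timeLimit=10, msg=False))
for (t, a), var in sorted(x.items()):
    if var.value() == 1:
        print(f"distribute {t} on {a}")
```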
If the solver times out, then our next step is to use four different heuristics to partition the graph, and we select the partition that yields the greatest cumulative weight. We call our method the best-of-all-worlds approach, since it canvasses five different possibilities and chooses the one that works best.

All four heuristics are approximate solutions of the maximum-weight matching problem, which we prove to be a special case of the problem we’re trying to solve (the distribution key recommendation problem).

#### **Heuristics**

We begin with two empty sets of distribution key recommendations. Then we select a vertex of the graph (a table) at random and identify its most heavily weighted edge. The attributes that define that edge become the recommended distribution keys for the tables the edge connects, and that recommendation is added to the first set.

Then we repeat the process with another randomly selected vertex and add the resulting recommendation to the second set. We continue in this way, alternating between the two sets of recommendations, until none of the vertices in the graph remain unaccounted for.

Now we have two different sets of recommendations, covering two different sets of vertices, and we select the one with the greater cumulative edge weight. The differences between our four heuristics lie in the processes we use to add back the edges missing from the recommendation set we’ve selected — processes we’ve dubbed greedy matching, random choice, random neighbor, and naïve greedy. (Details are in the [paper](https://assets.amazon.science/92/a6/0805c07140a393c4569dc79676af/fast-and-effective-distribution-key-recommendation-for-amazon-redshift.pdf).)
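The following Python sketch shows the alternating construction under simplifying assumptions of my own: it only considers edges whose two tables are both still unassigned, and it leaves the add-back step as a comment. It is not the paper’s exact algorithm.

```python
import random
from collections import defaultdict

def alternating_heuristic(edges, seed=0):
    """Simplified sketch of the two-set construction: pick a random
    table, take its heaviest join edge, and record that edge's
    attribute pair as a key recommendation, alternating between two
    candidate sets; keep whichever set collects more weight.
    `edges` is a list of (table1, attr1, table2, attr2, weight)."""
    rng = random.Random(seed)
    incident = defaultdict(list)
    for e in edges:
        incident[e[0]].append(e)
        incident[e[2]].append(e)

    remaining = set(incident)  # tables not yet accounted for
    sets, weights, turn = ([], []), [0, 0], 0
    while remaining:
        v = rng.choice(sorted(remaining))
        # Heaviest incident edge whose endpoints are both unassigned
        # (a simplifying assumption of this sketch).
        cands = [e for e in incident[v]
                 if e[0] in remaining and e[2] in remaining]
        if cands:
            t1, a1, t2, a2, w = max(cands, key=lambda e: e[4])
            sets[turn].append({t1: a1, t2: a2})
            weights[turn] += w
            remaining -= {t1, t2}
            turn = 1 - turn  # alternate between the two sets
        else:
            remaining.discard(v)  # no usable edge left for this table
    best = sets[0] if weights[0] >= weights[1] else sets[1]
    # The four variants in the paper differ in how they then add back
    # edges for tables the chosen set leaves uncovered (greedy
    # matching, random choice, random neighbor, naive greedy).
    recommendations = {}
    for rec in best:
        recommendations.update(rec)
    return recommendations

edges = [("customer", "city", "store", "city", 90),
         ("customer", "id", "orders", "cust_id", 40),
         ("store", "id", "orders", "store_id", 55)]
print(alternating_heuristic(edges))  # table -> recommended key
```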
In tests on four different data sets, our method reduced communication overhead by between 80% and 97%, savings that would directly translate to performance improvements for our customers.

ABOUT THE AUTHOR

#### **[Panos Parchas](https://www.amazon.science/author/panos-parchas)**

Panos Parchas is a software development engineer on the Redshift team.