Manage long-running read queries on Amazon Aurora PostgreSQL-Compatible Edition

An [Amazon Aurora PostgreSQL-Compatible Edition](https://aws.amazon.com/rds/aurora/postgresql-features/) cluster consists of a primary (writer) node and up to 15 reader nodes in a Region. You can offload read-only workloads to reader nodes to scale out reads. Long-running queries on reader nodes that access tables with high transactional activity on the primary node can cause conflicts and lead to undesirable effects. In this post, we go over the potential conflicts and share best practices for managing long-running read queries on Aurora PostgreSQL.

### **Storage differences between Amazon RDS and Aurora PostgreSQL**

In [Amazon Relational Database Service (Amazon RDS) for PostgreSQL](https://aws.amazon.com/rds/postgresql/), each read replica instance has its own independent copy of the database, kept in sync by physical replication from the primary node. In contrast, Aurora PostgreSQL has a single shared database managed by a distributed storage engine, stored redundantly on six storage nodes, to which each database compute instance attaches. The following graphic shows the fundamental differences between these two approaches.

![image.png](https://dev-media.amazoncloud.cn/dcc81ca763a7436988ab26b298217621_image.png)

The shared storage model of Aurora has many advantages, such as minimal time to spin up new reader nodes, because a new instance simply attaches to the existing storage nodes over the network.
However, with this approach, Aurora PostgreSQL must keep the reader nodes’ cached copies of the same page more closely in sync than Amazon RDS for PostgreSQL does, because there is only one physical copy of the database shared by all the nodes of the Aurora PostgreSQL cluster.

### **Impacts of long-running queries on Aurora reader nodes**

Normal DML operations (such as insert, update, and delete) on an Aurora primary node generate transaction log records that, in addition to being sent to all the Aurora storage nodes, are also sent to all the Aurora reader nodes in the cluster in case any reader node has a copy of the same page in its own buffer cache. In the preceding figure, these are the cache update data flows from the primary node to the reader nodes. If a reader node doesn’t have the page buffer in memory, those log records are ignored; otherwise, they’re applied in transaction sequence once they’re committed. As queries are processed, either on the primary or the reader nodes, they sometimes need to hold an internal lock on a page buffer in memory to ensure the page buffer contents don’t change underneath them. These locks are known as buffer pins, and although they’re non-transactional, they can affect concurrency.

PostgreSQL tends to be smart about handling conflicts on the primary node itself because it knows all the queries running on it at any given moment. For example, the PostgreSQL engine on the primary node can choose to skip autovacuum of a page buffer if a different read session is holding a buffer pin on that page buffer. However, the primary node is oblivious to the workload and buffer cache contents on the reader nodes, so a common challenge on Aurora reader nodes is the potential for conflicts that block applying log records to page buffers in memory in a timely manner.

When a conflict occurs, if the conflicting query is short, it’s desirable to allow it to complete by delaying the application of log records on the reader nodes.
However, in the case of a long-running query on the reader node, allowing the transaction log apply process to wait indefinitely would cause the reader node to lag farther and farther behind the primary node. To balance running queries against high availability, Aurora PostgreSQL takes the following actions when a conflict on the reader node blocks applying the log records:

- Delay application of the log records on the reader node to allow the conflicting query to finish, then apply the log records. The amount of time to delay is determined by the configured ```max_standby_streaming_delay``` parameter value. Aurora PostgreSQL allows a ```max_standby_streaming_delay``` setting of up to 30 seconds. If you’re running mixed OLTP and OLAP workloads, the OLAP queries may run for longer than 30 seconds. One of the objectives of this post is to share best practices to manage long-running queries and mitigate conflicts in this scenario.
- Cancel the conflicting query on the reader node if the conflict lasts longer than ```max_standby_streaming_delay``` (maximum 30 seconds). This is different from Amazon RDS or self-managed PostgreSQL.
With Amazon RDS or self-managed PostgreSQL, the instance has its own physical copy of the database, and you can set the ```max_standby_streaming_delay``` parameter as high as you want to prevent query cancellation.
- If the conflicting query can’t be canceled in time, or if multiple long-running queries cause the replication lag to go beyond 60 seconds, Aurora restarts the reader node to ensure it’s not lagging far behind the primary node.

### **Common causes of conflict in an Aurora reader node**

There are several common scenarios that can cause conflicts when applying log records on the reader node:

- **Snapshot conflicts** – A conflict can occur if the vacuum process removes dead tuples on the primary node while a long-running query on the reader node started before the vacuum and holds an older snapshot.
- **Lock conflicts** – PostgreSQL’s MVCC protocol eliminates the need to lock entire page buffers in memory during normal DML commands. However, there are also heavier operations that affect a physical page, such as DDL (alter and drop table), vacuum full, lock table, truncating pages from a relation, and some autovacuum operations, where the operation must acquire an ```Access Exclusive``` lock.
This lock then generates an ```XLOG_STANDBY_LOCK``` log record, which causes each reader node to acquire the ```Access Exclusive``` lock in order to invalidate the corresponding memory structure in its buffer cache.
- **Buffer pin conflicts** – This conflict occurs when an operation takes an exclusive page lock on the primary node while a long-running read query on the reader node has the page pinned.

You can follow the steps in this post to reproduce a lock conflict scenario.

### **Prerequisites**

Before you get started, make sure you have the following prerequisites:

- An Amazon Web Services account.
- An [Aurora PostgreSQL cluster with one reader node](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_GettingStartedAurora.CreatingConnecting.AuroraPostgreSQL.html). The sample in this post deploys both the primary node and the reader node on [db.r6g.large](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html#Concepts.DBInstanceClass.Types) instances running PostgreSQL v13.6.
- A client environment with connectivity set up to your Aurora cluster. The sample in this post uses psql as the client tool.

We assume that you’re familiar with working with Aurora PostgreSQL and a PostgreSQL client environment.

### **Generate test data**

Open a new psql connection to the primary node and run the following code to create sample test data:

```
CREATE TABLE rep_conflict (c1 bigint primary key, c2 varchar(80));
CREATE INDEX idx_rep_conflict_c2 ON rep_conflict(c2);

INSERT INTO rep_conflict
SELECT r, 'test replication conflict '||r FROM generate_series(1, 10) r;

SELECT * FROM rep_conflict;
```

The following screenshot shows our output.

![image.png](https://dev-media.amazoncloud.cn/a3c4040dff184b549ce42703e7e1596d_image.png)

### **Verify max_standby_streaming_delay settings on the reader node**

Open a new psql connection to the reader node and check the ```max_standby_streaming_delay``` parameter setting.
The default value may differ between Aurora PostgreSQL versions. Aurora PostgreSQL allows you to change the default value using a customized [parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithParamGroups.html). You can set ```max_standby_streaming_delay``` to any value between 1,000–30,000 milliseconds at either the cluster parameter group or the database instance parameter group level. If both are set, the setting at the cluster level takes precedence.

Check the parameter with the following code:

```
show transaction_read_only;
SELECT setting FROM pg_settings WHERE name = 'max_standby_streaming_delay';
```

In this test, ```max_standby_streaming_delay``` is set to 14,000 milliseconds (14 seconds).

![image.png](https://dev-media.amazoncloud.cn/fb864b8f17aa47e28b69a220f50e05ce_image.png)

### **Run a long-running query**

On an existing connection to your Aurora reader node, enable timing, then run a long-running query and capture its runtime when there is no replication conflict:

```
\timing
SELECT count(*) FROM 
  rep_conflict a,
  rep_conflict b,
  rep_conflict c,
  rep_conflict d,
  rep_conflict e,
  rep_conflict f,
  rep_conflict g, 
  rep_conflict h,
  rep_conflict i;
```

In my environment with a db.r6g.large DB instance, the long-running query ran for about 72 seconds.

![image.png](https://dev-media.amazoncloud.cn/8dff0366fe364007901bce92e9cb96a3_image.png)

### **Rerun the long-running query with a conflict**

On your existing connection to the Aurora reader node, rerun the long-running query from the previous step.
This time, we manually generate a conflicting DDL operation from the Aurora primary node.

```
SELECT count(*) FROM 
  rep_conflict a,
  rep_conflict b,
  rep_conflict c,
  rep_conflict d,
  rep_conflict e,
  rep_conflict f,
  rep_conflict g,
  rep_conflict h,
  rep_conflict i;
```

#### **Observe the conflict on the primary node**

While the long-running query is running on the reader node, switch to the session with the existing connection to your Aurora primary node. Run the following DROP INDEX statement and observe what happens to the query on the reader node:

```
DROP INDEX idx_rep_conflict_c2;
```

![image.png](https://dev-media.amazoncloud.cn/561bddeda4fb4e75966f797257f637dc_image.png)

#### **Review the error message on the reader node**

Switch to the Aurora reader session. After a short wait, you should see an error message similar to the following screenshot.

![image.png](https://dev-media.amazoncloud.cn/2214f025576147e38a47b2d91bc0f925_image.png)

The DROP INDEX operation on the primary node affected physical pages on the Aurora shared storage. When the transaction log records are applied on the reader node, the reader node must acquire the ```Access Exclusive``` lock in order to invalidate the corresponding memory structure in its buffer cache. However, the existing long-running query had the memory pages pinned, so the DROP INDEX operation resulted in a lock conflict. In this scenario, Aurora PostgreSQL waited 14 seconds (the configured ```max_standby_streaming_delay``` value) for the query to finish before canceling it.

### **Considerations and best practices**

Now that you have a better understanding of the Aurora shared storage architecture and the scenarios that cause conflicts, you can take measures to manage long-running queries and avoid or proactively handle a conflicting situation.
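Before choosing among the mitigations below, it can help to measure how often each conflict type is actually canceling queries on your readers. Standard PostgreSQL exposes cumulative per-database cancellation counters in the ```pg_stat_database_conflicts``` view; a quick check you can run on a reader node might look like the following:

```
SELECT datname,
       confl_snapshot,   -- cancellations from snapshot conflicts (vacuum)
       confl_lock,       -- cancellations from lock conflicts (DDL)
       confl_bufferpin   -- cancellations from buffer pin conflicts
FROM pg_stat_database_conflicts
WHERE datname = current_database();
```

A steadily rising count in one column tells you which of the conflict scenarios described earlier is affecting your workload, and therefore which of the following options to prioritize.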
The following are options to consider:

- **Query tuning** – Tune your query on the reader node to finish before reaching ```max_standby_streaming_delay```. On an OLTP system, queries are typically expected to return within a short time. An unintended long-running query can be caused by a missing index. If you don’t expect your queries to run over 30 seconds, set the [statement_timeout](https://postgresqlco.nf/doc/en/param/statement_timeout/) database parameter on the reader nodes to proactively cancel any runaway queries, and set [log_min_error_statement](https://postgresqlco.nf/doc/en/param/log_min_error_statement/) to ERROR or lower to record the canceled statements in the PostgreSQL log for later tuning.
- **Implement table partitioning** – If the table is large, consider [table partitioning](https://aws.amazon.com/blogs/database/improve-performance-and-manageability-of-large-postgresql-tables-by-migrating-to-partitioned-tables-on-amazon-aurora-and-amazon-rds/) to reduce query runtime. Partition pruning can reduce the amount of data Aurora PostgreSQL needs to process for a query and enhance query performance.
- **Implement manual vacuum** – If you’re running mixed OLTP and OLAP workloads that include long-running queries on the reader nodes, consider turning off autovacuum and setting up operational procedures (such as pg_cron jobs) to run vacuum manually at appropriate times of the day, or over the weekend, when workload volume allows.
- **Implement retry logic in the application** – Whether a replication conflict occurs depends on the nature of the operations on the primary node and what’s running on the reader nodes. A canceled query may very well run successfully if it’s submitted again. You should implement [retry logic in the application](https://aws.amazon.com/builders-library/timeouts-retries-and-backoff-with-jitter/) to catch a canceled query and resubmit it.
You should also have application logic in place to reconnect and resubmit the query in the case of a connection failure.
- **Redirect the query to the Aurora primary node** – To avoid long-running read queries being forcibly canceled or causing an unexpected Aurora reader restart, you can redirect them to the primary node. When a query runs and holds resources for a long time on the primary node, other sessions may end up waiting, but the query will finish unless you take action to cancel it.
- **Offload the query to an Aurora fast clone** – Consider creating a clone of the database using the [Aurora fast database cloning](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html#Aurora.Clone.Overview) feature. Long-running queries that don’t need access to real-time production data (for example, queries that generate quarterly reports) can be offloaded to an Aurora clone.
- **Split the query into multiple smaller queries** – Breaking down a complex long-running query into multiple shorter, smaller queries allows you to better utilize the horizontal scalability of the application layer and distribute queries across multiple application instances.
- **Keep your environment up to date** – The PostgreSQL community and Amazon Web Services make continuous enhancements and bug fixes that mitigate conflicts. For example, PostgreSQL v14 added the [client_connection_check_interval](https://www.postgresql.org/docs/14/runtime-config-connection.html#GUC-CLIENT-CONNECTION-CHECK-INTERVAL) parameter to allow you to abort a long-running query sooner if the client has disconnected, and the [idle_session_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#RUNTIME-CONFIG-CLIENT-STATEMENT) parameter to close idle sessions.
Aurora PostgreSQL v12.9 and v13.5 (and higher) added an optimization that minimizes the need to cancel some queries running longer than ```max_standby_streaming_delay``` under buffer pin conflicts. For more information on the enhancement, refer to [Amazon Aurora PostgreSQL updates](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraPostgreSQLReleaseNotes/AuroraPostgreSQL.Updates.html). We highly recommend that you keep your Aurora environment up to date on the latest minor releases.

It’s also important to run heavy maintenance operations like vacuum or DDL at times when workload volumes are lighter or more tolerant of query cancellations.

### **Conclusion**

In this post, we discussed common scenarios in which long-running read queries on Aurora reader nodes may conflict with DML and DDL operations on the Aurora primary node. We shared best practices to proactively manage long-running read queries and mitigate the impacts of potential conflicts.

We welcome your feedback. Share your experience and any questions in the comments.

**Since you’re reading this post, you may also be interested in the following:**

[Improve performance and manageability of large PostgreSQL tables by migrating to partitioned tables on Amazon Aurora and Amazon RDS](https://aws.amazon.com/blogs/database/improve-performance-and-manageability-of-large-postgresql-tables-by-migrating-to-partitioned-tables-on-amazon-aurora-and-amazon-rds/)

#### **About the authors**

![image.png](https://dev-media.amazoncloud.cn/d42ee3bd5c544d78a84e492fdd15bf91_image.png)

**Wanda He** is a Sr. Database Specialist Solutions Architect at Amazon Web Services. She works with customers to design, deploy, and optimize relational databases on Amazon Web Services.

![image.png](https://dev-media.amazoncloud.cn/1cc17579423c4af3aaf0bdc623d00851_image.png)

**Avi Jain** is a Sr. Product Manager with the Technical Benchmarking team at Amazon.
He is part of team that evaluates, measures, and drives improvements to the end-to-end customer experience (CX) of using Amazon technical products and services.","render":"<p>An A<a href=\"https://aws.amazon.com/rds/aurora/postgresql-features/\" target=\"_blank\">mazon Aurora PostgreSQL-Compatible Edition </a>cluster consists of a primary/writer node and up to 15 reader nodes in a Region. You may offload read-only workloads to reader nodes to scale-out reads. Long-running queries on reader nodes that are accessing tables with high transactional activities on the primary node can cause conflicts, and lead to undesirable effects. In this post, we go over the potential conflicts and share best practices for managing long-running read queries on Aurora PostgreSQL.</p>\n<h3><a id=\"Storage_differences_between_Amazon_RDS_and_Aurora_PostgreSQL_2\"></a><strong>Storage differences between Amazon RDS and Aurora PostgreSQL</strong></h3>\n<p>In <a href=\"https://aws.amazon.com/rds/postgresql/\" target=\"_blank\">Amazon Relational Database Service (Amazon RDS) for PostgreSQL</a>, each read replica instance has its own independent copy of the database, kept in sync by physical replication from the primary node. In contrast, in Aurora PostgreSQL, there is a single shared database managed by a distributed storage engine redundantly stored on six storage nodes to which each database compute instance attaches. The following graphic shows the fundamental differences in these two approaches.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/dcc81ca763a7436988ab26b298217621_image.png\" alt=\"image.png\" /></p>\n<p>The shared storage model of Aurora has many advantages, such as minimal time to spin up new reader nodes because the new instance simply attaches to existing storage nodes over the network. 
However, with this approach, Aurora PostgreSQL must keep the different reader nodes’ cached copies of the same page more closely in sync than Amazon RDS for PostgreSQL because there is only one physical copy of the database shared by all the nodes of the Aurora PostgreSQL cluster.</p>\n<h3><a id=\"Impacts_of_longrunning_queries_on_Aurora_reader_nodes_11\"></a><strong>Impacts of long-running queries on Aurora reader nodes</strong></h3>\n<p>Normal DML operations (such as insert, update, and delete) on an Aurora primary node generate transaction log records that, in addition to being sent to all the Aurora storage nodes, are also sent to all the Aurora reader nodes in the cluster in case any reader node has a copy of the same page in its own buffer cache. In the preceding figure, these are the cache update data flows from the primary node to the reader nodes. If a reader node doesn’t have the page buffer in memory, those log records are ignored; otherwise they’re processed in transaction sequence once they’re committed. As queries are processed, either on the primary or reader nodes, sometimes they need to hold an internal lock on a page buffer in memory to ensure the page buffer contents don’t change underneath it. Generally, these are known as buffer pins and although they’re non-transactional, they can affect concurrency.</p>\n<p>PostgreSQL tends to be smart about handling conflicts on the primary node itself because it understands all queries running on it at that moment. For example, the PostgreSQL engine on the primary node could choose to skip autovacuum of a page buffer if a different read session is holding a buffer pin on that page buffer. 
However, the primary node is oblivious to the workload and buffer cache contents in the reader nodes, so a common challenge in Aurora reader nodes is the potential for conflicts that block applying log records affecting the page buffers in memory in a timely manner.</p>\n<p>When conflict occurs, if the conflicting query is short, it’s desirable to allow it to complete by delayed applying log records on the reader nodes. However, in the case of a long-running query on the reader node, allowing the transaction log apply process to wait indefinitely will cause the reader node to increasingly lag farther behind the primary node. To balance between running queries and high availability, Aurora PostgreSQL takes the following actions when there is conflict on the reader node that blocks applying the log records:</p>\n<ul>\n<li>Delay application of the log records on the reader node to allow the conflicting query on the reader node to finish, then apply the log records. The amount of time to delay is determined by the configured <code>max_standby_streaming_delay</code>parameter value. Aurora PostgreSQL allows a configuration of<code>max_standby_streaming_delay</code>up to 30 seconds. If you’re running mixed OLTP and OLAP workloads, the OLAP queries may run for longer than 30 seconds. One of the objectives of this post is to share best practices to manage long-running queries and mitigate conflicts in this scenario.</li>\n<li>Cancel the conflicting query on the reader node if the conflict lasts longer than <code>max_standby_streaming_delay</code>(maximum 30 seconds). This is different from Amazon RDS or self-managed PostgreSQL. 
With Amazon RDS or self-managed PostgreSQL, the instance has its own physical copy of the database, and you’re able to set the parameter <code>max_standby_streaming_delay</code>as high as you want to prevent query cancellation.</li>\n<li>If the conflicting query can’t cancel in time, or if multiple long-running queries are causing the replication lag to go beyond 60 seconds, Aurora restarts the reader node to ensure it’s not lagging far behind the primary node.</li>\n</ul>\n<h3><a id=\"Common_causes_of_conflict_in_an_Aurora_reader_node_23\"></a><strong>Common causes of conflict in an Aurora reader node</strong></h3>\n<p>There are several common scenarios that can cause conflicts when applying log records on the reader node:</p>\n<ul>\n<li><strong>Snapshot conflicts</strong> — A conflict can occur if the vacuum process removes dead tuples on the primary node, and a long-running query on the reader node started before the vacuum and has an older snapshot.</li>\n<li><strong>Lock conflicts</strong> – PostgreSQL’s MVCC protocol eliminates the need to lock entire page buffers in memory during normal DML commands. However, there are also heavier operations that affect a physical page, such as DDL (alter and drop table), vacuum full, lock table, truncating page from a relation, and some autovacuum operations where the operation must acquire an <code>Access Exclusive</code>lock. 
This lock then generates a<code>XLOG_STANDBY_LOCK</code>log record, which causes reader nodes to acquire the <code>Access Exclusive</code>lock in order to invalidate the memory structure in its buffer cache.</li>\n<li><strong>Buffer pin conflicts</strong> — This conflict occurs when you have an exclusive page lock on the primary node while a long-running read query on the reader node has the page pinned.</li>\n</ul>\n<p>You can follow the steps in this post to reproduce a lock conflict scenario.</p>\n<h3><a id=\"Prerequisites_33\"></a><strong>Prerequisites</strong></h3>\n<p>Before you get started, make sure you have the following prerequisites:</p>\n<ul>\n<li>An Amazon Web Services account.</li>\n<li>An <a href=\"https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_GettingStartedAurora.CreatingConnecting.AuroraPostgreSQL.html\" target=\"_blank\">Aurora PostgreSQL cluster with one reader node</a>. The sample in this post deploys both the primary node and the reader node on <a href=\"https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html#Concepts.DBInstanceClass.Types\" target=\"_blank\">db.r6g.large</a> running PostgreSQL v13.6.</li>\n<li>A client environment with connectivity set up to your Aurora cluster. 
The sample in this post uses psql as the client tool.</li>\n</ul>\n<p>We assume that you’re familiar with working with Aurora PostgreSQL and a PostgreSQL client environment.</p>\n<h3><a id=\"Generate_test_data_43\"></a><strong>Generate test data</strong></h3>\n<p>Open a new psql connection to the primary node and run the following code to create sample test data:</p>\n<pre><code class=\"lang-\">CREATE TABLE rep_conflict (c1 bigint primary key, c2 varchar(80));\nCREATE INDEX idx_rep_conflict_c2 ON rep_conflict(c2);\n\nINSERT INTO rep_conflict\nSELECT r, 'test replication conflict '||r FROM generate_series(1, 10) r;\n\nSELECT * FROM rep_conflict;\n\n</code></pre>\n<p>The following screenshot shows our output.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/a3c4040dff184b549ce42703e7e1596d_image.png\" alt=\"image.png\" /></p>\n<h3><a id=\"Verify_max_standby_streaming_delay_settings_on_the_reader_node_61\"></a><strong>Verify max_standby_streaming_delay settings on the reader node</strong></h3>\n<p>Open a new psql connection to the reader node and check the <code>max_standby_streaming_delay</code>parameter setting. The default value may change in different Aurora PostgreSQL versions. Aurora PostgreSQL now allows you to change the default value using a customized <a href=\"https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_WorkingWithParamGroups.html\" target=\"_blank\">parameter groups</a>. You can set <code>max_standby_streaming_delay</code>to any value between 1,000–30,000 milliseconds on both the cluster parameter groups or database instance parameter groups levels. 
If both places are set, the configuration setting on the cluster level takes precedence.</p>\n<p>Check the parameter with the following code:</p>\n<pre><code class=\"lang-\">show transaction_read_only;\nSELECT setting FROM pg_settings WHERE name = 'max_standby_streaming_delay'; \n</code></pre>\n<p>In this test, <code>max_standby_streaming_delay</code>is set to 14,000 milliseconds (14 seconds).</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/fb864b8f17aa47e28b69a220f50e05ce_image.png\" alt=\"image.png\" /></p>\n<h3><a id=\"Run_a_longrunning_query_75\"></a><strong>Run a long-running query</strong></h3>\n<p>On an existing connection to your Aurora reader node, enable timing and run a long-running query and capture the query runtime when there is no replication conflict:</p>\n<pre><code class=\"lang-\">\\timing\nSELECT count(*) FROM \n rep_conflict a,\n rep_conflict b,\n rep_conflict c,\n rep_conflict d,\n rep_conflict e,\n rep_conflict f,\n rep_conflict g, \n rep_conflict h,\n rep_conflict i;\n</code></pre>\n<p>On my environment with a db.r6.large DB instance, the long-running query ran for about 72 seconds.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/8dff0366fe364007901bce92e9cb96a3_image.png\" alt=\"image.png\" /></p>\n<h3><a id=\"Rerun_the_longrunning_query_with_a_conflict_96\"></a><strong>Rerun the long-running query with a conflict</strong></h3>\n<p>On your existing connection to the Aurora reader node, rerun the long-running query from the previous step. 
This time, we manually generate a conflicting DDL operation from the Aurora primary node.</p>\n<pre><code class=\"lang-\">SELECT count(*) FROM \n rep_conflict a,\n rep_conflict b,\n rep_conflict c,\n rep_conflict d,\n rep_conflict e,\n rep_conflict f,\n rep_conflict g,\n rep_conflict h,\n rep_conflict i;\n</code></pre>\n<h4><a id=\"Observe_the_conflict_on_the_primary_node_111\"></a><strong>Observe the conflict on the primary node</strong></h4>\n<p>Switch to the session with the existing connection to your Aurora primary node while the long-running query is running on the reader node. Run the following DROP INDEX statement and observe what happens to the query on the reader node:</p>\n<p>DROP INDEX idx_rep_conflict_c2;<br />\n<img src=\"https://dev-media.amazoncloud.cn/561bddeda4fb4e75966f797257f637dc_image.png\" alt=\"image.png\" /></p>\n<h4><a id=\"Review_the_error_message_on_the_reader_node_118\"></a><strong>Review the error message on the reader node</strong></h4>\n<p>Switch to the Aurora reader session. If you wait a little bit, you should see an error message similar to the following screenshot.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/2214f025576147e38a47b2d91bc0f925_image.png\" alt=\"image.png\" /></p>\n<p>The DROP INDEX operation on the primary node affected physical pages on Aurora shared storage. When the transaction log records are applied to the reader node, it causes the reader node to acquire the <code>Access Exclusive </code>lock in order to invalidate the memory structure in its buffer cache. However, the existing long-running query has the memory pages pinned. The DROP INDEX operation resulted in a lock conflict. 
In this scenario, Aurora PostgreSQL waited 14 seconds (the <code>max_standby_streaming_delay</code>value configured) for the query to finish before cancelling the query.</p>\n<h3><a id=\"Considerations_and_best_practices_126\"></a><strong>Considerations and best practices</strong></h3>\n<p>Now that you have a better understanding of the Aurora shared storage architecture and scenarios that cause conflicts, you can take measurements to manage long-running queries to avoid or proactively handle a conflicting situation. The followings are options to consider:</p>\n<ul>\n<li><strong>Query tuning</strong> – Tune your query on the reader node to finish before reaching <code>max_standby_streaming_delay</code>. On an OLTP system, queries are typically expected to return within a short time. An unintended long-running query can be caused by a missing index. If you don’t expect your query to run over 30 seconds, you should set the database parameter for <a href=\"https://postgresqlco.nf/doc/en/param/statement_timeout/\" target=\"_blank\">statement_timeout</a> and <a href=\"https://postgresqlco.nf/doc/en/param/log_min_error_statement/\" target=\"_blank\">log_min_error_statement</a> to ERROR or lower on the reader nodes to proactively cancel any runaway queries and log the query statement in the PostgreSQL log for later tuning.</li>\n<li>Implement table partitioning – If the size of table is large, you can consider <a href=\"https://aws.amazon.com/blogs/database/improve-performance-and-manageability-of-large-postgresql-tables-by-migrating-to-partitioned-tables-on-amazon-aurora-and-amazon-rds/\" target=\"_blank\">table partitioning</a> to reduce query runtime. 
Partition query pruning can reduce the amount of data Aurora PostgreSQL needs to process for a query and enhance query performance.</li>\n<li><strong>Implement manual vacuum</strong> – If you’re running mixed OLTP and OLAP workloads that include long-running queries on the reader nodes, consider turning off autovacuum and setting up operation methods (such as pg_cron) to manually run vacuum at appropriate times of the day or over the weekend when workload volume allows.</li>\n<li><strong>Implement</strong> retry logics in the application – The occurrence of a replication conflict depends on the nature of the operations that happened on the primary node and what’s running on the reader nodes. A query being canceled may very well be able to run successfully if it’s submitted again. You should implement <a href=\"https://aws.amazon.com/builders-library/timeouts-retries-and-backoff-with-jitter/\" target=\"_blank\">retry logics in the application</a> to catch a query being canceled and resubmit the query. You should also have application logics in place to reconnect and resubmit the query in the case of a connection failure.</li>\n<li><strong>Redirect the query to the Aurora primary node</strong> – To avoid long-running read queries being forcibly canceled or causing an unexpected Aurora reader restart, you can redirect the query to the primary node. When a query runs and holds resources for a long time on the primary node, other sessions may result in waiting. But the query will finish unless you take actions to cancel the query.</li>\n<li><strong>Offload the query to an Aurora fast clone</strong> – You can consider creating a clone of the database using the <a href=\"https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html#Aurora.Clone.Overview\" target=\"_blank\">Aurora fast database cloning</a> feature. 
Long-running queries that don’t need access to real-time production data (for example, a query used to generate quarterly reports) can be offloaded to an Aurora clone.</li>\n<li><strong>Split the query into multiple smaller queries</strong> – Breaking down a complex long-running query into multiple shorter and smaller queries allows you to better utilize the horizontal scalability of the application layer and distribute queries across multiple application instances.</li>\n<li><strong>Keep your environment up to date</strong> – The PostgreSQL community and Amazon Web Services continuously deliver enhancements and bug fixes that mitigate conflicts. For example, PostgreSQL v14 added the parameter <a href=\"https://www.postgresql.org/docs/14/runtime-config-connection.html#GUC-CLIENT-CONNECTION-CHECK-INTERVAL\" target=\"_blank\">client_connection_check_interval</a>, which allows you to abort a long-running query sooner if the client has disconnected, and the parameter <a href=\"https://www.postgresql.org/docs/current/runtime-config-client.html#RUNTIME-CONFIG-CLIENT-STATEMENT\" target=\"_blank\">idle_session_timeout</a>, which closes idle sessions. Aurora PostgreSQL v12.9 and v13.5 (and higher) added an optimization that minimizes the need to cancel some queries running longer than <code>max_standby_streaming_delay</code> under buffer pin conflicts. For more information on the enhancement, refer to <a href=\"https://docs.aws.amazon.com/AmazonRDS/latest/AuroraPostgreSQLReleaseNotes/AuroraPostgreSQL.Updates.html\" target=\"_blank\">Amazon Aurora PostgreSQL updates</a>. 
It’s highly recommended that you keep your Aurora environment up to date with the latest minor releases.</li>\n</ul>\n<p>It’s important to continue running heavy maintenance operations like vacuum or DDL at times when workload volumes are lighter or more tolerant of query cancellations.</p>\n<h3><a id=\"Conclusion_141\"></a><strong>Conclusion</strong></h3>\n<p>In this post, we discussed common scenarios in which long-running read queries on Aurora reader nodes may conflict with DML and DDL operations on the Aurora primary node. We shared best practices to proactively manage long-running read queries and mitigate the impact of potential conflicts.</p>\n<p>We welcome your feedback. Share your experience and any questions in the comments.</p>\n<p><strong>Since you’re reading this post, you may also be interested in the following:</strong></p>\n<p><a href=\"https://aws.amazon.com/blogs/database/improve-performance-and-manageability-of-large-postgresql-tables-by-migrating-to-partitioned-tables-on-amazon-aurora-and-amazon-rds/\" target=\"_blank\">Improve performance and manageability of large PostgreSQL tables by migrating to partitioned tables on Amazon Aurora and Amazon RDS</a></p>\n<h4><a id=\"About_the_authors_151\"></a><strong>About the authors</strong></h4>\n<p><img src=\"https://dev-media.amazoncloud.cn/d42ee3bd5c544d78a84e492fdd15bf91_image.png\" alt=\"image.png\" /></p>\n<p><strong>Wanda He</strong> is a Sr. Database Specialist Solutions Architect at Amazon Web Services. She works with customers to design, deploy, and optimize relational databases on Amazon Web Services.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/1cc17579423c4af3aaf0bdc623d00851_image.png\" alt=\"image.png\" /></p>\n<p><strong>Avi Jain</strong> is a Sr. Product Manager with the Technical Benchmarking team at Amazon. He is part of the team that evaluates, measures, and drives improvements to the end-to-end customer experience (CX) of using Amazon technical products and services.</p>\n"}
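The retry guidance above (catch a canceled query and resubmit it with backoff and jitter) can be sketched as follows. This is a minimal, hypothetical illustration, not AWS-provided code: `run_query` stands in for your actual database call, and the simulated `flaky_query` merely imitates a reader-node cancellation so the pattern can run standalone.

```python
import random
import time

def run_with_retries(run_query, max_attempts=5, base_delay=0.1, max_delay=2.0):
    """Retry a canceled query with exponential backoff and full jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return run_query()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Full jitter: sleep a random duration up to the capped backoff.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))

# Simulated query call: fails twice, as if the reader node canceled the
# statement due to a replication conflict, then succeeds on the third try.
attempts = {"n": 0}

def flaky_query():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("canceling statement due to conflict with recovery")
    return "result"

print(run_with_retries(flaky_query))  # prints "result" after two retries
```

In a real application you would catch only the driver's specific cancellation/serialization error class rather than a bare `Exception`, and the reconnect-on-connection-failure logic the post also recommends would wrap this same helper.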