{"value":"*This is a guest post by Dhanraj Gaikwad, Principal Engineer on Dream11 Data Engineering team.*\n\n[Dream11](https://about.dream11.com/) is the world’s largest fantasy sports platform, with over 120 million users playing fantasy cricket, football, kabaddi, basketball, hockey, volleyball, handball, rugby, futsal, American football, and baseball. Dream11 is the flagship brand of Dream Sports, India’s leading Sports Technology company, and has partnerships with several national and international sports bodies and cricketers.\n\nIn this post, we look at how we supercharged our data highway, the backbone of our major analytics pipeline, by migrating our [Amazon Redshift](http://aws.amazon.com/redshift) clusters to RA3 nodes. We also look at why we were excited about this migration, the challenges we faced during the migration and how we overcame them, as well as the benefits accrued from the migration.\n\n### **Background**\n\nThe Dream11 Data Engineering team runs the analytics pipelines (what we call our [Data Highway](https://aws.amazon.com/blogs/big-data/dream11s-journey-to-building-their-data-highway-on-aws/)) across Dream Sports. In near-real time, we analyze various aspects that directly impact the end-user experience, which can have a profound business impact for Dream11.\n\nInitially, we were analyzing upwards of terabytes of data per day with [Amazon Redshift](https://aws.amazon.com/cn/redshift/?trk=cndc-detail) clusters that ran mainly on dc2.8xlarge nodes. However, due to a rapid increase in our user participation over the last few years, we observed that our data volumes increased multi-fold. Because we were using dc2.8xlarge clusters, this meant adding more nodes of dc2.8xlarge instance types to the [Amazon Redshift](https://aws.amazon.com/cn/redshift/?trk=cndc-detail) clusters. Not only was this increasing our costs, it also meant that we were adding additional compute power when what we really needed was more storage. Because we anticipated significant growth during the Indian Premier League (IPL) 2021, we actively explored various options using our AWS Enterprise Support team. Additionally, we were expecting more data volume over the next few years.\n\n### **The solution**\n\nAfter discussions with AWS experts and the [Amazon Redshift](https://aws.amazon.com/cn/redshift/?trk=cndc-detail) product team, we at Dream11 were recommended the most viable option of migrating our [Amazon Redshift](https://aws.amazon.com/cn/redshift/?trk=cndc-detail) clusters from dc2.8xlarge to the newer RA3 nodes. The most obvious reason for this was the decoupled storage from compute. As a result, we could use lesser nodes and move our storage to [Amazon Redshift](https://aws.amazon.com/cn/redshift/?trk=cndc-detail) managed storage. This allowed us to respond to data volume growth in the coming years as well as reduce our costs.\n\nTo start off, we conducted a few elementary tests using an [Amazon Redshift](https://aws.amazon.com/cn/redshift/?trk=cndc-detail) RA3 test cluster. After we were convinced that this wouldn’t require many changes in our [Amazon Redshift](https://aws.amazon.com/cn/redshift/?trk=cndc-detail) queries, we decided to carry out a complete head-to-head performance test between the two clusters.\n\n### **Validating the solution**\n\nBecause the user traffic on the Dream11 app tends to spike during big ticket tournaments like the IPL, we wanted to ensure that the RA3 clusters could handle the same traffic that we usually experience during our peak. 
The AWS Enterprise Support team suggested using the [Simple Replay tool](https://github.com/awslabs/amazon-redshift-utils/tree/master/src/SimpleReplay), an open-source tool released by AWS that you can use to record queries on one Amazon Redshift cluster and replay them on another. The tool lets you capture queries on a source Amazon Redshift cluster and then replay the same queries on one or more destination clusters. We decided to use it to capture our performance test queries on the existing dc2.8xlarge clusters and replay them on a test Amazon Redshift cluster composed of RA3 nodes. At the time of our experimentation, the newer version of the automated [AWS CloudFormation](http://aws.amazon.com/cloudformation)-based toolset (now on [GitHub](https://github.com/awslabs/amazon-redshift-utils/tree/master/src/SimpleReplay/cloudformation)) was not yet available.

### **Challenges faced**

The first challenge came up while using the Simple Replay tool: there was no easy way to compare the performance of like-to-like queries on the two types of clusters. Although Amazon Redshift exposes various statistics about individual queries and their performance through its system meta-tables, the Simple Replay tool adds extra comments to each query it runs on the target cluster (so that replayed queries are easy to identify) and drops the comments from the queries captured on the source cluster, so the query text no longer matches between source and target.

Comparing the performance of each query across the clusters would therefore have meant writing additional comparison scripts. An alternative would have been to modify the Simple Replay tool code, because it’s open source on GitHub. However, with IPL 2021 beginning in just a few days, we had to find another option urgently.

After further discussions with the AWS Enterprise Support team, we decided to use two test clusters: one with the old dc2.8xlarge nodes, and another with the newer RA3 nodes. The idea was to use the Simple Replay tool to run the queries captured from our original cluster on both test clusters. This meant that the queries would be identical on both test clusters, making it easy to compare their runtimes directly (a simplified sketch of such a comparison is shown at the end of this section). Although this meant running an additional test cluster for a few days, we went ahead with this option. As a side note, the newer automated [AWS CloudFormation-based toolset](https://aws.amazon.com/blogs/big-data/simplify-amazon-redshift-ra3-migration-evaluation-with-simple-replay-utility/) does exactly this in an automated way.

Although most of our Amazon Redshift queries performed satisfactorily, we noticed that certain queries ran slower on the RA3-based cluster than on the dc2.8xlarge cluster. We narrowed the problem down to SQL queries doing full table scans, and rectified it by following proper data modeling practices in the ETL workflow.
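
To give a flavour of how runtimes can be compared once the same replayed workload has run on both test clusters, the following is a minimal Python sketch. It is not our production comparison script: the `redshift_connector` driver, the cluster endpoints, credentials, replay window, and the 20% slowdown threshold are all illustrative assumptions. The sketch joins per-query runtimes from each cluster’s `STL_QUERY` system table on the (identical) replayed query text:

```python
# Minimal sketch: compare replayed query runtimes on the dc2 and RA3 test clusters
# by joining STL_QUERY rows on the identical replayed query text.
# Endpoints, credentials, time window, and threshold below are illustrative only.
import redshift_connector

RUNTIME_SQL = """
    SELECT TRIM(querytxt)                    AS query_text,
           DATEDIFF(ms, starttime, endtime)  AS elapsed_ms
    FROM   stl_query
    WHERE  starttime >= '2021-03-15'   -- start of the replay window (example)
      AND  aborted = 0                 -- ignore queries that did not finish
"""

def fetch_runtimes(host: str) -> dict:
    """Return {query_text: elapsed_ms} for the replayed queries on one cluster."""
    conn = redshift_connector.connect(
        host=host, port=5439, database="analytics",
        user="perf_test", password="********",
    )
    try:
        cursor = conn.cursor()
        cursor.execute(RUNTIME_SQL)
        # Assumes each replayed query text appears once; aggregate if it repeats.
        return {text: ms for text, ms in cursor.fetchall()}
    finally:
        conn.close()

dc2_runtimes = fetch_runtimes("dc2-test.example.ap-south-1.redshift.amazonaws.com")
ra3_runtimes = fetch_runtimes("ra3-test.example.ap-south-1.redshift.amazonaws.com")

# Flag queries that ran noticeably (>20%) slower on the RA3 test cluster.
for text, dc2_ms in sorted(dc2_runtimes.items(), key=lambda kv: -kv[1]):
    ra3_ms = ra3_runtimes.get(text)
    if ra3_ms is not None and ra3_ms > 1.2 * dc2_ms:
        print(f"+{ra3_ms - dc2_ms} ms on RA3: {text[:80]}")
```

The newer automated toolset mentioned above performs this kind of comparison for you in an automated way.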

With these comparisons and fixes in place, we were ready to migrate to the newer RA3 nodes.

### **The migration to RA3**

The migration from the old cluster to the new cluster was smoother than we expected. We used the elastic resize approach, which meant we only had a few minutes of Amazon Redshift downtime. We completed the migration successfully with enough buffer time for further tests, and those additional tests indicated that the new cluster performed as we wanted it to.

### **The trial by fire**

The new cluster performed satisfactorily under our peak loads during the IPL, as well as the ICC T20 Cricket World Cup that followed. We’re excited that the new RA3-based Amazon Redshift cluster can support our data volume growth without our needing to increase the number of nodes.

We migrated from dc2 to RA3 in April 2021, and our data volume has grown by 50% since then. If we had continued with dc2 instances, the cluster cost would have increased by 50% as well. Because of the migration to RA3 instances, however, the cluster cost has increased by only 0.7%, which is attributable to the increase in storage cost.

### **Conclusion**

Migrating to the newer RA3-based Amazon Redshift cluster helped us decouple our compute needs from our storage needs, and we’re now prepared for our expected data volume growth over the next few years. Moreover, we no longer need to add compute nodes when we only need more storage, which we expect to bring down our costs in the long run. We did need to fine-tune some of our queries on the newer cluster, and with the Simple Replay tool we could do a direct comparison between the older and the newer cluster. You can also use the newer automated AWS CloudFormation-based toolset if you want to follow a similar approach.

We highly recommend RA3 instances. They give you the flexibility to size your cluster based on the amount of data stored, without increasing your compute costs.

#### **About the Authors**

![image.png](https://dev-media.amazoncloud.cn/89add0cb37554089aa9fae71221ed6c7_image.png)

**Dhanraj Gaikwad** is a Principal Data Engineer at Dream11. Dhanraj has more than 15 years of experience in the field of data and analytics. In his current role, Dhanraj is responsible for building the data platform for Dream Sports and specializes in data warehousing, including data modeling, building data pipelines, and query optimization. He is passionate about solving large-scale data problems and taking unique approaches to deal with them.

![image.png](https://dev-media.amazoncloud.cn/51ce754b14fd44d886b840e7a4f994b6_image.png)

**Sanket Raut** is a Principal Technical Account Manager at AWS based in Vasai, India. Sanket has more than 16 years of industry experience, including roles in cloud architecture, systems engineering, and software design. He currently focuses on enabling large startups to streamline their cloud operations and optimize their cloud spend.
His area of interest is in serverless technologies.