Deploy Amazon DMS in a multi-tiered secure network using an Nginx proxy server

*This is a guest post by Hemalatha Katari, Solution Architect at Accenture, in partnership with AWS Partner Solutions Architect Rohit Satyanarayana.*\n\nIn this post, we discuss how to use [AWS Database Migration Service](https://console.aws.amazon.com/dms/v2/home) (AWS DMS) to migrate your database while maintaining strict network segregation and network security requirements. We recently used this solution with a customer who had strict network security requirements.\n\nWhen regulated industries and security-conscious customers migrate their on-premises databases to the cloud, they have to comply with strict network security and isolation requirements. Such customers don’t allow any connections to the database except from within their internal network’s application tier, and they enforce strict security compliance and zoning requirements both on the AWS Cloud and on premises. These customers usually implement a strict three-tier zoned network:\n\n- DMZ (Demilitarized Zone) tier\n- Application tier\n- Database tier\n\nAccess to data in the database tier of each network is possible only from the application tier in the same network. Incoming and outgoing traffic to and from any external network must pass through the DMZ tier in the same network only. This strict network segmentation can be implemented in AWS using subnets with proper route tables for each tier, as illustrated in the following diagram.\n\n![image.png](https://dev-media.amazoncloud.cn/4c9e3317e4204587a4986e5e3c14f78b_image.png)\n\nOur customer wanted AWS DMS to be compliant with these requirements. If we consider [Amazon Virtual Private Cloud](https://console.aws.amazon.com/vpc) ([Amazon VPC](https://aws.amazon.com/cn/vpc/?trk=cndc-detail)) as the target network (labeled as AWS) and the local data center as the source network (labeled as DC), the network traffic flow initiates from the AWS DMS instance in the target network to the source database in the local data center. 
The network flow would look like the following diagram.\n\n![image.png](https://dev-media.amazoncloud.cn/ed0367db8a714819b06dcc391e4fc130_image.png)\n\nAWS DMS needs direct ODBC connectivity from its replication instance to both source and target databases. Traditional AWS DMS implementations look similar to the following diagram from a network connectivity perspective.\n\n![image.png](https://dev-media.amazoncloud.cn/76426b81903a4c3e837644764d7e5286_image.png)\n\nLet’s assume that the AWS DMS replication instance resides in the application tier (subnet ```App_AWS```) in the target network. In this scenario of multi-layered network segments, AWS DMS doesn’t have any components to install in the application tier (subnet ```App_DC```) in the source network. If you installed the AWS DMS replication instance in ```App_DC``` instead, then there aren’t any components to install in the target network subnet ```App_AWS```.\n\nThis implementation would violate our customer’s network security requirements in two ways:\n\n- The source database in the local data center subnet ```DB_DC``` is accessed from a different network directly. This is against the network security policy that mandates all incoming traffic into a database must directly come from the application tier subnet ```App_DC```. If the location of the replication instance is changed to local data center, the rule is still violated in the VPC network.\n- Outgoing traffic from the VPC leaves the network directly from the application subnet ```App_AWS``` and enters directly into the database tier ```DB_DC``` in the data center network. 
This isn’t allowed because all traffic flowing from one isolated network to another network must always leave and enter from the DMZ layer; no direct links from the other layers within the network are allowed.\n\nIn the next section, we provide a detailed solution that meets the above security requirements by using [NGINX™ Open Source](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/) as a transparent reverse proxy. We show how to provide end-to-end network security, and we run some performance benchmarks to understand the impact of the reverse proxy layers on the overall performance of AWS DMS.\n\n#### **Solution overview**\n\nTo use AWS DMS as the migration tool, we propose a network architecture that is compliant with the customer’s security requirements. AWS DMS supports migrating data across two VPCs or between a VPC and the customer DC using [AWS Direct Connect](https://aws.amazon.com/directconnect/) or VPN connections. For more information about standard network configurations for AWS DMS, see [Setting up a network for a replication instance](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html). We propose a solution based on Nginx as a TCP streaming proxy to change the way the AWS DMS replication instance connects to the source endpoint.\n\nWe configure the AWS DMS source endpoint to point at an Nginx proxy instead of the actual source database. When AWS DMS sends a request to the source endpoint Nginx server, the Nginx proxy relays the request to the upstream server. 
When a response is received from the upstream server, Nginx relays that response to the AWS DMS replication instance, facilitating the database migration from on premises to the AWS Cloud.\n\nWe implement a chain of proxy servers running in both the source and target networks to create a network architecture that is compliant with the network security requirements, as shown in the following diagram.\n\n![image.png](https://dev-media.amazoncloud.cn/15c12eec2f9e4335a5298f075de34ae3_image.png)\n\nFor this post, we utilize two different VPCs to represent the source and target networks. The source environment, originally the local data center network, is simulated by a different VPC in the AWS environment. There are two VPCs in this implementation: the source VPC and target VPC, with different CIDR ranges. The two VPCs are connected via [AWS Transit Gateway](https://console.aws.amazon.com/vpc/home?#TransitGateways:sort=transitGatewayId). To keep things straightforward, we use only one security group per VPC. The following diagram illustrates the network architecture.\n\n![image.png](https://dev-media.amazoncloud.cn/70dd4be2b51b407cb7f139a3b5feb59b_image.png)\n\nIn a real-life production network, the local data center is connected to the AWS Cloud via either a Direct Connect connection or an [AWS VPN](https://console.aws.amazon.com/vpc/home?#VpnConnections:sort=VpnConnectionId) site-to-site VPN link. We strongly recommend using individual security groups for each of the network segments in production workloads. As an example, in a production environment you can create a security group associated with each subnet: the ```DMZ_AWS``` subnet has the security group ```DMZ_AWS_SG```, the ```App_AWS``` subnet has ```App_AWS_SG```, and the ```DB_AWS``` subnet has ```DB_AWS_SG```. 
To secure traffic flow, you then create traffic flow rules as follows:\n\n![image.png](https://dev-media.amazoncloud.cn/9214a095ec114b82bd9a13695efc4139_image.png)\n\nYou can set up a similar set of security groups for your source network as well, if you are migrating from a different AWS account.\n\nLet’s now explore each of the component layers and the required configurations in detail, starting with the VPC setup.\n\n#### **VPC setup**\n\nWe created two VPCs to represent the source and target environments with different CIDR ranges, as shown in the following screenshot.\n\n![image.png](https://dev-media.amazoncloud.cn/b1b9962141c94da8a256c0e624fa59c7_image.png)\n\nYou establish connectivity between the two VPCs by setting up an [AWS Transit Gateway](https://aws.amazon.com/cn/transit-gateway/?trk=cndc-detail) and attaching both VPCs to it.\n\n#### **Source VPC**\n\nThe source VPC has three subnets to represent the three-tier model of the customer’s network. Each tier consists of one subnet, as shown in the following screenshot. The route table associated with the DMZ subnet allows traffic to egress to other VPCs via the [AWS Transit Gateway](https://aws.amazon.com/cn/transit-gateway/?trk=cndc-detail). The route table associated with the application tier only allows traffic to flow to the DMZ or database subnets or to machines within the application subnet. Similarly, the route table associated with the database subnet ensures traffic from the subnet can only be routed to machines within the same subnet or the application subnet. 
This simulates the strict three-tier network isolated segments from a traffic routing point of view.\n\n![image.png](https://dev-media.amazoncloud.cn/b4d858d8b0e647c792e98d8cca7bb5ad_image.png)\n\n#### **Target VPC**\n\nThe target VPC in this implementation represents the actual AWS target network in which the AWS DMS replication instance and the target [Amazon Relational Database Service](http://aws.amazon.com/rds) ([Amazon RDS](https://aws.amazon.com/cn/rds/?trk=cndc-detail)) instance reside. In this VPC, we have two subnets per tier with identical route tables. This is because the AWS DMS replication subnet group and the RDS subnet group both need at least two subnets from two different Availability Zones for a high availability configuration.\n\n![image.png](https://dev-media.amazoncloud.cn/1c457a54d94246148243a57db994e5e1_image.png)\n\nOnly the two subnets in the DMZ tier have a route to the other VPC via an [AWS Transit Gateway](https://aws.amazon.com/cn/transit-gateway/?trk=cndc-detail). This makes sure there is no direct connectivity from the application tier or database tier in the target VPC to the source VPC. The route table associated with the database tier only allows traffic to the application tier or systems within the database tier. Similarly, the route table associated with the application tier only allows traffic to the DMZ tier, the database tier, or systems within the application tier.\n\n#### **Security groups**\n\nFor this post, we used a single security group across all the subnets in a given VPC. In a real-life production scenario, we recommend implementing multiple security groups (at least one per subnet) to make sure that network traffic isolation can be achieved easily without needing to manage complex and large security group configurations.\n\nEvery security group contains inbound rules and outbound rules for managing ingress and egress traffic access control from the subnet. 
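The per-subnet pattern described above can also be sketched programmatically. The following is an illustrative sketch only: the security group names come from this post's example, the security group IDs are hypothetical, and the ports follow this setup (8080 for the Nginx stream proxies, 3306 for MySQL). The actual boto3 call is commented out because it requires live AWS credentials.

```python
def sg_rule(port, source_sg_id):
    """Build an EC2 IpPermissions entry that allows TCP traffic on `port`
    only from members of another security group (no open CIDR ranges)."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": source_sg_id}],
    }

# Hypothetical ID for the application tier security group (App_AWS_SG).
app_sg = "sg-0app00000000000000"

ingress = {
    # DMZ tier (Nginx proxy A) accepts the stream port only from the app tier.
    "DMZ_AWS_SG": [sg_rule(8080, app_sg)],
    # Database tier accepts MySQL only from the app tier.
    "DB_AWS_SG": [sg_rule(3306, app_sg)],
}

# To apply a rule for real (requires credentials and real group IDs):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0dmz00000000000000", IpPermissions=ingress["DMZ_AWS_SG"])
```

Referencing the source security group instead of a CIDR range keeps the rules valid even if instances are replaced and their private IPs change.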
For this post, we use permissive rules to allow all traffic across the two VPCs in both inbound and outbound directions. The outbound rules also allow for HTTP and HTTPS traffic.\n\nIn a production deployment, we recommend using specific rules to make sure only authorized traffic flows between the different subnet groups from known endpoints and across the two VPCs.\n\nThe following screenshot shows the inbound rules for the source security group.\n\n![image.png](https://dev-media.amazoncloud.cn/1ebf61cae62f4250be99a182697681c4_image.png)\n\nThe following screenshot shows the outbound rules for the source security group.\n\n![image.png](https://dev-media.amazoncloud.cn/ed9b03924b50404fa0be03b07497fe43_image.png)\n\nThe following screenshot shows the inbound rules for the target security group.\n\n![image.png](https://dev-media.amazoncloud.cn/e9eef5063db94825a0659c304115146c_image.png)\n\nThe following screenshot shows the outbound rules for the target security group.\n\n![image.png](https://dev-media.amazoncloud.cn/f39e083b468f46c1998001f3bae4dc38_image.png)\n\n#### **Source database**\n\nTo simulate the source database in a local data center, we installed MySQL version 5.7 on an [Amazon Elastic Compute Cloud](http://aws.amazon.com/ec2) (Amazon EC2) instance running [Amazon Linux 2](https://aws.amazon.com/amazon-linux-2/), which is launched in the database subnet in the source VPC, as shown in the following screenshot. For installation instructions, refer to [Installing MySQL on Linux Using the MySQL Yum Repository](https://dev.mysql.com/doc/refman/5.7/en/linux-installation-yum-repo.html).\n\n![image.png](https://dev-media.amazoncloud.cn/4a58de00360a468eb486089a1c3ad0f7_image.png)\n\nFor instructions on preparing MySQL to act as the source database for AWS DMS, refer to [Using a MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html). 
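Per that guide, the source MySQL server needs binary logging enabled in row format before AWS DMS can capture changes. The following is a hedged sketch of the relevant ```/etc/my.cnf``` settings (values are illustrative; consult the linked guide for the full list of requirements):

```
[mysqld]
server-id=1               # any nonzero server ID, unique in the replication topology
log-bin=mysql-bin         # enable the binary log
binlog_format=ROW         # AWS DMS requires row-based logging
binlog_row_image=Full     # log full row images
expire_logs_days=1        # retain binlogs long enough for AWS DMS to read them
```

Restart the mysqld service after editing the file so the settings take effect.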
As seen in the preceding screenshot, MySQL has the private IP address 10.0.3.90. We use this IP address in later steps to configure the AWS DMS source database endpoint.\n\n#### **Target RDS instance**\n\nBefore you create an RDS instance, you must [create a DB subnet group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.WebServerDB.CreateVPC.html#CHAP_Tutorials.WebServerDB.CreateVPC.DBSubnetGroup) from the two subnets created for the database tier in the target VPC, as shown in the following screenshot.\n\n![image.png](https://dev-media.amazoncloud.cn/5446c877713741b9ac46aa30cfefc54a_image.png)\n\nWe [created an RDS for MySQL instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html) with the same version as the source database in this DB subnet group in the target VPC, as shown in the following screenshot.\n\n![image.png](https://dev-media.amazoncloud.cn/d52fa7149a27452289747028d30558e9_image.png)\n\n#### **AWS DMS replication instance**\n\nBefore you create an AWS DMS replication instance, you must create an AWS DMS subnet group from the subnets in the application tier in the target VPC, as shown in the following screenshot. For instructions, see [Creating a replication subnet group](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html#CHAP_ReplicationInstance.VPC.Subnets).\n\n![image.png](https://dev-media.amazoncloud.cn/b28f578364af485aa789a43280b818fa_image.png)\n\nNext, you must [create an AWS DMS replication instance](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Creating.html) within this subnet group. For this post, we use a t3.micro instance with 20 GB of storage. 
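For reproducibility, the same replication instance can be described programmatically. The following is a hedged boto3 sketch: the instance identifier and subnet group name are hypothetical, while the instance class, storage size, and private accessibility mirror this post's setup. The API call itself is commented out because it requires live AWS credentials.

```python
# Parameters mirroring the replication instance used in this post.
replication_instance_params = {
    "ReplicationInstanceIdentifier": "dms-nginx-demo",            # hypothetical name
    "ReplicationInstanceClass": "dms.t3.micro",                   # t3.micro, as in this post
    "AllocatedStorage": 20,                                       # 20 GB of storage
    "ReplicationSubnetGroupIdentifier": "app-aws-subnet-group",   # hypothetical subnet group
    "PubliclyAccessible": False,                                  # private replication instance
}

# To create the instance for real (requires credentials):
# import boto3
# dms = boto3.client("dms")
# dms.create_replication_instance(**replication_instance_params)
```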
In production scenarios, follow the guidance for [choosing your instance type](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html) and [choosing your instance size](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.SizingReplicationInstance.html) to identify the right instance and storage for your replication instance. Our replication instance is a private replication instance that isn’t accessible from the internet.\n\n![image.png](https://dev-media.amazoncloud.cn/c286fea6428d43b389710088ca29d1f5_image.png)\n\n#### **Nginx proxy layers setup and configuration**\n\nBefore you configure the database endpoints in the AWS DMS replication instance, you must set up the intermediate Nginx proxy layers.\n\nWe use EC2 instances to host the Nginx server. The Nginx servers are configured to support [proxy chaining](https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/) via TCP streaming. This enables end-to-end TLS encrypted streams between the AWS DMS replication instance and the DB endpoints without the need for intermediate TLS termination on the Nginx proxy nodes. In our tests, we used the Amazon Linux 2 AMI with t2.medium as the instance type for the Nginx nodes. In a production deployment, the choice of the instance type for the Nginx node depends on the size of the database being replicated, number of daily transactions, expected replication duration, and network bandwidth, and so on.\n\nTo install and configure your Nginx instance on Amazon Linux 2 based on EC2 instances, complete the following steps:\n\n1. [Connect to your EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html) instance using a terminal interface and install Nginx.\n\n```\\nyum update -y\\namazon-linux-extras install nginx1 -y\\nyum install nginx-mod-stream -y\\nsystemctl enable nginx\\n```\n2. 
Update the Nginx config file (located in ```/etc/nginx/nginx.conf```) with the following settings.\n\n```\\n# For more information on configuration, see:\\n# * Official English Documentation: http://nginx.org/en/docs/\\n\\nuser nginx;\\nworker_processes auto;\\nerror_log /var/log/nginx/error.log;\\npid /run/nginx.pid;\\n\\n# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.\\ninclude /usr/share/nginx/modules/*.conf;\\n\\nevents {\\n worker_connections 1024;\\n}\\n\\nstream {\\n upstream <config> {\\n server <server name or IP>:<port>;\\n }\\n\\n server {\\n listen 8080;\\n proxy_pass <config>;\\n }\\n\\n}\\n```\n\nIn this configuration file, ```<config>``` is a name you provide for the configuration, and ```<server name or IP>```:<port> is the DNS server name or IP address that Nginx sends the traffic to.\\n\\nFor this example, we configured Nginx on port 8080, but you can run on any other port.\\n\\nRefer to the [Nginx documentation](http://nginx.org/en/docs/) for more details.\\n\\n3. Restart the Nginx service:\\n```\\nsystemctl restart nginx.service\\n```\\n\\nYou must repeat the preceding steps on each of the Nginx proxy servers in the setup.\\n\\n#### **NGINX proxy server implementation**\\n\\nFor this solution, we deploy three Nginx proxy servers using EC2 instances launched in the following subnets:\\n\\n- Target VPC:\\n- Nginx proxy A: DMZ subnet\\n- Source VPC:\\n- Nginx proxy B: DMZ subnet\\n- Nginx proxy C: application subnet\\n\\nYou must note down the private IP addresses of the Nginx proxy EC2 instances. 
For our Nginx configuration, the private IP addresses of the servers are as follows:\\n\\n- Nginx Proxy A: 172.16.1.55\\n- Nginx Proxy B: 10.0.0.218\\n- Nginx Proxy C: 10.0.1.167\\n\\nOn all three servers, Nginx is running on port 8080.\\n\\nAlso, the EC2 instance running the source database has the private IP address 10.0.3.90, and MySQL is running on default port 3306.\\n\\nTherefore, the traffic flow from the AWS DMS replication instance to the source database chains through Nginx proxies, as shown in the following figure.\\n\\n![image.png](https://dev-media.amazoncloud.cn/b7c15134a3c54a878337c51271b28782_image.png)\\n\\nIn this chain, each proxy forwards network traffic to the next block via a TCP stream. The following code is the Nginx configuration file for each proxy server:\\n\\n- Nginx proxy A:\\n\\n```\\nevents {\\n worker_connections 1024;\\n}\\n\\nstream {\\n upstream nginx-proxy-b {\\n server 10.0.0.218:8080;\\n }\\n\\n server {\\n listen 8080;\\n proxy_pass nginx-proxy-b;\\n }\\n\\n}\\n```\\n\\n- Nginx proxy B:\\n\\n```\\nevents {\\n worker_connections 1024;\\n}\\n\\nstream {\\n upstream nginx-proxy-c {\\n server 10.0.1.167:8080;\\n }\\n\\n server {\\n listen 8080;\\n proxy_pass nginx-proxy-c;\\n }\\n\\n}\\n\\n```\\n- Nginx proxy C:\\n\\n```\\nevents {\\n worker_connections 1024;\\n}\\nstream {\\n upstream source-database {\\n server 10.0.3.90:3306;\\n }\\n\\n server {\\n listen 8080;\\n proxy_pass source-database;\\n }\\n\\n}\\n```\\n\\nAt this point, the configuration of the Nginx proxy chain is complete. We can now configure the AWS DMS replication instance.\\n\\n#### **AWS DMS replication endpoints**\\n\\nThe next step in configuring the AWS DMS replication endpoints is to point it at the source and target databases.\\n\\n#### **Source endpoint**\\n\\nThe AWS DMS replication instance points to Nginx proxy A (the server with the IP address 172.16.1.55 and port 8080). 
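Conceptually, each hop in the proxy chain configured above is nothing more than a TCP byte relay: it accepts a connection on its listen port and copies bytes in both directions to a single upstream. The following is an illustrative (not production) Python sketch of what one hop does; the host, ports, and single-connection handling are simplifications for demonstration.

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src closes, then half-close dst."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve_relay(listen_port, upstream_host, upstream_port):
    """Accept one client and relay its TCP stream to the upstream,
    like an nginx `stream {}` block with a single upstream server.
    Returns the listening socket (port 0 picks a free port)."""
    listener = socket.create_server(("127.0.0.1", listen_port))

    def accept_loop():
        client, _ = listener.accept()
        upstream = socket.create_connection((upstream_host, upstream_port))
        # Relay both directions concurrently, as a stream proxy does.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener
```

Because the relay is protocol-agnostic, the MySQL wire protocol (including its TLS handshake) passes through untouched, which is why no TLS termination is needed on the intermediate proxies.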
The Nginx proxy chain implementation forwards traffic to the next step in the chain until it reaches the source database. To configure an endpoint in AWS DMS, refer to [Creating source and target endpoints](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.Creating.html). The following screenshot shows the configuration for the source endpoint based on our setup.\\n\\n![image.png](https://dev-media.amazoncloud.cn/f379ccadabf44856ae0d0e890aa8c339_image.png)\\n\\nThe endpoint should now be created, and you can run a successful endpoint connection test, as shown in the following screenshot.\\n\\n![image.png](https://dev-media.amazoncloud.cn/8ce7024a75314e3fb8b489f8cb2ad76d_image.png)\\n\\n#### **Target endpoint**\\n\\nYou create the target endpoint using the database endpoint and port directly because the AWS DMS replication instance and target RDS instance are located in the same VPC, albeit in different subnets. The following screenshot shows an overview of the target endpoint configuration.\\n\\n![image.png](https://dev-media.amazoncloud.cn/53847a09d7e54fe18f4d35be660e76a1_image.png)\\n\\nAfter you create the target endpoint, you should make sure that the AWS DMS replication instance can successfully connect to the target RDS instance by running an endpoint connection test, as shown in the following screenshot.\\n\\n![image.png](https://dev-media.amazoncloud.cn/cc749addf8424c0aa8cadec7f53a2297_image.png)\\n\\n#### **AWS DMS migration task**\\n\\nNow that you have configured the source and target endpoints, you can create an AWS DMS task to perform the migration via the Nginx proxy chain implementation.\\n\\n#### **Set up source and target databases**\\n\\nConnect to the MySQL source database via a remote shell. On the source MySQL database running on Amazon EC2, create a sample database called ```testdms``` with a simple table named ```testproxy``` containing a few rows, using the following commands:\\n\\n1. 
Log in to the MySQL database:\\n\\n```\\nmysql -u root -p\\n```\\n\\n2. Create a test database:\\n\\n```\\ncreate database testdms;\\n```\\n\\n3. Make the new database the active database:\\n\\n```\\nuse testdms;\\n```\\n4. Create a table:\\n\\n```\\ncreate table testproxy (a int not null primary key, b varchar(10));\\n```\\n5. Insert three rows into the table:\\n\\n```\\ninsert into testproxy values (10, 'kamran');\\ninsert into testproxy values (20, 'manju');\\ninsert into testproxy values (30, 'igor');\\n```\\n\\n6. List all the records in the new table:\\n\\n```\\nselect * from testproxy;\\n```\\n\\nThe following screenshot shows the series of commands and their outputs.\\n\\n![image.png](https://dev-media.amazoncloud.cn/c651da60aafa4728af2f114da66e9151_image.png)\\n\\nIf we connect to the target RDS instance via a remote shell and try to list the contents of the RDS instance using the ```show databases;``` command, it responds that the ```testdms``` database doesn’t exist in the RDS instance.\\n\\n![image.png](https://dev-media.amazoncloud.cn/49ae62000a1e4be3903945b0d3565a5e_image.png)\\n\\n#### **Create an AWS DMS task**\\n\\nNow you must create an AWS DMS task to migrate data from the source (simulated on-premises) database to the target database. You can create a task that does a full load from the source database into the target server as well as handles change data capture (CDC) to manage replication of data that has been inserted, updated, and deleted after the full load. This makes sure that the source and target databases remain in sync. To configure an AWS DMS task, see [Creating a task](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.Creating.html). The following screenshot shows the overview details for the AWS DMS task.\\n\\n![image.png](https://dev-media.amazoncloud.cn/c88c07c046f64ec397371555a6a3cfd3_image.png)\\n\\nWhile you’re creating the AWS DMS task, you must define a mapping rule that maps the entities in the source database you want to migrate to the target database. 
The following screenshot shows how to set up the mapping rule for our scenario.\\n\\n![image.png](https://dev-media.amazoncloud.cn/34204fdb7e39413b9e3cf957f05abd4d_image.png)\\n\\nMapping rules can be defined using JSON-based syntax. In the example screenshot, the mapping rule is defined as follows:\\n\\n```\\n{\\n \\"rules\\": [\\n {\\n \\"rule-type\\": \\"selection\\",\\n \\"rule-id\\": \\"1\\",\\n \\"rule-name\\": \\"1\\",\\n \\"object-locator\\": {\\n \\"schema-name\\": \\"testdms\\",\\n \\"table-name\\": \\"%\\"\\n },\\n \\"rule-action\\": \\"include\\",\\n \\"filters\\": []\\n }\\n ]\\n}\\n```\\n\\nThis rule specifies that the source objects (```“object-locator”```) are in the database (```“schema-name”```) called “testdms”, and that you are looking for objects (```“table-name”```) whose names match the criteria ```“%”```, a wildcard indicating all objects. The rule selects all the objects that match the criteria and performs the action ```“include”``` based on the value of the ```“rule-action”``` field. For a detailed understanding of how to define mapping rules in JSON, refer to [Specifying table selection and transformations rules using JSON](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.html).\\n\\nAfter you create the AWS DMS task, you can choose to run the task automatically or manually start it from the AWS DMS Dashboard page. The task runs and completes the full load replication from the source to the destination database. To verify the successful replication, you can open a remote shell to the target RDS instance and use the following commands to verify the successful completion of the AWS DMS task:\\n\\n1. Log in to the MySQL database:\\n```\\nmysql -u root -p\\n```\\n\\n2. Verify the database was migrated:\\n\\n```\\nshow databases;\\n```\\n\\n3. Make the new database the active database:\\n\\n```\\nuse testdms;\\n```\\n\\n4. 
List all the records in the new table:\\n\\n```\\nselect * from testproxy;\\n```\\n\\nThe following screenshot shows the outputs of these commands and indicates a successful full load migration.\\n\\n![image.png](https://dev-media.amazoncloud.cn/f8860bd0ef9d4b01b3187af0d3220246_image.png)\\n\\nYou can also verify that changes made on the source database after the full load are successfully replicated using CDC. On the source database, add a new row in the table, as shown in the following screenshot.\\n\\n![image.png](https://dev-media.amazoncloud.cn/f98b7c556b2b4c5ab8ac5d2b44a70dff_image.png)\\n\\nAfter a few seconds, you can connect to the target database to verify that the new row has been transferred to the target database.\\n\\n#### **Using SSL with endpoints**\\n\\nWith AWS DMS, you can enable SSL for both the source and target endpoints. For instructions on configuring SSL-enabled endpoints, see [Using SSL with AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.SSL.html). We tested this solution with TLS enabled only on the source database endpoint and the target database endpoint, with no intermediate TLS termination on the proxies. This makes sure that the connection is end-to-end encrypted and a malicious user with access to the intermediate proxies can’t intercept any of the network traffic. The following figure illustrates the resulting configuration with SSL termination.\\n\\n#### **Performance considerations**\\n\\nIn one of our tests, we migrated a 2 TB database to test the performance impact of instance size. 
We observed there was no significant spike in CPU usage on any of the Nginx instances, but there was a spike in the Network-In and Network-Out metrics on all three instances, as shown in the following screenshot.\\n\\n![image.png](https://dev-media.amazoncloud.cn/0793e5d7586542f18abaf05261417635_image.png)\\n\\nBased on this observation, ```CPUUtilization``` was less than 2% while the task was running. But the ```Network-In``` and ```Network-Out``` metrics indicate high network throughput requirements on the Nginx servers. For very large or dynamic datasets, we recommend using EC2 instances that are optimized for network throughput for the proxy servers. This can reduce the impact of the multiple Nginx proxy chain on the overall AWS DMS task throughput.\\n\\nWe also did a comparative test between using directly connected database endpoints to AWS DMS vs. endpoints that are connected via a network of proxies. We wanted to identify the resulting increase in latency and the impact on AWS DMS task completion introduced by the additional Nginx proxy servers. The following screenshot shows the results of the comparative tests.\\n\\n![image.png](https://dev-media.amazoncloud.cn/274e3e95cea849cd8f74b63ffa83d434_image.png)\\n\\nWe ran the test using the same dataset, source database, and target database. The only variation between the two AWS DMS tasks was how the source database was connected to the AWS DMS replication instance: directly vs. via a chain of three Nginx proxies. For the scenario with a direct connection between the AWS DMS replication instance and the source database, the entire task completed within 18:44 minutes. For the scenario where the source database was connected to the AWS DMS replication instance via a chain of proxies, the task took 20:5 minutes. 
The addition of three Nginx proxies into the replication traffic flow effectively increased the AWS DMS task duration by 13.5%.\\n\\nAlthough neither of these tests is completely scientific in its approach to benchmarking performance numbers for comparison purposes, you can conclude that careful selection of the EC2 instances for the Nginx proxies is needed. You must select instance types that provide sufficient network bandwidth and resources for the AWS DMS replication task to complete with as low a latency as possible, based on the nature of the databases you are replicating.\\n\\n#### **Cleanup**\\n\\nIf you followed along with the example setup, you should clean up the resources you created to avoid additional charges in the long run. To perform a cleanup of the resources:\\n\\n1. Delete the DMS task definition.\\n2. Delete the DMS source and target endpoint definitions.\\n3. [Delete the DMS replication instance](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Deleting.html).\\n4. [Terminate the EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html) for the Nginx proxies and the source DB server.\\n5. [Delete the Amazon RDS target database instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html).\\n\\n#### **Conclusion**\\n\\nAWS DMS is a viable and easy-to-implement database migration solution for both homogenous and heterogenous databases from on premises to the AWS Cloud, even with strict network security and segregation requirements. A high-performance proxy server like Nginx can help meet your network isolation and security requirements. Although this solution introduces additional latency into the replication process, you can shorten the delays by selecting the appropriate instances for the intermediate Nginx proxies to maximize their network performance. 
You can also scale Nginx proxy performance by setting up the open-source proxy server in a high availability, load balanced architecture. This can drive down the latency introduced by the proxy layers even more.\\n\\nTo learn more about how to configure an Nginx open-source server for high availability and load balancing, see [How to Set up Nginx High Availability with Pacemaker, Corosync, and Crmsh on Ubuntu 16.04](https://www.howtoforge.com/tutorial/how-to-set-up-nginx-high-availability-with-pacemaker-corosync-and-crmsh-on-ubuntu-1604/). If you have questions or feedback, please leave a comment for us.\\n\\n**Ready to get started?** The AABG ([Accenture AWS Business Group](https://aws.amazon.com/accentureaws/)) helps customers accelerate their pace of digital innovation and realize incremental business value from cloud adoption and transformation. Connect with our team at [accentureaws@amazon.com](mailto:accentureaws@amazon.com) to learn how to use AWS Database Migration Service (AWS DMS) in your products and services.\\n\\n#### **About the Authors**\\n\\n![image.png](https://dev-media.amazoncloud.cn/b2d696f6443b4f02a74c8c0cff669581_image.png)\\n**Rohit Satyanarayana** is a Partner Solutions Architect at AWS in Singapore and is part of the AWS GSI team working with Accenture globally. His hobbies are reading fantasy and science fiction, watching movies, and listening to music.\\n\\n![image.png](https://dev-media.amazoncloud.cn/0b1aa7651da2471d89b9542750b6840a_image.png)\\n\\n**Hemalatha Katari** is a Solution Architect at Accenture. She is part of the rapid prototyping team within the Accenture AWS Business Group (AABG). She helps organizations migrate and run their businesses in the AWS Cloud. 
She enjoys growing ornamental indoor plants and loves going for long nature trail walks.\\n","render":"<p><em>This is a guest post by Hemalatha Katari, Solution Architect at Accenture, in partnership with AWS Partner Solutions Architect Rohit Satyanarayana.</em></p>\\n<p>In this post, we discuss how to use <a href=\\"https://console.aws.amazon.com/dms/v2/home\\" target=\\"_blank\\">AWS Data Migration Service</a> (AWS DMS) to migrate your database while maintaining strict network segregation and network security requirements. We recently used this solution with a customer who had strict network security requirements.</p>\\n<p>When regulated industries and security conscious customers must migrate their on-premises database to the cloud, they have to comply with strict network security and isolation requirements. Such customers do not allow any connections to the database except from within their internal networks application tier. Customers will have strict security compliance and zoning requirements on the AWS Cloud and on premises. Customers with strict requirements usually implement a strict three-tier zoned network:</p>\n<ul>\\n<li>DMZ (Demilitarized Zone) tier</li>\n<li>Application tier</li>\n<li>Database tier</li>\n</ul>\\n<p>Access to data in the database tier in each network is possible only from the application tier in the same network. Incoming and outgoing traffic to and from any external network must be through the DMZ tier in the same network only. The strict network segmentation can be implemented in AWS utilizing subnets with proper route tables for each tier, as illustrated in the following diagram.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/4c9e3317e4204587a4986e5e3c14f78b_image.png\\" alt=\\"image.png\\" /></p>\n<p>Our customer wanted AWS DMS to be compliant with these requirements. 
If we consider <a href=\"https://console.aws.amazon.com/vpc\" target=\"_blank\">Amazon Virtual Private Cloud</a> (Amazon VPC) as the target network (labeled as AWS) and the local data center as the source network (labeled as DC), the network traffic flow initiates from the AWS DMS instance in the target network to the source database in the local data center. The network flow would look like the following diagram.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/ed0367db8a714819b06dcc391e4fc130_image.png\" alt=\"image.png\" /></p>
<p>AWS DMS needs direct ODBC connectivity from its replication instance to both source and target databases. Traditional AWS DMS implementations look similar to the following diagram from a network connectivity perspective.</p>
<p><img src=\"https://dev-media.amazoncloud.cn/76426b81903a4c3e837644764d7e5286_image.png\" alt=\"image.png\" /></p>
<p>Let’s assume that the AWS DMS replication instance resides in the application tier (subnet <code>App_AWS</code>) in the target network. In this scenario of multi-layered network segments, AWS DMS doesn’t have any components to install in the application tier (subnet <code>App_DC</code>) in the source network. If you installed the AWS DMS replication instance in <code>App_DC</code> instead, then there aren’t any components to install in the target network subnet <code>App_AWS</code>.</p>\n<p>This implementation would violate our customer’s network security requirements in two ways:</p>
<ul>\n<li>The source database in the local data center subnet <code>DB_DC</code> is accessed directly from a different network. This violates the network security policy that mandates that all incoming traffic to a database come directly from the application tier subnet <code>App_DC</code>. 
If the location of the replication instance is changed to the local data center, the rule is still violated in the VPC network.</li>\n<li>Outgoing traffic from the VPC leaves the network directly from the application subnet <code>App_AWS</code> and enters directly into the database tier <code>DB_DC</code> in the data center network. This isn’t allowed because all traffic flowing from one isolated network to another must always leave and enter from the DMZ layer; no direct links from the other layers within the network are allowed.</li>\n</ul>
<p>In the next section, we provide a detailed solution that meets the above security requirements by using <a href=\"https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/\" target=\"_blank\">NGINX™ Open Source</a> as a transparent reverse proxy. We show how to provide end-to-end network security and perform some performance benchmarks to understand the impact of the reverse proxy layers on the overall performance of AWS DMS.</p>\n<h4><a id=\"Solution_overview_31\"></a><strong>Solution overview</strong></h4>\n<p>To use AWS DMS as the migration tool, we propose a network architecture that is compliant with the customer’s security requirements. AWS DMS supports migrating data across two VPCs or between a VPC and the customer DC using <a href=\"https://aws.amazon.com/directconnect/\" target=\"_blank\">AWS Direct Connect</a> or VPN connections. For more information about standard network configurations for AWS DMS, see <a href=\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html\" target=\"_blank\">Setting up a network for a replication instance</a>. We propose a solution based on Nginx as a TCP streaming proxy to change the way the AWS DMS replication instance connects to the source endpoint.</p>\n<p>We configure AWS DMS with an Nginx proxy as the source database. 
When AWS DMS sends a request to the source endpoint Nginx server, the Nginx proxy relays the request to the upstream server. When a response is received from the upstream server, Nginx relays that response to the AWS DMS replication instance to facilitate the database migration from on-premises to the AWS Cloud.</p>\n<p>We implement a chain of proxy servers running in both source and target networks to create a network architecture that is compliant with the network security requirements, as shown in the following diagram.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/15c12eec2f9e4335a5298f075de34ae3_image.png\\" alt=\\"image.png\\" /></p>\n<p>For this post, we utilize two different VPCs to represent source and target networks. The source environment that was originally the local data center network is simulated by a different VPC in the AWS environment. There are two VPCs in this implementation: the source VPC and target VPC, with different CIDR ranges. The two VPCs are connected via <a href=\\"https://console.aws.amazon.com/vpc/home?#TransitGateways:sort=transitGatewayId\\" target=\\"_blank\\">AWS Transit Gateway</a>. To keep things straightforward, we use only one security group per VPC. The following diagram illustrates the network architecture.</p>\\n<p><img src=\\"https://dev-media.amazoncloud.cn/70dd4be2b51b407cb7f139a3b5feb59b_image.png\\" alt=\\"image.png\\" /></p>\n<p>In a real-life production network, the local data center is connected to the AWS Cloud via either a Direct Connect connection or using an <a href=\\"https://console.aws.amazon.com/vpc/home?#VpnConnections:sort=VpnConnectionId\\" target=\\"_blank\\">AWS VPN</a> site-to-site VPN link. We strongly recommend using individual security groups for each of the network segments in production workloads. In a production environment, you can design the security groups as follows as an example: Create a security group that is associated with each of the subnets. 
So the <code>DMZ_AWS</code> subnet will have the security group <code>DMZ_AWS_SG</code>. The <code>App_AWS</code> subnet will have a security group called <code>App_AWS_SG</code>. And the <code>DB_AWS</code> subnet will have a security group called <code>DB_AWS_SG</code> associated with it. To secure traffic flow, you then create traffic flow rules as follows:</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/9214a095ec114b82bd9a13695efc4139_image.png\" alt=\"image.png\" /></p>
<p>You can set up a similar set of security groups for your source network as well, if you are migrating from a different AWS account.</p>
<p>Let’s now explore each of the component layers and the required configurations in detail, starting with the VPC setup.</p>
<h4><a id=\"VPC_setup_53\"></a><strong>VPC setup</strong></h4>\n<p>We created two VPCs to represent the source and target environments with different CIDR ranges, as shown in the following screenshot.</p>
<p><img src=\"https://dev-media.amazoncloud.cn/b1b9962141c94da8a256c0e624fa59c7_image.png\" alt=\"image.png\" /></p>
<p>You establish connectivity between the two VPCs by setting up an AWS Transit Gateway and peering both the VPCs with the AWS Transit Gateway.</p>
<h4><a id=\"Source_VPC_61\"></a><strong>Source VPC</strong></h4>\n<p>The source VPC has three subnets to represent the three-tier model of the customer’s network. Each tier consists of one subnet, as shown in the following screenshot. The route table associated with the DMZ subnet allows traffic to egress to other VPCs via the AWS Transit Gateway. The route table associated with the application tier only allows traffic to either flow to the DMZ or database subnets or to machines within the application subnet. Similarly, the route table associated with the database subnet ensures traffic from the subnet can only be routed to machines within the same subnet or the application subnet. 
This simulates the strict three-tier isolated network segments from a traffic routing point of view.</p>
<p><img src=\"https://dev-media.amazoncloud.cn/b4d858d8b0e647c792e98d8cca7bb5ad_image.png\" alt=\"image.png\" /></p>
<h4><a id=\"Target_VPC_67\"></a><strong>Target VPC</strong></h4>\n<p>The target VPC in this implementation represents the actual AWS target network that the AWS DMS replication instance and the target <a href=\"http://aws.amazon.com/rds\" target=\"_blank\">Amazon Relational Database Service</a> (Amazon RDS) instance reside in. In this VPC, we have two subnets per tier with identical route tables. This is because the AWS DMS replication subnet group and the RDS subnet group both need at least two subnets from two different Availability Zones for a high availability configuration.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/1c457a54d94246148243a57db994e5e1_image.png\" alt=\"image.png\" /></p>
<p>Only the two subnets in the DMZ tier have a route to the other VPC via an AWS Transit Gateway. This makes sure there is no direct connectivity from the application tier or database tier in the target VPC to the source VPC. The route table associated with the database tier only allows traffic to the application tier or systems within the database tier. Similarly, the route table associated with the application tier only allows traffic to either the DMZ tier or the database tier or systems within the application tier.</p>
<h4><a id=\"Security_groups_75\"></a><strong>Security groups</strong></h4>\n<p>For this post, we used a single security group across all the subnets in a given VPC. 
In a real-life production scenario, we recommend implementing multiple security groups—at least one per subnet—to make sure that network traffic isolation can be achieved easily without needing to manage complex and large security group configurations.</p>\n<p>Every security group contains inbound rules and outbound rules for managing ingress and egress traffic access control from the subnet. For this post, we use permissive rules to allow all traffic across the two VPCs in both inbound and outbound directions. The outbound rules also allow for HTTP and HTTPS traffic.</p>\n<p>In a production deployment, we recommend using specific rules to make sure only authorized traffic flows between the different subnet groups from known endpoints and across the two VPCs.</p>\n<p>The following screenshot shows the inbound rules for the source security group.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/1ebf61cae62f4250be99a182697681c4_image.png\\" alt=\\"image.png\\" /></p>\n<p>The following screenshot shows the outbound rules for the source security group.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/ed9b03924b50404fa0be03b07497fe43_image.png\\" alt=\\"image.png\\" /></p>\n<p>The following screenshot shows the inbound rules for the target security group.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/e9eef5063db94825a0659c304115146c_image.png\\" alt=\\"image.png\\" /></p>\n<p>The following screenshot shows the outbound rules for the target security group.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/f39e083b468f46c1998001f3bae4dc38_image.png\\" alt=\\"image.png\\" /></p>\n<h4><a id=\\"Source_database_99\\"></a><strong>Source database</strong></h4>\\n<p>To simulate the source database in a local data center, we installed MySQL version 5.7 on an <a href=\\"http://aws.amazon.com/ec2\\" target=\\"_blank\\">Amazon Elastic Compute Cloud</a> (Amazon EC2) instance running <a href=\\"https://aws.amazon.com/amazon-linux-2/\\" target=\\"_blank\\">Amazon 
Linux 2</a>, which is launched in the database subnet in the source VPC, as shown in the following screenshot. For installation instructions, refer to <a href=\"https://dev.mysql.com/doc/refman/5.7/en/linux-installation-yum-repo.html\" target=\"_blank\">Installing MySQL on Linux Using the MySQL Yum Repository</a>.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/4a58de00360a468eb486089a1c3ad0f7_image.png\" alt=\"image.png\" /></p>
<p>For instructions on preparing MySQL to act as the source database for AWS DMS, refer to <a href=\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html\" target=\"_blank\">Using a MySQL-compatible database as a source for AWS DMS</a>. As seen in the preceding screenshot, the MySQL server has the private IP address 10.0.3.90. We use this IP address in later steps to configure the AWS DMS source database endpoint.</p>\n<h4><a id=\"Target_RDS_instance_107\"></a><strong>Target RDS instance</strong></h4>\n<p>Before you create an RDS instance, you must <a href=\"https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.WebServerDB.CreateVPC.html#CHAP_Tutorials.WebServerDB.CreateVPC.DBSubnetGroup\" target=\"_blank\">create a DB subnet group</a> from the two subnets created for the database tier in the target VPC, as shown in the following screenshot.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/5446c877713741b9ac46aa30cfefc54a_image.png\" alt=\"image.png\" /></p>
<p>We <a href=\"https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html\" target=\"_blank\">created an RDS for MySQL instance</a> with the same version as the source database in this DB subnet group in the target VPC, as shown in the following screenshot.</p>\n<p><img 
src=\\"https://dev-media.amazoncloud.cn/d52fa7149a27452289747028d30558e9_image.png\\" alt=\\"image.png\\" /></p>\n<h4><a id=\\"AWS_DMS_replication_instance_117\\"></a><strong>AWS DMS replication instance</strong></h4>\\n<p>Before you create an AWS DMS replication instance, you must create an AWS DMS subnet group from the subnets in the application tier in the target VPC, as shown in the following screenshot. For instructions, see <a href=\\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html#CHAP_ReplicationInstance.VPC.Subnets\\" target=\\"_blank\\">Creating a replication subnet group</a>.</p>\\n<p><img src=\\"https://dev-media.amazoncloud.cn/b28f578364af485aa789a43280b818fa_image.png\\" alt=\\"image.png\\" /></p>\n<p>Next, you must <a href=\\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Creating.html\\" target=\\"_blank\\">create an AWS DMS replication instance</a> within this subnet group. For this post, we use a t3.micro instance with 20 GB of storage. In production scenarios, follow the guidance for <a href=\\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html\\" target=\\"_blank\\">choosing your instance type</a> and <a href=\\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.SizingReplicationInstance.html\\" target=\\"_blank\\">choosing your instance size</a> to identify the right instance and storage for your replication instance. 
Our replication instance is a private replication instance that isn’t accessible from the internet.</p>\\n<p><img src=\\"https://dev-media.amazoncloud.cn/c286fea6428d43b389710088ca29d1f5_image.png\\" alt=\\"image.png\\" /></p>\n<h4><a id=\\"Nginx_proxy_layers_setup_and_configuration_127\\"></a><strong>Nginx proxy layers setup and configuration</strong></h4>\\n<p>Before you configure the database endpoints in the AWS DMS replication instance, you must set up the intermediate Nginx proxy layers.</p>\n<p>We use EC2 instances to host the Nginx server. The Nginx servers are configured to support <a href=\\"https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/\\" target=\\"_blank\\">proxy chaining</a> via TCP streaming. This enables end-to-end TLS encrypted streams between the AWS DMS replication instance and the DB endpoints without the need for intermediate TLS termination on the Nginx proxy nodes. In our tests, we used the Amazon Linux 2 AMI with t2.medium as the instance type for the Nginx nodes. 
In a production deployment, the choice of the instance type for the Nginx node depends on the size of the database being replicated, number of daily transactions, expected replication duration, and network bandwidth, and so on.</p>\\n<p>To install and configure your Nginx instance on Amazon Linux 2 based on EC2 instances, complete the following steps:</p>\n<ol>\\n<li><a href=\\"https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html\\" target=\\"_blank\\">Connect to your EC2</a> instance using a terminal interface and install Nginx.</li>\\n</ol>\n<pre><code class=\\"lang-\\">yum update -y\\namazon-linux-extras install nginx1 -y\\nyum install nginx-mod-stream -y\\nsystemctl enable nginx\\n</code></pre>\\n<ol start=\\"2\\">\\n<li>Update the Nginx config file (located in <code>/etc/nginx/nginx.conf</code>) with the following settings.</li>\\n</ol>\n<pre><code class=\\"lang-\\"># For more information on configuration, see:\\n# * Official English Documentation: http://nginx.org/en/docs/\\n\\nuser nginx;\\nworker_processes auto;\\nerror_log /var/log/nginx/error.log;\\npid /run/nginx.pid;\\n\\n# Load dynamic modules. 
See /usr/share/doc/nginx/README.dynamic.\\ninclude /usr/share/nginx/modules/*.conf;\\n\\nevents {\\n worker_connections 1024;\\n}\\n\\nstream {\\n upstream &lt;config&gt; {\\n server &lt;server name or IP&gt;:&lt;port&gt;;\\n }\\n\\n server {\\n listen 8080;\\n proxy_pass &lt;config&gt;;\\n }\\n\\n}\\n</code></pre>\\n<p>In this configuration file, <code>&lt;config&gt;</code> is a name you provide for the configuration, and <code>&lt;server name or IP&gt;</code>:&lt;port&gt; is the DNS server name or IP address that Nginx sends the traffic to.</p>\\n<p>For this example, we configured Nginx on port 8080, but you can run on any other port.</p>\n<p>Refer to the <a href=\\"http://nginx.org/en/docs/\\" target=\\"_blank\\">Nginx documentation</a> for more details.</p>\\n<ol start=\\"3\\">\\n<li>Restart the Nginx service:</li>\n</ol>\\n<pre><code class=\\"lang-\\">systemctl restart nginx.service\\n</code></pre>\\n<p>You must repeat the preceding steps on each of the Nginx proxy servers in the setup.</p>\n<h4><a id=\\"NGINX_proxy_server_implementation_187\\"></a><strong>NGINX proxy server implementation</strong></h4>\\n<p>For this solution, we deploy three Nginx proxy servers using EC2 instances launched in the following subnets:</p>\n<ul>\\n<li>Target VPC:</li>\n<li>Nginx proxy A: DMZ subnet</li>\n<li>Source VPC:</li>\n<li>Nginx proxy B: DMZ subnet</li>\n<li>Nginx proxy C: application subnet</li>\n</ul>\\n<p>You must note down the private IP addresses of the Nginx proxy EC2 instances. 
For our Nginx configuration, the private IP addresses of the servers are as follows:</p>\n<ul>\\n<li>Nginx Proxy A: 172.16.1.55</li>\n<li>Nginx Proxy B: 10.0.0.218</li>\n<li>Nginx Proxy C: 10.0.1.167</li>\n</ul>\\n<p>On all three servers, Nginx is running on port 8080.</p>\n<p>Also, the EC2 instance running the source database has the private IP address 10.0.3.90, and MySQL is running on default port 3306.</p>\n<p>Therefore, the traffic flow from the AWS DMS replication instance to the source database chains through Nginx proxies, as shown in the following figure.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/b7c15134a3c54a878337c51271b28782_image.png\\" alt=\\"image.png\\" /></p>\n<p>In this chain, each proxy forwards network traffic to the next block via a TCP stream. The following code is the Nginx configuration file for each proxy server:</p>\n<ul>\\n<li>Nginx proxy A:</li>\n</ul>\\n<pre><code class=\\"lang-\\">events {\\n worker_connections 1024;\\n}\\n\\nstream {\\n upstream nginx-proxy-b {\\n server 10.0.0.218:8080;\\n }\\n\\n server {\\n listen 8080;\\n proxy_pass nginx-proxy-b;\\n }\\n\\n}\\n</code></pre>\\n<ul>\\n<li>Nginx proxy B:</li>\n</ul>\\n<pre><code class=\\"lang-\\">events {\\n worker_connections 1024;\\n}\\n\\nstream {\\n upstream nginx-proxy-c {\\n server 10.0.1.167:8080;\\n }\\n\\n server {\\n listen 8080;\\n proxy_pass nginx-proxy-c;\\n }\\n\\n}\\n\\n</code></pre>\\n<ul>\\n<li>Nginx proxy C:</li>\n</ul>\\n<pre><code class=\\"lang-\\">events {\\n worker_connections 1024;\\n}\\nstream {\\n upstream source-database {\\n server 10.0.3.90:3306;\\n }\\n\\n server {\\n listen 8080;\\n proxy_pass source-database;\\n }\\n\\n}\\n</code></pre>\\n<p>At this point, the configuration of the Nginx proxy chain is complete. 
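To make the relay behavior concrete, the following illustrative Python sketch (not part of the actual setup; all ports are hypothetical local stand-ins for proxies A/B/C on 8080 and MySQL on 3306) mimics what each Nginx <code>stream</code> block does: accept a TCP connection and blindly relay bytes to a fixed upstream, so a client that connects only to the first hop transparently reaches the end of the chain.

```python
# Illustrative sketch only: a tiny TCP relay chain mimicking the Nginx
# `stream` proxies. Ports 18080/18081/13306 are hypothetical local values.
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def serve(port: int, handler) -> None:
    """Accept connections on 127.0.0.1:port and hand each one to `handler`."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(5)
    def loop():
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handler, args=(conn,), daemon=True).start()
    threading.Thread(target=loop, daemon=True).start()

def run_proxy(listen_port: int, upstream_port: int) -> None:
    """Equivalent of: stream { server { listen <port>; proxy_pass upstream; } }"""
    def handle(client):
        up = socket.create_connection(("127.0.0.1", upstream_port))
        threading.Thread(target=pipe, args=(up, client), daemon=True).start()
        pipe(client, up)
    serve(listen_port, handle)

serve(13306, lambda conn: pipe(conn, conn))  # echo server standing in for MySQL
run_proxy(18081, 13306)  # "proxy C" -> database
run_proxy(18080, 18081)  # "proxy A/B" -> proxy C

# The "replication instance" connects only to the first proxy, yet its bytes
# traverse the whole chain and come back.
with socket.create_connection(("127.0.0.1", 18080)) as c:
    c.sendall(b"SELECT 1;")
    buf = b""
    while len(buf) < 9:
        buf += c.recv(4096)
    print(buf)  # b'SELECT 1;'
```

Because the relay is a raw byte stream, the proxies never need to understand the MySQL wire protocol, which is also why TLS can pass through them end to end.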
We can now configure the AWS DMS replication instance.</p>\n<h4><a id=\\"AWS_DMS_replication_endpoints_274\\"></a><strong>AWS DMS replication endpoints</strong></h4>\\n<p>The next step in configuring the AWS DMS replication endpoints is to point it at the source and target databases.</p>\n<h4><a id=\\"Source_endpoint_278\\"></a><strong>Source endpoint</strong></h4>\\n<p>The AWS DMS replication instance points to Nginx proxy A (the server with the IP address 172.16.1.55 and port 8080). The Nginx proxy chain implementation forwards traffic to the next step in the chain until it reaches the source database. To configure an endpoint in AWS DMS, refer to <a href=\\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.Creating.html\\" target=\\"_blank\\">Creating source and target endpoints</a>. The following screenshot shows the configuration for the source endpoint based on our setup.</p>\\n<p><img src=\\"https://dev-media.amazoncloud.cn/f379ccadabf44856ae0d0e890aa8c339_image.png\\" alt=\\"image.png\\" /></p>\n<p>The endpoint should now be successfully created, and you can run a successful endpoint connection test as shown in the following screenshot.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/8ce7024a75314e3fb8b489f8cb2ad76d_image.png\\" alt=\\"image.png\\" /></p>\n<h4><a id=\\"Target_endpoint_288\\"></a><strong>Target endpoint</strong></h4>\\n<p>You create the target endpoint using the database endpoint and port because the AWS DMS replication instance and target RDS instance are located in the same VPC, but different subnets. 
The following screenshot shows an overview of the target endpoint configuration.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/53847a09d7e54fe18f4d35be660e76a1_image.png\\" alt=\\"image.png\\" /></p>\n<p>After you create the target endpoint, you should make sure that the AWS DMS replication instance can successfully connect to the target RDS instance by running an endpoint connection test, as shown in the following screenshot.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/cc749addf8424c0aa8cadec7f53a2297_image.png\\" alt=\\"image.png\\" /></p>\n<h4><a id=\\"AWS_DMS_migration_task_298\\"></a><strong>AWS DMS migration task</strong></h4>\\n<p>Now that you have configured the source and target endpoints, you can create an AWS DMS task to perform a successful migration via the Nginx proxy chain implementation.</p>\n<h4><a id=\\"Set_up_source_and_target_databases_302\\"></a><strong>Set up source and target databases</strong></h4>\\n<p>Connect to the MySQL source database via a remote shell. 
On the source MySQL database running on Amazon EC2, create a sample database called <code>testdms</code> with a simple table named <code>testproxy</code> containing a few rows, using the following commands:</p>\n<ol>\n<li>Log in to the MySQL database:</li>
</ol>\n<pre><code class=\"lang-\">mysql -u root -p\n</code></pre>\n<ol start=\"2\">\n<li>Create a test database:</li>
</ol>\n<pre><code class=\"lang-\">create database testdms;\n</code></pre>\n<ol start=\"3\">\n<li>Make the new database the active database:</li>
</ol>\n<pre><code class=\"lang-\">use testdms;\n</code></pre>\n<ol start=\"4\">\n<li>Create a table:</li>
</ol>\n<pre><code class=\"lang-\">create table testproxy (a int not null primary key, b varchar(10));\n</code></pre>\n<ol start=\"5\">\n<li>Insert three rows into the table:</li>
</ol>\n<pre><code class=\"lang-\">insert into testproxy values (10, 'kamran');\ninsert into testproxy values (20, 'manju');\ninsert into testproxy values (30, 'igor');\n</code></pre>\n<ol start=\"6\">\n<li>List all the records in the new table:</li>
</ol>\n<pre><code class=\"lang-\">select * from testproxy;\n</code></pre>\n<p>The following screenshot shows the series of commands and their outputs.</p>
<p><img src=\"https://dev-media.amazoncloud.cn/c651da60aafa4728af2f114da66e9151_image.png\" alt=\"image.png\" /></p>
<p>If we connect to the target RDS instance via a remote shell and try to list the contents of the RDS instance using the <code>show databases;</code> command, it responds that the <code>testdms</code> database doesn’t exist in the RDS instance.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/49ae62000a1e4be3903945b0d3565a5e_image.png\" alt=\"image.png\" /></p>
<h4><a id=\"Create_an_AWS_DMS_task_350\"></a><strong>Create an AWS DMS task</strong></h4>\n<p>Now you must create an AWS DMS task to migrate data from the source (simulated on-premises) database to the target database. 
You can create a task that does a full load from the source database into the target server and also handles change data capture (CDC) to manage replication of data that has been inserted, updated, and deleted after the full load. This makes sure that the source and target databases remain in sync. To configure an AWS DMS task, see <a href=\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.Creating.html\" target=\"_blank\">Creating a task</a>. The following screenshot shows the overview details for the AWS DMS task.</p>
<p><img src=\"https://dev-media.amazoncloud.cn/c88c07c046f64ec397371555a6a3cfd3_image.png\" alt=\"image.png\" /></p>
<p>While you’re creating the AWS DMS task, you must define a mapping rule that maps the entities in the source database you want to migrate to the target database. The following screenshot shows how to set up the mapping rule for our scenario.</p>
<p><img src=\"https://dev-media.amazoncloud.cn/34204fdb7e39413b9e3cf957f05abd4d_image.png\" alt=\"image.png\" /></p>
<p>Mapping rules can be defined using JSON-based syntax. In the example screenshot, the mapping rule is defined as follows:</p>
<pre><code class=\"lang-\">{\n &quot;rules&quot;: [\n {\n &quot;rule-type&quot;: &quot;selection&quot;,\n &quot;rule-id&quot;: &quot;1&quot;,\n &quot;rule-name&quot;: &quot;1&quot;,\n &quot;object-locator&quot;: {\n &quot;schema-name&quot;: &quot;testdms&quot;,\n &quot;table-name&quot;: &quot;%&quot;\n },\n &quot;rule-action&quot;: &quot;include&quot;,\n &quot;filters&quot;: []\n }\n ]\n}\n</code></pre>\n<p>The above rule specifies that the source objects (<code>“object-locator”</code>) are in the database (<code>“schema-name”</code>) called “testdms”. Within that database, it looks for objects (<code>“table-name”</code>) with names that meet the criteria <code>“%”</code>, a wildcard indicating all objects. The rule selects all the objects that match the criteria and performs the action <code>“include”</code> based on the value of the <code>“rule-action”</code> field. 
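The same JSON format also supports transformation rules alongside selection rules. As a hedged illustration (this fragment is not from the original task, and the rename value is hypothetical), a second rule could rename the schema on the target:

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": { "schema-name": "testdms", "table-name": "%" },
      "rule-action": "include",
      "filters": []
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-target": "schema",
      "object-locator": { "schema-name": "testdms" },
      "rule-action": "rename",
      "value": "testdms_migrated"
    }
  ]
}
```

With this variant, the selected tables would land in a schema named <code>testdms_migrated</code> on the target instead of <code>testdms</code>.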
For a detailed understanding of how to define mapping rules in JSON, refer to <a href=\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.html\" target=\"_blank\">Specifying table selection and transformations rules using JSON</a>.</p>\n<p>After you create the AWS DMS task, you can choose to run the task automatically or manually start it from the AWS DMS Dashboard page. The task runs and completes the full load replication from the source to the target database. To verify the successful replication, you can open a remote shell to the target RDS instance and use the following commands to verify the successful completion of the AWS DMS task:</p>
<ol>\n<li>Log in to the MySQL database:</li>
</ol>\n<pre><code class=\"lang-\">mysql -u root -p\n</code></pre>\n<ol start=\"2\">\n<li>Verify the database was migrated:</li>
</ol>\n<pre><code class=\"lang-\">show databases;\n</code></pre>\n<ol start=\"3\">\n<li>Make the new database the active database:</li>
</ol>\n<pre><code class=\"lang-\">use testdms;\n</code></pre>\n<ol start=\"4\">\n<li>List all the records in the new table:</li>
</ol>\n<pre><code class=\"lang-\">select * from testproxy;\n</code></pre>\n<p>The following screenshot shows the outputs of these commands and indicates a successful full load migration.</p>
<p><img src=\"https://dev-media.amazoncloud.cn/f8860bd0ef9d4b01b3187af0d3220246_image.png\" alt=\"image.png\" /></p>
<p>You can also verify that changes made on the source database after full load are successfully replicated using CDC. 
On the source database, add a new row in the table, as shown in the following screenshot.</p>
<p><img src=\"https://dev-media.amazoncloud.cn/f98b7c556b2b4c5ab8ac5d2b44a70dff_image.png\" alt=\"image.png\" /></p>
<p>After a few seconds, you can connect to the target database to verify that the new row has been transferred to the target database.</p>
<h4><a id=\"Using_SSL_with_endpoints_420\"></a><strong>Using SSL with endpoints</strong></h4>\n<p>With AWS DMS, you can enable SSL for both the source and target endpoints. For instructions on configuring SSL-enabled endpoints, see Using SSL with AWS Database Migration Service. We tested this solution with TLS enabled only on the source database endpoint and the target database endpoint. This makes sure that the connection is end-to-end encrypted and a malicious user with access to the intermediate proxies can’t intercept any of the network traffic. The following figure illustrates the resulting configuration with SSL termination.</p>
<h4><a id=\"Performance_considerations_426\"></a><strong>Performance considerations</strong></h4>\n<p>In one of our tests, we migrated a 2 TB database to test the performance impact of instance size. 
We observed there was no significant spike in CPU usage on any of the Nginx instances, but there was a spike in the Network-In and Network-Out metrics on all three instances, as shown in the following screenshot.</p>
<p><img src=\"https://dev-media.amazoncloud.cn/0793e5d7586542f18abaf05261417635_image.png\" alt=\"image.png\" /></p>
<p>Based on this observation, <code>CPUUtilization</code> was less than 2% when the task was running. But the <code>Network-In</code> and <code>Network-Out</code> metrics indicate high network throughput requirements on the Nginx servers. For very large or dynamic datasets, we recommend using EC2 instances that are optimized for network throughput for the proxy servers. This can reduce the impact of the Nginx proxy chain on the overall AWS DMS task throughput.</p>\n<p>We also did a comparative test between database endpoints connected directly to AWS DMS vs. endpoints connected via a network of proxies. We wanted to identify the increased latency and the impact on AWS DMS task completion introduced by the additional Nginx proxy servers. The following screenshot shows the results of the comparative tests.</p>
<p><img src=\"https://dev-media.amazoncloud.cn/274e3e95cea849cd8f74b63ffa83d434_image.png\" alt=\"image.png\" /></p>
<p>We ran the test using the same dataset, source database, and target database. The only variation between the two AWS DMS tasks was how the source database was connected to the AWS DMS replication instance: directly vs. via a chain of three Nginx proxies. For the scenario with a direct connection between the AWS DMS replication instance and the source database, the entire task completed within 18:44 minutes. For the scenario where the source database was connected to the AWS DMS replication instance via the chain of proxies, the task took 21:16 minutes. 
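As a quick sanity check on the reported overhead, the relative increase can be computed from the two task durations. This small sketch (not from the original post) assumes a proxied duration of 21:16, which is consistent with the ~13.5% figure quoted for this comparison:

```python
# Hypothetical helper: compute the percentage latency overhead between two
# DMS task durations given as "MM:SS" strings.
def to_seconds(mmss: str) -> int:
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

direct = to_seconds("18:44")   # direct connection
proxied = to_seconds("21:16")  # via the proxy chain (assumed value,
                               # consistent with the ~13.5% figure)
overhead_pct = (proxied - direct) / direct * 100
print(f"{overhead_pct:.1f}%")  # 13.5%
```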
The addition of three Nginx proxies into the replication traffic flow effectively increased the duration of the AWS DMS task by 13.5%.</p>\n<p>Although neither of these tests is a rigorous benchmark, you can conclude that careful selection of the EC2 instances for the Nginx proxies is needed. Select instance types that provide sufficient network bandwidth and resources for the AWS DMS replication task to complete with as little added latency as possible, based on the nature of the databases you are replicating.</p>\n<h4><a id=\\"Cleanup_442\\"></a><strong>Cleanup</strong></h4>\\n<p>If you followed along with the example setup, clean up the resources you created to avoid ongoing charges:</p>\n<ol>\\n<li>Delete the AWS DMS task definition</li>\n<li>Delete the AWS DMS source and target endpoint definitions</li>\n<li><a href=\\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Deleting.html\\" target=\\"_blank\\">Delete the DMS replication instance</a></li>\\n<li><a href=\\"https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html\\" target=\\"_blank\\">Terminate the EC2 instances</a> used for the Nginx proxies and the source DB server</li>\\n<li><a href=\\"https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html\\" target=\\"_blank\\">Delete the Amazon RDS target database instance</a></li>\\n</ol>\n<h4><a id=\\"Conclusion_452\\"></a><strong>Conclusion</strong></h4>\\n<p>AWS DMS is a viable and easy-to-implement migration solution for both homogeneous and heterogeneous databases moving from on premises to the AWS Cloud, even with strict network security and segregation requirements. A high-performance proxy server like Nginx can help meet your network isolation and security requirements. 
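Each hop in such a proxy chain can be a plain TCP forwarder. A minimal sketch of one hop using the Nginx stream module (the hostname and port are placeholders; a real deployment also needs appropriate timeouts and access controls):

```nginx
# One hop of the proxy chain: forward raw TCP toward the next proxy
# (or the database). Requires nginx built with the stream module.
stream {
    server {
        listen 1521;                               # port on which replication traffic arrives
        proxy_pass next-hop.proxy.internal:1521;   # placeholder for the next hop
        proxy_connect_timeout 10s;
    }
}
```

Because the proxies forward at the TCP layer, the TLS session negotiated between AWS DMS and the database passes through them untouched.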
Although this solution introduces additional latency into the replication process, you can shorten the delays by selecting instances for the intermediate Nginx proxies that maximize network performance. You can also scale Nginx proxy performance by deploying the open-source proxy server in a highly available, load-balanced architecture. This can drive down the latency introduced by the proxy layers even further.</p>\n<p>To learn more about how to configure an Nginx open-source server for high availability and load balancing, see <a href=\\"https://www.howtoforge.com/tutorial/how-to-set-up-nginx-high-availability-with-pacemaker-corosync-and-crmsh-on-ubuntu-1604/\\" target=\\"_blank\\">How to Set up Nginx High Availability with Pacemaker, Corosync, and Crmsh on Ubuntu 16.04</a>. If you have questions or feedback, please leave a comment for us.</p>\\n<p><strong>Ready to get started?</strong> The AABG (<a href=\\"https://aws.amazon.com/accentureaws/\\" target=\\"_blank\\">Accenture AWS Business Group</a>) helps customers accelerate their pace of digital innovation and realize incremental business value from cloud adoption and transformation. Connect with our team at <a href=\\"mailto:accentureaws@amazon.com\\" target=\\"_blank\\">accentureaws@amazon.com</a> to learn how to use [AWS Database Migration Service](https://aws.amazon.com/cn/dms/?trk=cndc-detail) (AWS DMS) in your products and services.</p>\\n<h4><a id=\\"About_the_Authors_460\\"></a><strong>About the Authors</strong></h4>\\n<p><img src=\\"https://dev-media.amazoncloud.cn/b2d696f6443b4f02a74c8c0cff669581_image.png\\" alt=\\"image.png\\" /><br />\\n<strong>Rohit Satyanarayana</strong> is a Partner Solutions Architect at AWS in Singapore and is part of the AWS GSI team working with Accenture globally. 
His hobbies are reading fantasy and science fiction, watching movies, and listening to music.</p>\\n<p><img src=\\"https://dev-media.amazoncloud.cn/0b1aa7651da2471d89b9542750b6840a_image.png\\" alt=\\"image.png\\" /></p>\n<p><strong>Hemalatha Katari</strong> is a Solution Architect at Accenture. She is part of the rapid prototyping team within the Accenture AWS Business Group (AABG). She helps organizations migrate and run their businesses in the AWS Cloud. She enjoys growing ornamental indoor plants and loves going for long nature trail walks.</p>\n"}