Running Workloads on Amazon EKS in Local Zones with a Failover Strategy

{"value":"### **Introduction**\n**Update 08/05/22:** We updated the title and conclusion to improve the accuracy of wording.\n\nAmazon Web Services Local Zones are a type of infrastructure deployment that places compute, storage, and other select [Amazon Web Services](https://aws.amazon.com/) services close to large population and industry centers. Customers can build and deploy applications close to end users to enable real-time gaming, live streaming, augmented and virtual reality, virtual workstations, and more.\n\nOn the other hand, Local Zones come with the limitation that only single Availability Zone (AZ) is available, [Amazon EC2](https://docs.aws.amazon.com/ec2/?id=docs_gateway) instance types are limited, and only [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) (ALB) is available for load balancing. Hence, for applications on [Amazon Elastic Kubernetes Service](https://aws.amazon.com/cn/eks/?trk=cndc-detail) ([Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway)), it requires additional configurations to run [Amazon EC2](https://docs.aws.amazon.com/ec2/?id=docs_gateway) worker nodes, and ALBs for [Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway) clusters.\n\nIn the post, we show:\n\n- Deploy an [Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway) cluster, with a self-managed node group running in the local zone, and a managed node group in the region with [Amazon EKS Blueprint](https://github.com/aws-ia/terraform-aws-eks-blueprints).\n- Consideration for shared data storage using [Amazon Elastic File System](https://aws.amazon.com/cn/efs/?trk=cndc-detail) ([Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail)), and database synchronization for Local Zones and Region using Amazon Web Services Database Migration Service ([Amazon Web Services DMS](https://docs.aws.amazon.com/dms/?id=docs_gateway))\n- Architect in an active-standby configuration using [Amazon Route 53](https://docs.aws.amazon.com/route53/?id=docs_gateway) to manage the traffic between the Local Zone and Region.\n\n\nOur goal is to provide an example for running the WordPress application workload in the Local Zone on [Amazon EKS](https://aws.amazon.com/cn/eks/?trk=cndc-detail), and failover to the region in the rare event of an issue in the Local one.\n\n### **Solution Overview**\nThe following diagram shows the high-level architecture, for running a WordPress website on [Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway) in the Local Zone.\n\nThe customer facing endpoint is a Route 53 domain ([demo.lindarren.com](http://demo.lindarren.com/)) and has a failover policy to the primary site in the Local Zone ([demo.primary.lindarren.com](http://demo.primary.lindarren.com/)) and backup site ([demo.backup.lindarren.com](http://demo.backup.lindarren.com/)) in the AZs in the Region.\n\nWhen the customer is connecting to the primary site (Local Zone), the request is served by the ALB in the Local Zone, and the backend servers are hosted by Kubernetes pods, running on the self-managed [Amazon EC2](https://docs.aws.amazon.com/ec2/?id=docs_gateway) nodes. The backend database in the Local Zone is an [Amazon EC2](https://docs.aws.amazon.com/ec2/?id=docs_gateway) instance with MariaDB installed.\n\nFor the backup site in the Region, there is an ALB and Kubernetes pods running on a managed node group. 
For the backup site in the Region, there is an ALB and Kubernetes pods running on a managed node group. The backup database runs on Amazon Relational Database Service ([Amazon RDS](https://docs.aws.amazon.com/rds/?id=docs_gateway)), a managed Database-as-a-Service (DBaaS) that makes it easy for IT administrators to set up, operate, and scale relational databases in the cloud. For data replication, we use Amazon Web Services Database Migration Service ([Amazon Web Services DMS](https://docs.aws.amazon.com/dms/index.html)) to replicate data from the [Amazon EC2](https://docs.aws.amazon.com/ec2/?id=docs_gateway) database instance in the Local Zone to the [Amazon RDS](https://docs.aws.amazon.com/rds/?id=docs_gateway) instance in the Region.

For persistent storage, the files are stored on an [Amazon EFS](https://docs.aws.amazon.com/efs/?id=docs_gateway) filesystem. Currently, creating [Amazon EFS](https://docs.aws.amazon.com/efs/?id=docs_gateway) mount targets in Local Zone subnets is not supported. Consequently, a few changes to the [Amazon EFS](https://docs.aws.amazon.com/efs/?id=docs_gateway) CSI (Container Storage Interface) driver ```DaemonSet``` in the Local Zone are necessary so that pods can mount an [Amazon EFS](https://docs.aws.amazon.com/efs/?id=docs_gateway) filesystem.

![image.png](https://dev-media.amazoncloud.cn/a8686aca58a945679a6f78bc7d249d04_image.png)

### **Deployment in the Local Zone**
For the application deployment, we use a combination of Kubernetes YAML files and [Terraform](https://www.terraform.io/) modules. We use [Terraform](https://www.terraform.io/) to create Amazon Web Services resources such as [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/?id=docs_gateway), [Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway), [Amazon EC2](https://docs.aws.amazon.com/ec2/?id=docs_gateway), [Amazon EFS](https://docs.aws.amazon.com/efs/?id=docs_gateway), [Amazon RDS](https://docs.aws.amazon.com/rds/?id=docs_gateway), [Amazon Web Services DMS](https://docs.aws.amazon.com/dms/index.html), and [Amazon Route 53](https://docs.aws.amazon.com/route53/?id=docs_gateway). For the application in Kubernetes, we use YAML manifest files and WordPress in this post.

**Prerequisites**

- An Amazon Web Services account with Administrator permissions. To use [Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway) Blueprints, an Amazon Web Services Identity and Access Management ([Amazon Web Services IAM](https://docs.aws.amazon.com/iam/?id=docs_gateway)) Role is required; an IAM User is not supported due to a known issue. For setup details, please refer to the docs [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html).
- The latest versions of the Amazon Web Services Command Line Interface ([Amazon Web Services CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)) (v2 recommended), kubectl, eksctl, [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git), [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli), jq (1.6 recommended), and the [Systems Manager Session Manager plugin](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html#install-plugin-linux).
- A domain name that you own (e.g., lindarren.com) and a hosted zone in [Amazon Route 53](https://docs.aws.amazon.com/route53/?id=docs_gateway). The domain name is necessary because we are using [Amazon Route 53](https://docs.aws.amazon.com/route53/?id=docs_gateway) domains and TLS (Transport Layer Security) certificates.
- A shell environment. An IDE (Integrated Development Environment) such as [Visual Studio Code](https://code.visualstudio.com/) or [Amazon Web Services Cloud9](https://docs.aws.amazon.com/cloud9/?id=docs_gateway) is recommended. Please make sure that you configure IAM role credentials on your own instead of using [Amazon Web Services Cloud9](https://docs.aws.amazon.com/cloud9/?id=docs_gateway)'s temporary credentials. For details, please find the guide [here](https://www.eksworkshop.com/020_prerequisites/workspaceiam/).
- Opt in to the Local Zone that you would like to run your workload in (a CLI sketch follows this list).
- An existing TLS certificate for web hosting as a resource in [Amazon Web Services Certificate Manager](https://aws.amazon.com/certificate-manager/) (ACM).
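Local Zone opt-in can also be checked and done from the CLI. The following is a minimal sketch for the Boston Local Zone group used in this post (```us-east-1-bos-1```); adjust the Region and group name to the zone you plan to use.

```bash
# List the Local Zones in us-east-1 and their opt-in status
aws ec2 describe-availability-zones \
  --region us-east-1 \
  --all-availability-zones \
  --filters Name=zone-type,Values=local-zone \
  --query 'AvailabilityZones[].{Zone:ZoneName,Group:GroupName,Status:OptInStatus}' \
  --output table

# Opt in to the zone group used in this post
aws ec2 modify-availability-zone-group \
  --region us-east-1 \
  --group-name us-east-1-bos-1 \
  --opt-in-status opted-in
```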
Now clone the source code to your working directory and configure a few aliases.

```bash
git clone https://github.com/aws-samples/eks-localzone-demo.git
# A few optional shorthands
alias tf=terraform
alias k=kubectl
```

### **Walkthrough**
#### **Step 1. Deploy VPC**

The first thing we'll need to provision for this architecture is a VPC containing both the Local Zone and AZs, for the [Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway) cluster and database instances. There are three public and three private subnets in the AZs, and one private and one public subnet in the Local Zone.

In the ```main.tf```, we use the ```vpc``` module to create the subnets in the AZs. For the Local Zone subnets, we create ```aws_subnet``` resources.

```hcl
...

resource "aws_subnet" "public-subnet-lz" {
  vpc_id            = module.vpc.vpc_id
  availability_zone = local.lzs[0]
  ...
}

resource "aws_subnet" "private-subnet-lz" {
  ...
}

resource "aws_route_table_association" "public-subnet-lz-rta" {
  subnet_id      = aws_subnet.public-subnet-lz.id
  route_table_id = module.vpc.public_route_table_ids[0]
}

resource "aws_route_table_association" "private-subnet-lz-rta" {
  subnet_id      = aws_subnet.private-subnet-lz.id
  route_table_id = module.vpc.private_route_table_ids[0]
}
```

To create the VPC, let's review and define the input variables. The VPC is in ```us-east-1```, and the Local Zone is ```us-east-1-bos-1a```. You need to provide a ```name``` and ```vpc_cidr``` for the VPC.

```bash
cd deploy/01-vpc
vim demo.auto.tfvars
```

```hcl
name         = "demo-vpc"
vpc_cidr     = "10.0.0.0/16"
cluster_name = "lindarr-demo" # Name of EKS Cluster, for subnet tagging
region       = "us-east-1"
lzs          = ["us-east-1-bos-1a"]
```

Deploy the VPC infrastructure using the ```terraform``` CLI.

```bash
terraform init
terraform apply -auto-approve
```

From the output (or by running ```terraform output```), get the VPC ID and subnet IDs, including the subnets in the AZs and in the Local Zone. For deploying additional resources like [Amazon RDS](https://docs.aws.amazon.com/rds/?id=docs_gateway), [Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway), and [Amazon EFS](https://docs.aws.amazon.com/efs/index.html) in the upcoming steps, you can use the subnet IDs from this output.

```
➜ 01-vpc git:(main) ✗ terraform output
private_subnets = [
  "subnet-04bfbdb56eab20f3f",
  "subnet-0282d89055cab1760",
  "subnet-0e3d213bfb21127fa",
]
private_subnets_local_zone = "subnet-0179a7e06585a551f"
public_subnets = [
  "subnet-0d05de32e811f03c4",
  "subnet-0c2d26c64af1f9889",
  "subnet-0e5495f6c4218f5aa",
]
public_subnets_local_zone = "subnet-0b49a2a528a2d2e68"
vpc_id = "vpc-0c544fbcafdbbb035"
vpc_id_cidr = "10.0.0.0/16"
```
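If you prefer not to copy the IDs by hand, a small sketch like the one below pulls them straight from the Terraform state for use in the later modules. It assumes the output names shown above and that ```jq``` is installed.

```bash
# Run inside deploy/01-vpc after `terraform apply`
export VPC_ID=$(terraform output -raw vpc_id)
export LZ_PRIVATE_SUBNET=$(terraform output -raw private_subnets_local_zone)
export LZ_PUBLIC_SUBNET=$(terraform output -raw public_subnets_local_zone)
# List-type outputs are easiest to consume via -json and jq
export PRIVATE_SUBNETS=$(terraform output -json private_subnets | jq -r 'join(",")')
echo "$VPC_ID $LZ_PRIVATE_SUBNET $LZ_PUBLIC_SUBNET $PRIVATE_SUBNETS"
```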
#### **Step 2. Deploy Amazon EKS Cluster**
Next, we use [Amazon EKS Blueprints](https://github.com/aws-ia/terraform-aws-eks-blueprints) to create an [Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway) cluster, including the Kubernetes control plane, a managed node group in the Region, and a self-managed node group in the Local Zone.

Change the working directory to ```02-eks``` and edit the variables.

```bash
cd ../02-eks
vim demo.auto.tfvars
```

Modify the variables, mainly the resource IDs of the VPC and subnets (by copying the partial output from the first module).

```hcl
vpc_id = "vpc-0c544fbcafdbbb035"
private_subnets = [
  "subnet-04bfbdb56eab20f3f",
  "subnet-0282d89055cab1760",
  "subnet-0e3d213bfb21127fa",
]
private_subnets_local_zone = "subnet-0179a7e06585a551f"
cluster_name               = "my-eks-demo-cluster"
domain_name_in_route53     = "lindarren.com"
```

In the ```main.tf```, we use the [Amazon EKS Blueprints for Terraform](https://github.com/aws-ia/terraform-aws-eks-blueprints) module to create the [Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway) cluster. It makes creating an [Amazon EKS cluster](https://docs.aws.amazon.com/eks/?id=docs_gateway) easier, especially for creating self-managed node groups in Local Zone subnets. Also note that, in the Local Zone, instance types are limited and the [Amazon EBS](https://docs.aws.amazon.com/ebs/?id=docs_gateway) volume type is limited to ```gp2```.

In addition, the security group rules for self-managed nodes are restrictive in the module, so we add rules that let self-managed nodes and managed nodes communicate without issues. These additional rules are required for mixed managed node group and self-managed node group workloads to avoid [CoreDNS queries being blocked by security group rules](https://github.com/aws-ia/terraform-aws-eks-blueprints/issues/619).

Below are some code snippets from the ```main.tf```:

```hcl
...

  # EKS Self-Managed Node Group in Local Zone
  self_managed_node_groups = {
    self_mg_4 = {
      node_group_name    = "self-managed-ondemand"
      instance_type      = "t3.xlarge"       # instance types are limited in the local zone
      launch_template_os = "amazonlinux2eks" # or bottlerocket
      block_device_mappings = [
        {
          device_name = "/dev/xvda"
          volume_type = "gp2" # Local Zone supports gp2 volumes only
          volume_size = "100"
        },
      ]
      subnet_ids = [var.local_zone_private_subnet_id]
    },
  }

...
  # https://github.com/aws-ia/terraform-aws-eks-blueprints/issues/619
  # Allow Connection from other nodes
  node_security_group_additional_rules = {
    egress_all = {
      description = "Node all egress"
      ...
    }
  }
...
```

```hcl
resource "aws_security_group_rule" "allow_node_sg_to_cluster_sg" {
  # Self-managed Nodegroup to Cluster API/Managed Nodegroup all traffic
  source_security_group_id = module.eks_blueprints.worker_node_security_group_id
  security_group_id        = module.eks_blueprints.cluster_primary_security_group_id
  ...
}

resource "aws_security_group_rule" "allow_node_sg_from_cluster_sg" {
  # Cluster API/Managed Nodegroup to Self-Managed Nodegroup all traffic
  source_security_group_id = module.eks_blueprints.cluster_primary_security_group_id
  security_group_id        = module.eks_blueprints.worker_node_security_group_id
  ...
}
```

In the ```eks_blueprints_kubernetes_addons``` module in the ```main.tf```, you can enable several add-ons directly; the module both creates the IAM Roles for Service Accounts and installs the Helm charts. I use the [Amazon EFS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html), the [Amazon Web Services Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html), and [External DNS](https://github.com/kubernetes-sigs/external-dns), so I set the respective keys to ```true```. These add-ons will be deployed by the Terraform module after [Amazon EKS](https://aws.amazon.com/cn/eks/?trk=cndc-detail) cluster creation.

```hcl
module "eks_blueprints_kubernetes_addons" {
  ...
  enable_amazon_eks_aws_ebs_csi_driver = true
  enable_aws_load_balancer_controller  = true
  enable_metrics_server                = true
  enable_external_dns                  = true
  ...
}
```

Now let's move forward and create the EKS cluster by running ```terraform``` commands. The EKS cluster, node group creation, and add-on installation take approximately 20 minutes to complete.

```bash
terraform init
terraform plan
terraform apply -auto-approve
```
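Because the apply runs for a while, you can poll the control plane state from a second terminal. A minimal sketch, assuming the cluster name from the tfvars above:

```bash
# Block until the control plane reports ACTIVE, then print the status
aws eks wait cluster-active --name my-eks-demo-cluster --region us-east-1
aws eks describe-cluster --name my-eks-demo-cluster --region us-east-1 \
  --query 'cluster.status' --output text
```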
After the commands complete, the EKS cluster, the nodes in the Local Zone, and the add-ons have been deployed. By running ```kubectl get node``` you will be able to find node(s) in the ```us-east-1-bos-1a``` zone, and by running ```kubectl get pod``` you can find that the ```aws-load-balancer-controller``` and ```external-dns``` pods are running and ready.

```bash
➜ ~ aws eks update-kubeconfig \
--name $(tf output eks_cluster_id | jq . -r) \
--region us-east-1
Updated context arn:aws:eks:us-east-1:091550601287:cluster/lindarr-demo in /Users/lindarr/.kube/config

# Some node(s) are running in local zone
➜ ~ kubectl get node --label-columns failure-domain.beta.kubernetes.io/zone
NAME                          STATUS   ROLES    AGE     VERSION               ZONE
ip-10-0-11-232.ec2.internal   Ready    <none>   14d     v1.22.6-eks-7d68063   us-east-1b
ip-10-0-15-39.ec2.internal    Ready    <none>   3d17h   v1.22.6-eks-7d68063   us-east-1-bos-1a
...

# Amazon Web Services LB Controller and External DNS are running
➜ ~ kubectl get pod -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-75bd4dfcbd-bwdqt   1/1     Running   0          11d
aws-load-balancer-controller-75bd4dfcbd-kx8l5   1/1     Running   0          3d16h
aws-node-bxzkb                                  1/1     Running   0          11d
aws-node-p8bm7                                  1/1     Running   0          3d16h
coredns-7f5998f4c-886lb                         1/1     Running   0          11d
coredns-7f5998f4c-cv5b8                         1/1     Running   0          3d16h
ebs-csi-controller-588dffc699-vh8gb             5/5     Running   0          3d16h
ebs-csi-controller-588dffc699-zkxxh             5/5     Running   0          3d16h
ebs-csi-node-898nj                              3/3     Running   0          11d
ebs-csi-node-b4b5r                              3/3     Running   0          3d16h
efs-csi-controller-9d944546-9s6cz               3/3     Running   0          11d
efs-csi-controller-9d944546-gtmc9               3/3     Running   0          3d16h
efs-csi-node-7klzk                              3/3     Running   0          8d
efs-csi-node-wzwlc                              3/3     Running   0          3d16h
kube-proxy-n6s4q                                1/1     Running   0          14d
kube-proxy-vhdrx                                1/1     Running   0          3d16h
metrics-server-694d47d564-zxfrs

# External DNS is running
➜ ~ kubectl get po -n external-dns
NAME                           READY   STATUS    RESTARTS   AGE
external-dns-96c667c79-88zcv   1/1     Running   0          25d
```
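Optionally, confirm that External DNS has discovered your hosted zone by checking its logs. This sketch assumes the add-on created a Deployment named ```external-dns``` in the ```external-dns``` namespace, as the pod listing above suggests.

```bash
# Look for hosted zone discovery messages in the controller logs
kubectl -n external-dns logs deploy/external-dns --tail=50 | grep -i zone
```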
#### **Step 3. Deploy Amazon EFS filesystems and targets**

In the following sections, we deploy the necessary Amazon Web Services resources for WordPress, our demonstration application for this post. If you are going to deploy your own application and would like to know the caveats related to [Amazon EKS](https://docs.aws.amazon.com/eks/?id=docs_gateway) in the Local Zone (especially for ALB Ingress), you can fast-forward to Step 6 directly.

WordPress on Kubernetes requires a persistent volume to store the application and data. We chose [Amazon EFS](https://docs.aws.amazon.com/efs/?id=docs_gateway) for this demonstration because the storage needs to be accessible from both the Local Zone and the AZs.
On the other hand, if your application accesses the persistent volume frequently and requires low latency, consider [Amazon EBS](https://aws.amazon.com/cn/ebs/?trk=cndc-detail) for the persistent volume; you then need another mechanism to replicate or back up the data on [Amazon EBS](https://docs.aws.amazon.com/ebs/?id=docs_gateway) volumes from the Local Zone to the Availability Zones.

Now let's create the [Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail) resources using Terraform, and get the [Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail) filesystem and access point IDs for the ```volumeHandle``` of the PV.

```bash
cd ../03-efs
vim demo.auto.tfvars # Edit the VPC and subnet IDs

tf init
tf plan
tf apply -auto-approve
```

```
➜ 03-efs git:(main) ✗ terraform output
efs_ap_id = "fsap-03b76858b781b84ff"
efs_id = "fs-08312777c25f61ee9"
volumeHandle = "fs-08312777c25f61ee9::fsap-03b76858b781b84ff"
```

Since [Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail) mount targets are not supported in the Local Zone, we make some [tweaks](https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/240#issuecomment-676849346) to the [Amazon EFS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html) so that the [Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail) mount points in the Region can be resolved on the Local Zone worker nodes without errors. Use ```kubectl``` to patch the [Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail) CSI manifests and add ```hostAliases``` for the [Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail) mount points.

```yaml
➜ 03-efs git:(main) ✗ terraform output
efs_ap_id = "fsap-046d60b356c84b394"
efs_id = "fs-024f950b4c448cc67"
efs_mount_target = [
  "10.0.10.87",
  "10.0.11.151",
  "10.0.12.41",
]
volumeHandle = "fs-024f950b4c448cc67::fsap-046d60b356c84b394"


➜ ~ vim efs-node-patch.yaml

spec:
  template:
    spec:
      # Add host aliases here so that EFS mount points can be resolved in Local Zones.
      # Otherwise, DNS resolution will fail if the CoreDNS pod is running in the local zone,
      # or fail randomly if one CoreDNS pod is in an AZ and another is in the Local Zone.
      hostAliases:
      - hostnames:
        - fs-08312777c25f61ee9.efs.us-east-1.amazonaws.com
        ip: 10.0.10.26
      - hostnames:
        - fs-08312777c25f61ee9.efs.us-east-1.amazonaws.com
        ip: 10.0.12.4
      - hostnames:
        - fs-08312777c25f61ee9.efs.us-east-1.amazonaws.com
        ip: 10.0.11.140
```

```bash
➜ 03-efs git:(main) ✗ kubectl patch daemonset -n kube-system efs-csi-node --patch-file efs-node-patch.yaml
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
daemonset.apps/efs-csi-node patched
```
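Instead of copying the mount target IPs into the patch file by hand, you can generate the ```hostAliases``` entries from the API. A sketch, run inside ```03-efs``` and assuming the ```efs_id``` output shown above:

```bash
EFS_ID=$(terraform output -raw efs_id)
EFS_DNS="${EFS_ID}.efs.us-east-1.amazonaws.com"

# Emit one hostAliases entry per mount target IP
aws efs describe-mount-targets --file-system-id "$EFS_ID" --region us-east-1 \
  --query 'MountTargets[].IpAddress' --output json |
  jq -r --arg host "$EFS_DNS" \
    '.[] | "      - hostnames:\n        - \($host)\n        ip: \(.)"'
```

Paste the emitted lines under ```hostAliases:``` in ```efs-node-patch.yaml```.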
#### **Step 4. Deploy Amazon EC2 Database Instances, Amazon RDS, and Amazon Web Services DMS**

For the database tier, we run MariaDB on [Amazon EC2](https://aws.amazon.com/cn/ec2/?trk=cndc-detail) in the Local Zone as the primary site and [Amazon RDS](https://docs.aws.amazon.com/rds/?id=docs_gateway) in the Region, and use [Amazon Web Services DMS](https://docs.aws.amazon.com/dms/index.html) to replicate the tables and records from the [Amazon EC2](https://aws.amazon.com/cn/ec2/?trk=cndc-detail) instance to [Amazon RDS](https://aws.amazon.com/cn/rds/?trk=cndc-detail). We create the resources using the Terraform module located in the ```04-database``` directory.

If you haven't created the required [Amazon Web Services DMS role](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html) before, please specify ```create_iam_roles = true``` in the ```demo.auto.tfvars``` file. If you don't have an SSH key pair, please refer to the [docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-key-pairs.html) to create one and replace the key name ```my_ssh_key_name``` in the ```.tfvars``` file.

```bash
cd ../04-database
vim demo.auto.tfvars # Edit the VPC and subnet IDs
```

```hcl
private_subnets = [
  "subnet-01f9037408ae338ad",
  "subnet-0f30e01d3f9addd62",
  "subnet-0096b2f4142dbdae2",
]
private_subnets_local_zone = "subnet-0f19d51410f6167ac"

ssh_key_name   = "my_ssh_key_name" # Replace with your SSH key name
vpc_cidr_block = "10.0.0.0/16"

vpc_id = "vpc-0a65e88418d47f0ee"

create_iam_roles = true # set to true if the required IAM roles have not been created before
```

```bash
terraform init
terraform plan
terraform apply -auto-approve

...

➜ 04-database git:(main) ✗ tf output
db_ec2_instance_id = "i-019b9172637105e4e"
db_ec2_instance_ip = "10.0.15.200"
ec2_mariadb_password = <sensitive>
rds_endpoint = "demo-test-mariadb-instance.cdyids0dslnl.us-east-1.rds.amazonaws.com:3306"
rds_password = <sensitive>

➜ 04-database git:(main) ✗ terraform output rds_password
"bbFVta-ExampleRDSPassword"
```

You can SSH into the DB instance via a bastion host (details can be found in this [blog](https://aws.amazon.com/blogs/infrastructure-and-automation/deploy-bastion-hosts-into-private-subnets-with-aws-cdk/)), or use SSM to log in to the instance and configure the database. Please ensure that you have installed the [Session Manager plugin](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html#install-plugin-linux) mentioned in the documentation.

```bash
➜ 04-database git:(main) aws ssm start-session \
    --region us-east-1 \
    --target $(tf output db_ec2_instance_id | jq . -r)
```

After the MariaDB server has started successfully, we run the script below to create the ```wordpress``` database and ```wordpress``` user for the WordPress post data, and grant the privileges necessary for replication. Replace ```wordpress99``` with your own database user password.

```bash
bash

sudo mysql -sfu root -e "GRANT ALL PRIVILEGES ON wordpress.* to 'wordpress'@'%' IDENTIFIED BY 'wordpress99';"
sudo mysql -sfu root -e "GRANT SUPER, RELOAD, PROCESS, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO wordpress@'%';"
sudo mysql -sfu root -e "FLUSH PRIVILEGES;"

sudo systemctl stop mariadb
```
Next, we will use Amazon Web Services DMS to replicate data changes from MariaDB on [Amazon EC2](https://aws.amazon.com/cn/ec2/?trk=cndc-detail) at a later stage, so we need to enable and configure the ```bin-log``` for replication. Here is the modification for MariaDB on Amazon EC2.

```bash
sudo tee /etc/my.cnf.d/server.cnf<<EOT
[mysqld]
log_bin=/var/lib/mysql/bin-log
log_bin_index=/var/lib/mysql/mysql-bin.index
expire_logs_days=2
binlog_format=ROW
EOT

sudo systemctl start mariadb

# Ctrl^D twice to exit the shell and session
```
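Before wiring up DMS, it's worth verifying from the same SSM session that binary logging is actually enabled after the restart:

```bash
sudo mysql -e "SHOW VARIABLES LIKE 'log_bin%';"  # log_bin should be ON
sudo mysql -e "SHOW MASTER STATUS;"              # should list a bin-log file and position
sudo mysql -e "SELECT @@binlog_format;"          # should return ROW
```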
To achieve high availability for the database tier, we create an [Amazon RDS](https://aws.amazon.com/cn/rds/?trk=cndc-detail) for MariaDB instance in the Region as a standby replica and use Amazon Web Services DMS to replicate data between MariaDB on [Amazon EC2](https://aws.amazon.com/cn/ec2/?trk=cndc-detail) and [Amazon RDS](https://aws.amazon.com/cn/rds/?trk=cndc-detail) for MariaDB.

In the Terraform module, we create the required Amazon Web Services resources, including IAM roles, the Amazon Web Services DMS instance, the source and target endpoints, and the Amazon Web Services DMS replication task, and use Full load + CDC to copy all data from MariaDB on [Amazon EC2](https://aws.amazon.com/cn/ec2/?trk=cndc-detail) and continuously replicate data changes to [Amazon RDS](https://aws.amazon.com/cn/rds/?trk=cndc-detail) for MariaDB.

With the snippet of the ```table-mappings.json``` file below, Amazon Web Services DMS replicates all tables in the ```wordpress``` database from MariaDB on Amazon EC2.

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": "wordpress",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```

#### **Step 5. Deploy Amazon Route 53**
We are going to deploy [Amazon Route 53](https://docs.aws.amazon.com/route53/?id=docs_gateway) records and [Health Checks](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html) for the ALBs in both the Local Zone and the AZs, using CNAME records.

The domain ```demo.primary.lindarren.com``` is used for the ALB in the Local Zone and is registered by the [external-dns](https://github.com/kubernetes-sigs/external-dns) controller we installed in the [Amazon EKS Blueprints](https://github.com/aws-ia/terraform-aws-eks-blueprints) module. Similarly, the domain ```demo.backup.lindarren.com``` is used for the ALB in the Region.

The Health Checks are performed on both sites. When the primary site (Local Zone) is healthy, the DNS record ```demo.lindarren.com``` resolves to ```demo.primary.lindarren.com```. When the primary site (Local Zone) does not return successful results, the DNS record ```demo.lindarren.com``` resolves to ```demo.backup.lindarren.com``` instead.

Here's a code snippet showing how we set up the Health Check and failover policy using Terraform.

```hcl
resource "aws_route53_health_check" "localzone" {
  fqdn              = local.endpoint_local_zone
  resource_path     = "/"
  type              = "HTTPS"
  port              = 443
  failure_threshold = 5
  request_interval  = 30
  tags = {
    Name = "Health Check for Ingress in Local Zone"
  }
}

...

resource "aws_route53_record" "localzone" {
  zone_id         = data.aws_route53_zone.main.zone_id
  name            = "${local.app_name}.${local.domain_name}"
  records         = [local.endpoint_local_zone]
  set_identifier  = "primary"
  type            = "CNAME"
  ttl             = 60
  health_check_id = aws_route53_health_check.localzone.id
  failover_routing_policy {
    type = "PRIMARY"
  }
}
```

You can edit the ```main.tf```, modify the local variables, and run the ```terraform``` commands below to deploy the [Amazon Route 53](https://aws.amazon.com/cn/route53/?trk=cndc-detail) resources.

```bash
cd ../05-route53
vim demo.auto.tfvars
```

```hcl
endpoint_local_zone = "demo.primary.lindarren.com"
endpoint_region     = "demo.backup.lindarren.com"
domain_name         = "lindarren.com."
app_name            = "demo"
```

```bash
terraform init
terraform apply -auto-approve
```
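After the apply, you can confirm that the PRIMARY/SECONDARY record pair exists with the expected failover policy. A sketch, assuming the hosted zone and record names used in this post:

```bash
# Look up the hosted zone ID for the domain
ZONE_ID=$(aws route53 list-hosted-zones-by-name \
  --dns-name lindarren.com. \
  --query 'HostedZones[0].Id' --output text)

# Show the failover CNAME pair for demo.lindarren.com
aws route53 list-resource-record-sets \
  --hosted-zone-id "$ZONE_ID" \
  --query "ResourceRecordSets[?Name=='demo.lindarren.com.'].{Id:SetIdentifier,Failover:Failover,Value:ResourceRecords[0].Value}" \
  --output table
```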
#### **Step 6. Deploy Kubernetes Application**
After the Amazon Web Services resources are created, we deploy the Kubernetes resources for our application.

We have two Deployments for the WordPress app. One is in the Local Zone, with ```nodeAffinity``` requiring ```topology.kubernetes.io/zone``` to be in the Local Zone ```us-east-1-bos-1```. The other Deployment uses the ```NotIn``` operator, so that its pods launch in the Region. Modify the value of the variable ```WORDPRESS_DB_HOST``` to the private IP of your DB instance from the output of Step 4.

```bash
cd ../06-kubernetes
vim wordpress-deployment.yaml
```

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: wordpress
  ...
spec:
  ...
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: # Modify the value to your own local zone
                - us-east-1-bos-1
      containers:
      - image: wordpress:php7.1-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST # REPLACE IT WITH THE PRIVATE IP OF DB INSTANCE
          value: "10.0.15.185"
```

For the shared storage, enter the ```volumeHandle``` with the [Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail) ID and [Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail) access point ID from the output of Step 3.

```yaml
# Editing wordpress-deployment.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-efs-pv
spec:
...
  csi:
    driver: efs.csi.aws.com
    volumeHandle: "fs-08312777c25f61ee9::fsap-0250aa2272226c8d4"
```

For the Ingress in the Local Zone, we set the value of ```alb.ingress.kubernetes.io/subnets``` to the public subnet ID in the Local Zone, from the ```terraform output``` of Step 1. For the Ingress in the Region, we let the [Amazon Web Services Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller) discover public subnets using tags. Configure the annotation ```external-dns.alpha.kubernetes.io/hostname``` to let the External DNS controller register the ALB in [Amazon Route 53](https://aws.amazon.com/cn/route53/?trk=cndc-detail) automatically. We use [Amazon Certificate Manager](https://docs.aws.amazon.com/acm/) (ACM) and bind a certificate to the ALB for the HTTPS listener. If you don't have an existing ACM certificate, please refer to the Amazon Web Services ACM documentation: [Request a public certificate using the console](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html#request-public-console).

```yaml
# Editing wordpress-deployment.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  annotations:
    # public subnet in local zone
    alb.ingress.kubernetes.io/subnets: "subnet-0b49a2a528a2d2e68"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # provide the Amazon Web Services ACM Certificate ARN
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:091550601287:certificate/75cad826-f2f2-45e5-8bfe-c9b722d635d7
    # provide the domain name
    external-dns.alpha.kubernetes.io/hostname: demo.primary.lindarren.com

...
```
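If you need to look up the certificate ARN for the annotation, a sketch like the following fetches it by domain name (assuming an issued ACM certificate covering your domain already exists):

```bash
# Find the ARN of an issued certificate for the demo domain
aws acm list-certificates --region us-east-1 \
  --certificate-statuses ISSUED \
  --query "CertificateSummaryList[?contains(DomainName, 'lindarren.com')].CertificateArn" \
  --output text
```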
For the Deployment in the Region, also configure the ```WORDPRESS_DB_HOST``` environment variable to use the DB instance in the Local Zone. We want to make sure that only one database (currently the DB instance in the Local Zone) is writable, and that the [Amazon RDS](https://aws.amazon.com/cn/rds/?trk=cndc-detail) endpoint is connected only after failover.

```bash
vim wordpress-backup.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-region
  labels:
    app: wordpress-region
spec:
  ...
    containers:
    - image: wordpress:php7.1-apache
      name: wordpress
      env:
      - name: WORDPRESS_DB_HOST
        value: "10.0.15.185"
```

The Ingress for the backup site likewise registers an [Amazon Route 53](https://aws.amazon.com/cn/route53/?trk=cndc-detail) record in the hosted zone and attaches an ACM certificate.

```yaml
# wordpress-backup.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress-backup
  annotations:
    # provide the external domain name
    external-dns.alpha.kubernetes.io/hostname: demo.backup.lindarren.com
    alb.ingress.kubernetes.io/scheme: internet-facing
    # provide the Amazon Web Services ACM Certificate ARN
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:091550601287:certificate/75cad826-f2f2-45e5-8bfe-c9b722d635d7
```

Define the secret values for the [Amazon RDS](https://aws.amazon.com/cn/rds/?trk=cndc-detail) password and the DB instance user password in the ```kustomization.yaml```.

```yaml
secretGenerator:
- name: db-instance-pass
  literals:
  - password=wordpress99
- name: rds-pass
  literals:
  - password=bbFVta-RDSExamplePassword # Replace with RDS Password in Terraform output
- name: mysql-pass
  literals:
  - password=rae2ooQu6uj6AiQu5mie0die4thuZu # This is for the mysql statefulset, but not used currently
resources:
  - wordpress-deployment.yaml
  - wordpress-backup.yaml
  - mysql-deployment.yaml
```

Deploy the Kubernetes resources by running ```kubectl```.

```bash
➜ 06-kubernetes git:(main) ✗ kubectl apply -k .
```

After the resources are created, open the WordPress website in a web browser and complete the initial setup. After the setup is completed, go to https://demo.lindarren.com/ to ensure that WordPress is running.

![image.png](https://dev-media.amazoncloud.cn/112ecbf79375412791ab15bc084736cb_image.png)

After the installation is completed, the final step is starting the Amazon Web Services DMS replication task. Go to the **Amazon Web Services DMS Console**, find **Endpoints** > **Source Connections**, and rerun the connection test if it failed before.

![image.png](https://dev-media.amazoncloud.cn/f85c953cfae5485e8a9b166259794feb_image.png)

Use the commands below (or the Amazon Web Services DMS Console) to start or resume the replication.

```bash
cd ../04-database

DMS_REPL_TASK_ARN=$(tf output dms_repication_task_arn | jq . -r)

aws dms start-replication-task \
    --replication-task-arn $DMS_REPL_TASK_ARN \
    --start-replication-task-type start-replication \
    --region us-east-1

{
    "ReplicationTask": {
        "ReplicationTaskIdentifier": "demo-localzone-replication-task",
        "SourceEndpointArn": "arn:aws:dms:us-east-1:091550601287:endpoint:WPOUJ3ON74LXPHYKE4DOFIYXDAZFMSNKI3Z3S3Q",
        "TargetEndpointArn": "arn:aws:dms:us-east-1:091550601287:endpoint:YYYCLSMAYTHHLPXTINE3IM4AL4OOJRFBEZEZNKI",
        "ReplicationInstanceArn": "arn:aws:dms:us-east-1:091550601287:rep:RCISNMDF3F7VM5IVMSRVCPFXBRYLUHCXP2BC5SQ",
        "MigrationType": "full-load-and-cdc",
        "TableMappings": "{\n  \"rules\": [\n    {\n      \"rule-type\": \"selection\",\n      \"rule-id\": \"1\",\n      \"rule-name\": \"1\",\n      \"object-locator\": {\n        \"schema-name\": \"wordpress\",\n        \"table-name\": \"%\"\n      },\n      \"rule-action\": \"include\"\n    }\n  ]\n}",
        "ReplicationTaskSettings": "......",
        "Status": "starting",
        "ReplicationTaskCreationDate": "2022-07-22T15:16:56.808000+08:00",
        "ReplicationTaskStartDate": "2022-07-22T15:55:04.720000+08:00",
        "ReplicationTaskArn": "arn:aws:dms:us-east-1:091550601287:task:YDR7LINDFJVSVKNB7IMGKKXFRJBWWOZFMBOGICQ"
    }
}
```

Check the Amazon Web Services DMS Console to see the tables being replicated from the MariaDB instance to [Amazon RDS](https://aws.amazon.com/cn/rds/?trk=cndc-detail).

![image.png](https://dev-media.amazoncloud.cn/035f35924be748f4bf43d655c9e640c2_image.png)
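The same progress can be watched from the CLI. A sketch reusing the task ARN captured above:

```bash
# Per-table full-load and CDC counters for the replication task
aws dms describe-table-statistics \
  --replication-task-arn "$DMS_REPL_TASK_ARN" \
  --region us-east-1 \
  --query 'TableStatistics[].{Table:TableName,State:TableState,FullLoadRows:FullLoadRows,Inserts:Inserts}' \
  --output table
```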
Check the [Amazon Route 53](https://aws.amazon.com/cn/route53/?trk=cndc-detail) Console. The Health Checks should show that both the primary site and the backup site are healthy. The domain now resolves to the ALB in the Local Zone. You can use ```dig``` commands to verify the DNS resolution.

![image.png](https://dev-media.amazoncloud.cn/0df06347ecce4cb39e35f574212a2cba_image.png)

```bash
➜ ~ kubectl get ingress
NAME                       CLASS   HOSTS   ADDRESS                                                                   PORTS   AGE
wordpress-ingress          alb     *       k8s-default-wordpres-ed46143e74-1394360021.us-east-1.elb.amazonaws.com   80      12d
wordpress-ingress-backup   alb     *       k8s-default-wordpres-8d75cd8cec-1858496555.us-east-1.elb.amazonaws.com   80      9d

➜ ~ dig k8s-default-wordpres-ed46143e74-1394360021.us-east-1.elb.amazonaws.com +short
68.66.115.193
68.66.113.46

➜ ~ dig demo.lindarren.com +short
demo.primary.lindarren.com.
68.66.113.46
68.66.115.193
```

#### **Step 7. Failover Test from Local Zone to Region**
Now we simulate a failure of the pods in the Local Zone and fail over to the backup site in the Region. Run the command below to disrupt the Kubernetes Deployment in the Local Zone.

```bash
# Scale the pod replicas to 0 (the DB instance is stopped later in this step)
kubectl scale --replicas=0 deploy/wordpress
```

Wait a few minutes, and you will find the [Amazon Route 53](https://aws.amazon.com/cn/route53/?trk=cndc-detail) Health Checks reporting that the primary site is unhealthy.

![image.png](https://dev-media.amazoncloud.cn/366d1bc40ff947eb9aaea1b162d430a7_image.png)

Now run ```dig``` to resolve the domain. We are still able to open the WordPress website, because the ALB and [Amazon EC2](https://aws.amazon.com/cn/ec2/?trk=cndc-detail) worker node tiers (not including the database at this point) are directed to the resources in the AZs when the Local Zone is not healthy.

In this post, the database failover is performed at the application level, but it could be implemented with a Kubernetes Service or a custom domain in the future. To switch from the DB instance to [Amazon RDS](https://aws.amazon.com/cn/rds/?trk=cndc-detail), we change the database connection endpoint in the Kubernetes Deployment and restart the Deployment. Get the secret name from ```kubectl get secret``` and put it in the environment variables of the ```wordpress-region``` Deployment (defined in ```wordpress-backup.yaml```).

```bash
➜ 06-kubernetes git:(main) ✗ aws ec2 stop-instances --region us-east-1 \
    --instance-ids $(tf -chdir=../04-database output db_ec2_instance_id | jq . -r)

➜ 06-kubernetes git:(main) ✗ kubectl get secret
NAME                          TYPE                                  DATA   AGE
db-instance-pass-95cd7tdbdf   Opaque                                1      2d7h
default-token-b5q6v           kubernetes.io/service-account-token   3      2d8h
mysql-pass-ft5b2tdk5m         Opaque                                1      2d7h
rds-pass-g48k9fdbhc           Opaque                                1      2d7h

➜ 06-kubernetes git:(main) ✗ kubectl edit deploy/wordpress-region
```

```yaml
# wordpress-backup.yaml
# get the RDS endpoint via: tf -chdir=../04-database output rds_endpoint (remove the port number)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-region
  labels:
    app: wordpress-region
spec:
  ...
    containers:
    - image: wordpress:php7.1-apache
      name: wordpress
      env:
      - name: WORDPRESS_DB_USER
        value: "admin"
      - name: WORDPRESS_DB_HOST
        value: "demo-localzone-test-mariadb-instance.cdyids0dslnl.us-east-1.rds.amazonaws.com"
      - name: WORDPRESS_DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: rds-pass-g48k9fdbhc
            key: password
...
```

After applying the change, you can connect to the WordPress website and confirm that it is still working.
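The endpoint switch can also be scripted instead of done in an editor. A sketch under the same assumptions (the ```wordpress-region``` Deployment and the ```rds_endpoint``` output; the password keeps coming from the secret reference shown above):

```bash
# Derive the RDS hostname (strip the :3306 port) and update the Deployment
RDS_HOST=$(terraform -chdir=../04-database output -raw rds_endpoint | cut -d: -f1)
kubectl set env deploy/wordpress-region \
  WORDPRESS_DB_USER=admin \
  WORDPRESS_DB_HOST="$RDS_HOST"
# The env change rolls the pods; wait for the rollout to finish
kubectl rollout status deploy/wordpress-region
```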
#### **Step 8. Destroy All the Resources**
After experimenting with the demo application, destroy all of the resources using the commands below to save costs:

```bash
cd ../06-kubernetes
kubectl delete -k .

cd ../05-route53
terraform destroy -auto-approve

cd ../04-database
terraform destroy -auto-approve

cd ../03-efs
terraform destroy -auto-approve

cd ../02-eks
terraform destroy -auto-approve

cd ../01-vpc
terraform destroy -auto-approve
```

### **Conclusion**
The goal of this post was to show how to architect an application using [Amazon EKS](https://aws.amazon.com/cn/eks/?trk=cndc-detail) in a Local Zone with a failover strategy.

We showed you how to:

1. Deploy a VPC and an [Amazon EKS cluster](https://docs.aws.amazon.com/eks/?id=docs_gateway) in the Region and the Local Zone, define self-managed node groups in the Local Zone subnet and managed node groups in the AZ subnets, and install add-ons by setting the respective values to ```true``` in the [Amazon EKS Blueprints module](https://github.com/aws-ia/terraform-aws-eks-blueprints).
2. Deploy an [Amazon RDS instance](https://docs.aws.amazon.com/rds/?id=docs_gateway) and a DB [Amazon EC2](https://aws.amazon.com/cn/ec2/?trk=cndc-detail) instance, and use Amazon Web Services DMS to replicate the tables and records from the Local Zone to the Region. The database failover is performed by the customer.
3. Create an [Amazon EFS](https://docs.aws.amazon.com/efs/index.html) filesystem and tweak the [Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail) CSI driver so that the worker nodes in the Local Zone can mount the [Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail) target without errors.
4. Create [Amazon Route 53](https://docs.aws.amazon.com/route53/index.html) Health Checks and records with a failover policy, so that the customer connects to the Local Zone as the primary site and fails over to the backup site when the primary site is not available.

Hopefully, you will be able to follow along with this post and are now equipped to build applications for your projects in a Local Zone. For more details, check out the documentation links below. Happy building!

### **Resource Links**
- Local Zones
[https://aws.amazon.com/about-aws/global-infrastructure/localzones/](https://aws.amazon.com/about-aws/global-infrastructure/localzones/)
- Amazon EKS Blueprints
[https://github.com/aws-ia/terraform-aws-eks-blueprints](https://github.com/aws-ia/terraform-aws-eks-blueprints)
- Amazon Web Services DMS User Guide
[https://docs.aws.amazon.com/dms/index.html](https://docs.aws.amazon.com/dms/index.html)

![image.png](https://dev-media.amazoncloud.cn/46c752524d7c4f80a0fa361b3d922264_image.png)

**Darren Lin**

Darren Lin is a Cloud Native Specialist Solutions Architect at Amazon Web Services who focuses on domains such as Linux, Kubernetes, Container, Observability, and Open Source Technologies. In his spare time, he likes to work out and have fun with his family.

![image.png](https://dev-media.amazoncloud.cn/5474bef2b7a84462b0c855539ac910f4_image.png)

**CK Wang**

CK Wang is a Database Specialist Solutions Architect at Amazon Web Services. He has 20+ years of experience with data warehouses and large-scale databases. He focuses on helping customers build highly available, cost-effective database solutions and turn data into actionable insights on Amazon Web Services.
Customers can build and deploy applications close to end users to enable real-time gaming, live streaming, augmented and virtual reality, virtual workstations, and more.</p>\\n<p>On the other hand, Local Zones come with the limitation that only single Availability Zone (AZ) is available, <a href=\\"https://docs.aws.amazon.com/ec2/?id=docs_gateway\\" target=\\"_blank\\">Amazon EC2</a> instance types are limited, and only <a href=\\"https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html\\" target=\\"_blank\\">Application Load Balancer</a> (ALB) is available for load balancing. Hence, for applications on [Amazon Elastic Kubernetes Service](https://aws.amazon.com/cn/eks/?trk=cndc-detail) (<a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a>), it requires additional configurations to run <a href=\\"https://docs.aws.amazon.com/ec2/?id=docs_gateway\\" target=\\"_blank\\">Amazon EC2</a> worker nodes, and ALBs for <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a> clusters.</p>\\n<p>In the post, we show:</p>\n<ul>\\n<li>Deploy an <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a> cluster, with a self-managed node group running in the local zone, and a managed node group in the region with <a href=\\"https://github.com/aws-ia/terraform-aws-eks-blueprints\\" target=\\"_blank\\">Amazon EKS Blueprint</a>.</li>\\n<li>Consideration for shared data storage using Amazon Elastic File System (Amazon EFS), and database synchronization for Local Zones and Region using Amazon Web Services Database Migration Service (<a href=\\"https://docs.aws.amazon.com/dms/?id=docs_gateway\\" target=\\"_blank\\">Amazon Web Services DMS</a>)</li>\\n<li>Architect in an active-standby configuration using <a href=\\"https://docs.aws.amazon.com/route53/?id=docs_gateway\\" target=\\"_blank\\">Amazon Route 53</a> to manage the traffic between the Local Zone and Region.</li>\\n</ul>\n<p>Our goal is to provide an example for running the WordPress application workload in the Local Zone on Amazon EKS, and failover to the region in the rare event of an issue in the Local one.</p>\n<h3><a id=\\"Solution_Overview_16\\"></a><strong>Solution Overview</strong></h3>\\n<p>The following diagram shows the high-level architecture, for running a WordPress website on <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a> in the Local Zone.</p>\\n<p>The customer facing endpoint is a Route 53 domain (<a href=\\"http://demo.lindarren.com/\\" target=\\"_blank\\">demo.lindarren.com</a>) and has a failover policy to the primary site in the Local Zone (<a href=\\"http://demo.primary.lindarren.com/\\" target=\\"_blank\\">demo.primary.lindarren.com</a>) and backup site (<a href=\\"http://demo.backup.lindarren.com/\\" target=\\"_blank\\">demo.backup.lindarren.com</a>) in the AZs in the Region.</p>\\n<p>When the customer is connecting to the primary site (Local Zone), the request is served by the ALB in the Local Zone, and the backend servers are hosted by Kubernetes pods, running on the self-managed <a href=\\"https://docs.aws.amazon.com/ec2/?id=docs_gateway\\" target=\\"_blank\\">Amazon EC2</a> nodes. 
The backend database in the Local Zone is an <a href=\\"https://docs.aws.amazon.com/ec2/?id=docs_gateway\\" target=\\"_blank\\">Amazon EC2</a> instance with MariaDB installed.</p>\\n<p>For the backup site in the Region, there is an ALB and Kubernetes pods running on a managed node group. The backup database is running on Amazon Relational Database Service (<a href=\\"https://docs.aws.amazon.com/rds/?id=docs_gateway\\" target=\\"_blank\\">Amazon RDS</a>). <a href=\\"https://docs.aws.amazon.com/rds/?id=docs_gateway\\" target=\\"_blank\\">Amazon RDS</a> is a managed Database-as-a-Service (DBaaS) that makes it easy for IT administrators to set up, operate, and scale relational databases in the cloud. For data replication, we use Amazon Web Services Database Migration Service (<a href=\\"https://docs.aws.amazon.com/dms/index.html\\" target=\\"_blank\\">Amazon Web Services DMS</a>) to replicate data from the <a href=\\"https://docs.aws.amazon.com/ec2/?id=docs_gateway\\" target=\\"_blank\\">Amazon EC2</a> database instance in the Local Zone to the <a href=\\"https://docs.aws.amazon.com/rds/?id=docs_gateway\\" target=\\"_blank\\">Amazon RDS</a> instance in the Region.</p>\\n<p>For persistent storage, the files are stored on an <a href=\\"https://docs.aws.amazon.com/efs/?id=docs_gateway\\" target=\\"_blank\\">Amazon EFS</a> filesystem. Currently, it’s not supported to create <a href=\\"https://docs.aws.amazon.com/efs/?id=docs_gateway\\" target=\\"_blank\\">Amazon EFS</a> mount targets in the Local Zone subnets. Consequently, it’s necessary to make a few changes to make <a href=\\"https://docs.aws.amazon.com/efs/?id=docs_gateway\\" target=\\"_blank\\">Amazon EFS</a> CSI (Container Storage Interface) driver <code>DaemonSet</code> in the Local Zone to mount an <a href=\\"https://docs.aws.amazon.com/efs/?id=docs_gateway\\" target=\\"_blank\\">Amazon EFS</a> filesystem for the pod.</p>\\n<p><img src=\\"https://dev-media.amazoncloud.cn/a8686aca58a945679a6f78bc7d249d04_image.png\\" alt=\\"image.png\\" /></p>\n<h3><a id=\\"Deployment_in_the_Local_Zone_29\\"></a><strong>Deployment in the Local Zone</strong></h3>\\n<p>For the application deployment, we use the combination of Kubernetes YAML files and <a href=\\"https://www.terraform.io/\\" target=\\"_blank\\">Terraform</a> modules. We use <a href=\\"https://www.terraform.io/\\" target=\\"_blank\\">Terraform</a> to create Amazon Web Services resources such as <a href=\\"https://docs.aws.amazon.com/vpc/?id=docs_gateway\\" target=\\"_blank\\">Amazon Virtual Private Cloud (Amazon VPC)</a>, <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a>, <a href=\\"https://docs.aws.amazon.com/ec2/?id=docs_gateway\\" target=\\"_blank\\">Amazon EC2</a>, <a href=\\"https://docs.aws.amazon.com/efs/?id=docs_gateway\\" target=\\"_blank\\">Amazon EFS</a>, <a href=\\"https://docs.aws.amazon.com/rds/?id=docs_gateway\\" target=\\"_blank\\">Amazon RDS</a>, <a href=\\"https://docs.aws.amazon.com/dms/index.html\\" target=\\"_blank\\">Amazon Web Services DMS</a>, <a href=\\"https://docs.aws.amazon.com/route53/?id=docs_gateway\\" target=\\"_blank\\">Amazon Route 53</a>, etc. For the application in Kubernetes, we use the YAML manifest files and WordPress in this post.</p>\\n<p><strong>Prerequisites</strong></p>\\n<ul>\\n<li>An Amazon Web Services account with the Administrator permissions. 
To use <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a> Blueprint, using Amazon Web Services Identity and Access Management (<a href=\\"https://docs.aws.amazon.com/iam/?id=docs_gateway\\" target=\\"_blank\\">Amazon Web Services IAM</a>) Role is required and IAM User is not supported due to the issue. For setup details, please refer to the docs <a href=\\"https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html\\" target=\\"_blank\\">here</a>.</li>\\n<li>Installation of the latest version Amazon Web Services Command Line Interface (<a href=\\"https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html\\" target=\\"_blank\\">Amazon Web Services CLI</a>) (v2 recommended), kubectl, eksctl, <a href=\\"https://git-scm.com/book/en/v2/Getting-Started-Installing-Git\\" target=\\"_blank\\">Git</a>, <a href=\\"https://learn.hashicorp.com/tutorials/terraform/install-cli\\" target=\\"_blank\\">Terraform</a>, jq (1.6 recommended) and <a href=\\"https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html#install-plugin-linux\\" target=\\"_blank\\">System Manager Session Plugin</a>.</li>\\n<li>A domain name that you own (e.g.,lindarren.com) and a hosted zone in <a href=\\"https://docs.aws.amazon.com/route53/?id=docs_gateway\\" target=\\"_blank\\">Amazon Route 53</a>. The domain name is necessary because we are using <a href=\\"https://docs.aws.amazon.com/route53/?id=docs_gateway\\" target=\\"_blank\\">Amazon Route 53</a> domains and TLS (Transport Layer Security) certificates.</li>\\n<li>A shell environment. An IDE (Integrated Development Environment) environment such as <a href=\\"https://code.visualstudio.com/\\" target=\\"_blank\\">Visual Studio Code</a> or <a href=\\"https://docs.aws.amazon.com/cloud9/?id=docs_gateway\\" target=\\"_blank\\">Amazon Web Services Cloud9</a> is recommended. Please make sure that you configured IAM role credentials on your own instead of <a href=\\"https://docs.aws.amazon.com/cloud9/?id=docs_gateway\\" target=\\"_blank\\">Amazon Web Services Cloud9</a>’s temporary credentials. For details, please find the guide <a href=\\"https://www.eksworkshop.com/020_prerequisites/workspaceiam/\\" target=\\"_blank\\">here</a>.</li>\\n<li>Opt-in the Local Zone that you would like to run your workload in.</li>\n<li>An existing TLS certificate for web hosting as a resource in <a href=\\"https://aws.amazon.com/certificate-manager/\\" target=\\"_blank\\">Amazon Web Services Certificate Manager</a> (ACM).</li>\\n</ul>\n<p>Now clone the source code to your working directory and configure a few aliases.</p>\n<pre><code class=\\"lang-\\">git clone https://github.com/aws-samples/eks-localzone-demo.git\\n# A few optional shorthands\\nalias tf=terraform\\nalias k=kubectl \\n</code></pre>\\n<h3><a id=\\"Walkthrough_51\\"></a><strong>Walkthrough</strong></h3>\\n<h4><a id=\\"Step_1_Deploy_VPC_52\\"></a><strong>Step 1. Deploy VPC</strong></h4>\\n<p>The first thing we’ll need to provision for this architecture is a VPC, containing both Local Zone and AZs for the <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a> cluster and database instances. There are three public and three private subnets in the AZs, one private, and one public subnet in the local zone.</p>\\n<p>In the <code>main.tf</code>, we use <code>vpc</code> module to create the subnets in the AZs. 
For Local Zone subnets, we create <code>aws_subnet</code> resources.</p>\\n<p>Markup</p>\n<pre><code class=\\"lang-\\">...\\n\\nresource &quot;aws_subnet&quot; &quot;public-subnet-lz&quot; {\\n vpc_id = module.vpc.vpc_id\\n availability_zone = local.lzs[0]\\n ...\\n}\\n\\nresource &quot;aws_subnet&quot; &quot;private-subnet-lz&quot; {\\n ...\\n}\\n\\nresource &quot;aws_route_table_association&quot; &quot;public-subnet-lz-rta&quot; {\\n subnet_id = aws_subnet.public-subnet-lz.id\\n route_table_id = module.vpc.public_route_table_ids[0]\\n}\\n\\nresource &quot;aws_route_table_association&quot; &quot;private-subnet-lz-rta&quot; {\\n subnet_id = aws_subnet.private-subnet-lz.id\\n route_table_id = module.vpc.private_route_table_ids[0]\\n}\\n</code></pre>\\n<p>To create the VPC, let’s review and define the input variables. The VPC is in <code>us-east-1</code> , the Local Zone is <code>us-east-1-bos-1a</code>. You need provide a <code>name</code> and <code>vpc_cidr</code> for the VPC.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">cd deploy/01-vpc\\nvim demo.auto.tfvars\\n</code></pre>\\n<p>XML</p>\n<pre><code class=\\"lang-\\">name = &quot;demo-vpc&quot; \\nvpc_cidr = &quot;10.0.0.0/16&quot;\\ncluster_name = &quot;lindarr-demo&quot; # Name of EKS Cluster, for subnet tagging \\nregion = &quot;us-east-1&quot;\\nlzs = [&quot;us-east-1-bos-1a&quot;]\\n</code></pre>\\n<p>Deploy the VPC infrastructure using <code>terraform</code> CLI.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">terraform init\\nterraform apply -auto-approve\\n</code></pre>\\n<p>In the output, or run terraform output to get the VPC ID and subnets IDs, including the subnets in the AZs and the local zone. For deploying additional resources like <a href=\\"https://docs.aws.amazon.com/rds/?id=docs_gateway\\" target=\\"_blank\\">Amazon RDS</a>, <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a>, and <a href=\\"https://docs.aws.amazon.com/efs/index.html\\" target=\\"_blank\\">Amazon EFS</a> in the upcoming steps, you can use the subnet IDs from the output here.</p>\\n<p>XML</p>\n<pre><code class=\\"lang-\\">➜ 01-vpc git:(main) ✗ terraform output\\nprivate_subnets = [\\n &quot;subnet-04bfbdb56eab20f3f&quot;,\\n &quot;subnet-0282d89055cab1760&quot;,\\n &quot;subnet-0e3d213bfb21127fa&quot;,\\n]\\nprivate_subnets_local_zone = &quot;subnet-0179a7e06585a551f&quot;\\npublic_subnets = [\\n &quot;subnet-0d05de32e811f03c4&quot;,\\n &quot;subnet-0c2d26c64af1f9889&quot;,\\n &quot;subnet-0e5495f6c4218f5aa&quot;,\\n]\\npublic_subnets_local_zone = &quot;subnet-0b49a2a528a2d2e68&quot;\\nvpc_id = &quot;vpc-0c544fbcafdbbb035&quot;\\nvpc_id_cidr = &quot;10.0.0.0/16&quot;\\n</code></pre>\\n<h4><a id=\\"Step_2_Deploy_Amazon_EKS_Cluster_131\\"></a><strong>Step 2. 
<p>Next, we use <a href=\\"https://github.com/aws-ia/terraform-aws-eks-blueprints\\" target=\\"_blank\\">Amazon EKS Blueprints</a> to create an <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a> cluster, including the Kubernetes control plane, a managed node group in the Region, and a self-managed node group in the Local Zone.</p>\\n<p>Change the working directory to <code>02-eks</code> and edit the variables.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">cd ../02-eks\\nvim demo.auto.tfvars\\n</code></pre>\\n<p>Modify the variables, mainly the resource IDs of the VPC and subnets (by copying the partial output from the first module).</p>\n<p>HCL</p>\n<pre><code class=\\"lang-\\">vpc_id = &quot;vpc-0c544fbcafdbbb035&quot;\\nprivate_subnets = [\\n &quot;subnet-04bfbdb56eab20f3f&quot;,\\n &quot;subnet-0282d89055cab1760&quot;,\\n &quot;subnet-0e3d213bfb21127fa&quot;,\\n]\\nprivate_subnets_local_zone = &quot;subnet-0179a7e06585a551f&quot;\\ncluster_name = &quot;my-eks-demo-cluster&quot;\\ndomain_name_in_route53 = &quot;lindarren.com&quot;\\n</code></pre>\\n<p>In the <code>main.tf</code>, we use the module <a href=\\"https://github.com/aws-ia/terraform-aws-eks-blueprints\\" target=\\"_blank\\">Amazon EKS Blueprints for Terraform</a> to create the <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a> cluster. It makes creating an <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS cluster</a> easier, especially for creating self-managed node groups in Local Zone subnets. Also note that, in the Local Zone, instance types are limited and the <a href=\\"https://docs.aws.amazon.com/ebs/?id=docs_gateway\\" target=\\"_blank\\">Amazon EBS</a> volume type is limited to <code>gp2</code>.</p>\\n<p>In addition, the security group rules for self-managed nodes are restrictive in the module, so we add additional rules so that self-managed and managed nodes can communicate without issues. 
These additional rules are required for mixed managed node group and self-managed node group workloads, to avoid <a href=\\"https://github.com/aws-ia/terraform-aws-eks-blueprints/issues/619\\" target=\\"_blank\\">CoreDNS queries being blocked by security group rules</a>.</p>\\n<p>Below are some code snippets from the <code>main.tf</code>:</p>\\n<p>HCL</p>\n<pre><code class=\\"lang-\\">...\\n\\n # EKS Self-Managed Node Group in Local Zone\\n self_managed_node_groups = {\\n self_mg_4 = {\\n node_group_name = &quot;self-managed-ondemand&quot;\\n instance_type = &quot;t3.xlarge&quot; # instance types are limited in the local zone\\n launch_template_os = &quot;amazonlinux2eks&quot; # or bottlerocket \\n block_device_mappings = [\\n {\\n device_name = &quot;/dev/xvda&quot;\\n volume_type = &quot;gp2&quot; # Local Zone supports gp2 volumes only \\n volume_size = &quot;100&quot;\\n },\\n ]\\n subnet_ids = [var.local_zone_private_subnet_id]\\n },\\n }\\n\\n...\\n # https://github.com/aws-ia/terraform-aws-eks-blueprints/issues/619\\n # Allow Connection from other nodes \\n node_security_group_additional_rules = {\\n egress_all = {\\n description = &quot;Node all egress&quot;\\n ...\\n }\\n }\\n...\\n</code></pre>\\n<p>HCL</p>\n<pre><code class=\\"lang-\\">resource &quot;aws_security_group_rule&quot; &quot;allow_node_sg_to_cluster_sg&quot; {\\n # Self-managed Nodegroup to Cluster API/Managed Nodegroup all traffic\\n source_security_group_id = module.eks_blueprints.worker_node_security_group_id\\n security_group_id = module.eks_blueprints.cluster_primary_security_group_id\\n ...\\n }\\n\\nresource &quot;aws_security_group_rule&quot; &quot;allow_node_sg_from_cluster_sg&quot; {\\n # Cluster API/Managed Nodegroup to Self-Managed Nodegroup all traffic\\n source_security_group_id = module.eks_blueprints.cluster_primary_security_group_id\\n security_group_id = module.eks_blueprints.worker_node_security_group_id\\n ...\\n }\\n</code></pre>\\n<p>In the <code>eks_blueprints_kubernetes_addons</code> module in the <code>main.tf</code>, you can enable several add-ons directly, and the module both creates the IAM Roles for Service Accounts and installs the Helm charts. I use the <a href=\\"https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html\\" target=\\"_blank\\">Amazon EFS CSI driver</a>, the <a href=\\"https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html\\" target=\\"_blank\\">Amazon Web Services Load Balancer Controller</a>, and <a href=\\"https://github.com/kubernetes-sigs/external-dns\\" target=\\"_blank\\">External DNS</a>, so I set the respective keys to <code>true</code>. These add-ons will be deployed by the Terraform module after the Amazon EKS cluster creation.</p>\\n<p>HCL</p>\n<pre><code class=\\"lang-\\">module &quot;eks_blueprints_kubernetes_addons&quot; {\\n ...\\n enable_amazon_eks_aws_ebs_csi_driver = true\\n enable_aws_load_balancer_controller = true\\n enable_metrics_server = true\\n enable_external_dns = true\\n ...\\n}\\n</code></pre>\\n<p>Now let’s move forward and create the EKS cluster by running the <code>terraform</code> commands. The EKS cluster, node group creation, and add-on installation will take approximately 20 minutes to complete.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">terraform init\\nterraform plan \\nterraform apply -auto-approve\\n</code></pre>\\n<p>After the command completes, the EKS cluster, the nodes in the Local Zone, and the add-ons have been deployed. 
By running <code>kubectl get node</code>, you will be able to find node(s) in the <code>us-east-1-bos-1a</code> zone, and by running <code>kubectl get pod</code>, you can confirm that the <code>aws-load-balancer-controller</code> and <code>external-dns</code> pods are running and ready.</p>\\n<pre><code class=\\"lang-\\">➜ ~ aws eks update-kubeconfig \\\\\\n--name \$(tf output eks_cluster_id | jq . -r) \\\\\\n--region us-east-1 \\nUpdated context arn:aws:eks:us-east-1:091550601287:cluster/lindarr-demo in /Users/lindarr/.kube/config\\n\\n# Some node(s) are running in local zone\\n➜ ~ kubectl get node --label-columns failure-domain.beta.kubernetes.io/zone\\nNAME STATUS ROLES AGE VERSION ZONE\\nip-10-0-11-232.ec2.internal Ready &lt;none&gt; 14d v1.22.6-eks-7d68063 us-east-1b\\nip-10-0-15-39.ec2.internal Ready &lt;none&gt; 3d17h v1.22.6-eks-7d68063 us-east-1-bos-1a\\n...\\n\\n# Amazon Web Services LB Controller and External DNS are running \\n➜ ~ kubectl get pod -n kube-system\\nNAME READY STATUS RESTARTS AGE\\naws-load-balancer-controller-75bd4dfcbd-bwdqt 1/1 Running 0 11d\\naws-load-balancer-controller-75bd4dfcbd-kx8l5 1/1 Running 0 3d16h\\naws-node-bxzkb 1/1 Running 0 11d\\naws-node-p8bm7 1/1 Running 0 3d16h\\ncoredns-7f5998f4c-886lb 1/1 Running 0 11d\\ncoredns-7f5998f4c-cv5b8 1/1 Running 0 3d16h\\nebs-csi-controller-588dffc699-vh8gb 5/5 Running 0 3d16h\\nebs-csi-controller-588dffc699-zkxxh 5/5 Running 0 3d16h\\nebs-csi-node-898nj 3/3 Running 0 11d\\nebs-csi-node-b4b5r 3/3 Running 0 3d16h\\nefs-csi-controller-9d944546-9s6cz 3/3 Running 0 11d\\nefs-csi-controller-9d944546-gtmc9 3/3 Running 0 3d16h\\nefs-csi-node-7klzk 3/3 Running 0 8d\\nefs-csi-node-wzwlc 3/3 Running 0 3d16h\\nkube-proxy-n6s4q 1/1 Running 0 14d\\nkube-proxy-vhdrx 1/1 Running 0 3d16h\\nmetrics-server-694d47d564-zxfrs \\n\\n# External DNS is running \\n➜ ~ kubectl get po -n external-dns \\nNAME READY STATUS RESTARTS AGE\\nexternal-dns-96c667c79-88zcv 1/1 Running 0 25d\\n</code></pre>\\n<h4><a id=\\"Step_3_Deploy_Amazon_EFS_filesystems_and_targets_279\\"></a><strong>Step 3. Deploy Amazon EFS filesystems and targets</strong></h4>\\n<p>In the following sections, we deploy the necessary Amazon Web Services resources for WordPress, which is our demonstration application for this post. If you are going to deploy your own application and would like to know some caveats related to <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS</a> in the Local Zone (especially for ALB Ingress), you can fast-forward to Step 6 directly.</p>\\n<p>WordPress on Kubernetes requires a persistent volume to store the application and data. <a href=\\"https://docs.aws.amazon.com/efs/?id=docs_gateway\\" target=\\"_blank\\">Amazon EFS</a> is chosen for this demonstration because the storage must be accessible from both the Local Zone and the AZs. 
On the other hand, if your application accesses the PV frequently and requires low latency, consider <a href=\\"https://docs.aws.amazon.com/ebs/?id=docs_gateway\\" target=\\"_blank\\">Amazon EBS</a> for the persistent volume; you then need another mechanism to replicate or back up the data on Amazon EBS volumes from the Local Zone to the Availability Zones.</p>\\n<p>Now let’s create the Amazon EFS resources using Terraform, and get the Amazon EFS filesystem ID and access point ID for the <code>volumeHandle</code> of the PV.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">cd ../03-efs\\nvim demo.auto.tfvars # Edit the VPC and subnet IDs\\n\\ntf init \\ntf plan \\ntf apply -auto-approve\\n</code></pre>\\n<pre><code class=\\"lang-\\">➜ 03-efs git:(main) ✗ terraform output\\nefs_ap_id = &quot;fsap-03b76858b781b84ff&quot;\\nefs_id = &quot;fs-08312777c25f61ee9&quot;\\nvolumeHandle = &quot;fs-08312777c25f61ee9::fsap-03b76858b781b84ff&quot;\\n</code></pre>\\n<p>Since Amazon EFS mount targets are not supported in the Local Zone, we make some <a href=\\"https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/240#issuecomment-676849346\\" target=\\"_blank\\">tweaks</a> to the <a href=\\"https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html\\" target=\\"_blank\\">Amazon EFS CSI driver</a> so that the Amazon EFS mount targets in the Region can be resolved on the Local Zone worker nodes without errors. Use <code>kubectl</code> to patch the Amazon EFS CSI manifests and add <code>hostAliases</code> for the Amazon EFS mount targets (use the filesystem DNS name and mount target IPs from your own <code>terraform output</code>).</p>\\n<p>YAML</p>\n<pre><code class=\\"lang-\\">➜ 03-efs git:(main) ✗ terraform output\\nefs_ap_id = &quot;fsap-046d60b356c84b394&quot;\\nefs_id = &quot;fs-024f950b4c448cc67&quot;\\nefs_mount_target = [\\n &quot;10.0.10.87&quot;,\\n &quot;10.0.11.151&quot;,\\n &quot;10.0.12.41&quot;,\\n]\\nvolumeHandle = &quot;fs-024f950b4c448cc67::fsap-046d60b356c84b394&quot;\\n\\n\\n➜ ~ vim efs-node-patch.yaml\\n\\nspec:\\n template:\\n spec:\\n # Add host Aliases here so that EFS mount points can be resolved on Local Zones\\n # Otherwise, DNS resolution will fail if the CoreDNS pod is running on local zone\\n # Or fail randomly, if one coredns pod is on AZ and another pod is on Local Zone \\n hostAliases:\\n - hostnames:\\n - fs-08312777c25f61ee9.efs.us-east-1.amazonaws.com\\n ip: 10.0.10.26\\n - hostnames:\\n - fs-08312777c25f61ee9.efs.us-east-1.amazonaws.com\\n ip: 10.0.12.4\\n - hostnames:\\n - fs-08312777c25f61ee9.efs.us-east-1.amazonaws.com\\n ip: 10.0.11.140\\n</code></pre>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">➜ 03-efs git:(main) ✗ kubectl patch daemonset -n kube-system efs-csi-node --patch-file efs-node-patch.yaml\\nWarning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use &quot;kubernetes.io/os&quot; instead\\ndaemonset.apps/efs-csi-node patched\\n</code></pre>\\n<h4><a id=\\"Step_4_Deploy_Amazon_EC2_Database_Instances_Amazon_RDS_and_Amazon_Web_Services_DMS_346\\"></a><strong>Step 4. Deploy Amazon EC2 Database Instances, Amazon RDS, and Amazon Web Services DMS</strong></h4>
<p>For the database tier, we run MariaDB on Amazon EC2 in the Local Zone as the primary site and <a href=\\"https://docs.aws.amazon.com/rds/?id=docs_gateway\\" target=\\"_blank\\">Amazon RDS</a> in the Region, and we use <a href=\\"https://docs.aws.amazon.com/dms/index.html\\" target=\\"_blank\\">Amazon Web Services DMS</a> to replicate the tables and records from the Amazon EC2 instance to Amazon RDS. We create the resources using the Terraform module located in the <code>04-database</code> directory.</p>\\n<p>If you haven’t created the required <a href=\\"https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html\\" target=\\"_blank\\">Amazon Web Services DMS roles</a> before, please set <code>create_iam_roles = true</code> in the <code>demo.auto.tfvars</code> file. If you don’t have an SSH key pair, please refer to the <a href=\\"https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-key-pairs.html\\" target=\\"_blank\\">docs</a> to create one, and replace the key name <code>my_ssh_key_name</code> in the <code>.tfvars</code> file.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">cd ../04-database\\nvim demo.auto.tfvars # Edit the VPC and subnet IDs\\n</code></pre>\\n<p>HCL</p>\n<pre><code class=\\"lang-\\">private_subnets = [\\n &quot;subnet-01f9037408ae338ad&quot;,\\n &quot;subnet-0f30e01d3f9addd62&quot;,\\n &quot;subnet-0096b2f4142dbdae2&quot;,\\n]\\nprivate_subnets_local_zone = &quot;subnet-0f19d51410f6167ac&quot;\\n\\nssh_key_name = &quot;my_ssh_key_name&quot; # Replace ssh_key_name with your own SSH key name \\nvpc_cidr_block = &quot;10.0.0.0/16&quot;\\n\\nvpc_id = &quot;vpc-0a65e88418d47f0ee&quot;\\n \\ncreate_iam_roles = true # set to true if the required IAM roles have not been created before\\n</code></pre>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">terraform init \\nterraform plan \\nterraform apply -auto-approve\\n\\n...\\n\\n➜ 04-database git:(main) ✗ tf output\\ndb_ec2_instance_id = &quot;i-019b9172637105e4e&quot;\\ndb_ec2_instance_ip = &quot;10.0.15.200&quot;\\nec2_mariadb_password = &lt;sensitive&gt;\\nrds_endpoint = &quot;demo-test-mariadb-instance.cdyids0dslnl.us-east-1.rds.amazonaws.com:3306&quot;\\nrds_password = &lt;sensitive&gt;\\n\\n➜ 04-database git:(main) ✗ terraform output rds_password\\n&quot;bbFVta-ExampleRDSPassword&quot;\\n</code></pre>\\n<p>You can SSH into the DB instance via a bastion host (details can be found in this <a href=\\"https://aws.amazon.com/blogs/infrastructure-and-automation/deploy-bastion-hosts-into-private-subnets-with-aws-cdk/\\" target=\\"_blank\\">blog</a>), or use SSM to log in to the instance and configure the database. Please ensure that you have installed the <a href=\\"https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html#install-plugin-linux\\" target=\\"_blank\\">Session Manager plugin</a> mentioned in the documentation.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">➜ 04-database git:(main) aws ssm start-session \\\\\\n --region us-east-1 \\\\\\n --target \$(tf output db_ec2_instance_id | jq . -r)\\n</code></pre>\\n<p>After successfully starting the MariaDB server, we run the script below to create the <code>wordpress</code> database and the <code>wordpress</code> user for the WordPress post data, and to grant the necessary privileges for replication. 
Replace <code>wordpress99</code> with your own database user password.</p>\n<p>Bash</p>\n<pre><code class=\\"lang-\\">bash \\n\\nsudo mysql -sfu root -e &quot;GRANT ALL PRIVILEGES ON wordpress.* to 'wordpress'@'%' IDENTIFIED BY 'wordpress99';&quot;\\nsudo mysql -sfu root -e &quot;GRANT SUPER, RELOAD, PROCESS, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO wordpress@'%';&quot;\\nsudo mysql -sfu root -e &quot;FLUSH PRIVILEGES;&quot;\\n\\nsudo systemctl stop mariadb\\n</code></pre>\\n<p>Next, we will use Amazon Web Services DMS to replicate data changes from MariaDB on Amazon EC2 at a later stage, so we need to enable and configure the <code>bin-log</code> for replication. Here is the modification for MariaDB on Amazon EC2.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">sudo tee /etc/my.cnf.d/server.cnf&lt;&lt;EOT\\n[mysqld]\\nlog_bin=/var/lib/mysql/bin-log\\nlog_bin_index=/var/lib/mysql/mysql-bin.index\\nexpire_logs_days= 2\\nbinlog_format= ROW\\nEOT\\n\\nsudo systemctl start mariadb\\n\\n# Ctrl^D twice to exit the shell and session \\n</code></pre>\\n<p>To achieve high availability for the database tier, we create an Amazon RDS for MariaDB instance in the Region as a standby replica and use Amazon Web Services DMS to replicate data between MariaDB on Amazon EC2 and Amazon RDS for MariaDB.</p>\n<p>In the Terraform module, we create the required Amazon Web Services resources, including the IAM roles, the Amazon Web Services DMS replication instance, and the source and target endpoints, as well as the Amazon Web Services DMS replication task, which uses full load + CDC to copy all existing data from MariaDB on Amazon EC2 and then continuously replicate data changes to Amazon RDS for MariaDB.</p>\n<p>With the below snippet of the table-mappings JSON file, Amazon Web Services DMS replicates all tables in the <code>wordpress</code> database from MariaDB on Amazon EC2.</p>\\n<p>JSON</p>\n<pre><code class=\\"lang-\\"># table-mappings.json\\n{\\n &quot;rules&quot;: [\\n {\\n &quot;rule-type&quot;: &quot;selection&quot;,\\n &quot;rule-id&quot;: &quot;1&quot;,\\n &quot;rule-name&quot;: &quot;1&quot;,\\n &quot;object-locator&quot;: {\\n &quot;schema-name&quot;: &quot;wordpress&quot;,\\n &quot;table-name&quot;: &quot;%&quot;\\n },\\n &quot;rule-action&quot;: &quot;include&quot;\\n }\\n ]\\n}\\n</code></pre>\\n<h4><a id=\\"Step_5_Deploy_Amazon_Route_53_459\\"></a><strong>Step 5. Deploy Amazon Route 53</strong></h4>\\n<p>We are going to deploy <a href=\\"https://docs.aws.amazon.com/route53/?id=docs_gateway\\" target=\\"_blank\\">Amazon Route 53</a> records and <a href=\\"https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html\\" target=\\"_blank\\">Health Checks</a> for both the ALB in the Local Zone and the ALB in the Region, using CNAME records.</p>\\n<p>The domain <code>demo.primary.lindarren.com</code> is used for the ALB in the Local Zone, and is registered by the <a href=\\"https://github.com/kubernetes-sigs/external-dns\\" target=\\"_blank\\">external-dns</a> controller we installed in the <a href=\\"https://github.com/aws-ia/terraform-aws-eks-blueprints\\" target=\\"_blank\\">Amazon EKS Blueprints</a> module. Similarly, the domain <code>demo.backup.lindarren.com</code> is used for the ALB in the Region.</p>\\n<p>The Health Checks run against both sites. When the primary site (Local Zone) is healthy, the DNS record <code>demo.lindarren.com</code> resolves to <code>demo.primary.lindarren.com</code>. 
On the other hand, when the primary site (Local Zone) does not return successful results, the DNS record <code>demo.lindarren.com</code> resolves to <code>demo.backup.lindarren.com</code>.</p>\\n<p>Here’s a code snippet of how we set up our Health Check and failover policy using Terraform. The record for the backup site is defined in the same way, with a <code>failover_routing_policy</code> of type <code>SECONDARY</code> and the Region endpoint.</p>\n<p>HCL</p>\n<pre><code class=\\"lang-\\">resource &quot;aws_route53_health_check&quot; &quot;localzone&quot; {\\n fqdn = local.endpoint_local_zone\\n resource_path = &quot;/&quot;\\n type = &quot;HTTPS&quot;\\n port = 443\\n failure_threshold = 5\\n request_interval = 30\\n tags = {\\n Name = &quot;Health Check for Ingress in Local Zone&quot;\\n }\\n}\\n\\n...\\n\\nresource &quot;aws_route53_record&quot; &quot;localzone&quot; {\\n zone_id = data.aws_route53_zone.main.zone_id\\n name = &quot;\${local.app_name}.\${local.domain_name}&quot;\\n records = [local.endpoint_local_zone]\\n set_identifier = &quot;primary&quot;\\n type = &quot;CNAME&quot;\\n ttl = 60\\n health_check_id = aws_route53_health_check.localzone.id\\n failover_routing_policy {\\n type = &quot;PRIMARY&quot;\\n }\\n}\\n</code></pre>\\n<p>You can edit the <code>main.tf</code>, modify the local variables, and run the <code>terraform</code> commands below to deploy the Amazon Route 53 resources.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">cd ../05-route53\\nvim demo.auto.tfvars\\n</code></pre>\\n<p>HCL</p>\n<pre><code class=\\"lang-\\">endpoint_local_zone = &quot;demo.primary.lindarren.com&quot;\\nendpoint_region = &quot;demo.backup.lindarren.com&quot;\\ndomain_name = &quot;lindarren.com.&quot;\\napp_name = &quot;demo&quot;\\n</code></pre>\\n<pre><code class=\\"lang-\\">terraform init \\nterraform apply -auto-approve\\n</code></pre>\\n<h4><a id=\\"Step_6_Deploy_Kubernetes_Application_519\\"></a><strong>Step 6. Deploy Kubernetes Application</strong></h4>\\n<p>After the Amazon Web Services resources are created, we are going to deploy the Kubernetes resources for our application.</p>\n<p>We have two deployments for the WordPress app. One is in the Local Zone, with a <code>nodeAffinity</code> rule that requires <code>topology.kubernetes.io/zone</code> to be the Local Zone <code>us-east-1-bos-1a</code>. The other deployment uses the <code>NotIn</code> operator so that its pods launch in the Region, as sketched below.</p>
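\\n<p>A minimal sketch of the Region deployment’s affinity follows. It is illustrative only; see <code>wordpress-backup.yaml</code> in the repository for the full manifest, and substitute your own Local Zone name.</p>\\n<p>YAML</p>\n<pre><code class=\\"lang-\\"># Sketch: schedule the backup pods on any node that is NOT in the Local Zone\\napiVersion: apps/v1\\nkind: Deployment\\nmetadata:\\n  name: wordpress-region\\nspec:\\n  ...\\n  template:\\n    spec:\\n      affinity:\\n        nodeAffinity:\\n          requiredDuringSchedulingIgnoredDuringExecution:\\n            nodeSelectorTerms:\\n            - matchExpressions:\\n              - key: topology.kubernetes.io/zone\\n                operator: NotIn\\n                values: # exclude the Local Zone so the pods land in the Region's AZs\\n                - us-east-1-bos-1a\\n</code></pre>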
<p>Modify the value of the <code>WORDPRESS_DB_HOST</code> variable to the private IP of your DB instance, from the output of Step 4.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">cd ../06-kubernetes\\nvim wordpress-deployment.yaml\\n</code></pre>\\n<p>YAML</p>\n<pre><code class=\\"lang-\\">kind: Deployment\\napiVersion: apps/v1\\nmetadata:\\n name: wordpress\\n ...\\nspec:\\n ...\\n spec:\\n affinity:\\n nodeAffinity:\\n requiredDuringSchedulingIgnoredDuringExecution:\\n nodeSelectorTerms:\\n - matchExpressions:\\n - key: topology.kubernetes.io/zone\\n operator: In\\n values: # Modify the value to your own Local Zone \\n - us-east-1-bos-1a \\n containers:\\n - image: wordpress:php7.1-apache\\n name: wordpress\\n env:\\n - name: WORDPRESS_DB_HOST # REPLACE IT WITH THE PRIVATE IP OF DB INSTANCE\\n value: &quot;10.0.15.185&quot;\\n</code></pre>\\n<p>For the shared storage, set the <code>volumeHandle</code> to the Amazon EFS filesystem ID and Amazon EFS access point ID, from the output of Step 3.</p>\\n<p>YAML</p>\n<pre><code class=\\"lang-\\"># Editing wordpress-deployment.yaml\\napiVersion: v1\\nkind: PersistentVolume\\nmetadata:\\n name: wordpress-efs-pv\\nspec:\\n...\\n csi:\\n driver: efs.csi.aws.com \\n volumeHandle: &quot;fs-08312777c25f61ee9::fsap-0250aa2272226c8d4&quot;\\n</code></pre>\\n<p>For the ingress in the Local Zone, we set the value of <code>alb.ingress.kubernetes.io/subnets</code> to the public subnet ID in the Local Zone, from the <code>terraform output</code> of Step 1. For the ingress in the Region, we use the <a href=\\"https://kubernetes-sigs.github.io/aws-load-balancer-controller\\" target=\\"_blank\\">Amazon Web Services Load Balancer Controller</a> to discover public subnets using tags. Configure the annotation <code>external-dns.alpha.kubernetes.io/hostname</code> to allow the External DNS controller to register the ALB in Amazon Route 53 automatically. We use an <a href=\\"https://docs.aws.amazon.com/acm/\\" target=\\"_blank\\">Amazon Web Services Certificate Manager</a> (ACM) certificate and bind it to the ALB’s HTTPS listener. If you don’t have an existing ACM certificate, please refer to the ACM documentation: <a href=\\"https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html#request-public-console\\" target=\\"_blank\\">Request a public certificate using the Console</a>.</p>\\n<p>YAML</p>\n<pre><code class=\\"lang-\\"># Editing wordpress-deployment.yaml\\n \\napiVersion: networking.k8s.io/v1\\nkind: Ingress\\nmetadata:\\n name: wordpress-ingress\\n annotations:\\n # public subnet in local zone \\n alb.ingress.kubernetes.io/subnets: &quot;subnet-0b49a2a528a2d2e68&quot;\\n alb.ingress.kubernetes.io/scheme: internet-facing\\n # provide the Amazon Web Services ACM Certificate ARN\\n alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:091550601287:certificate/75cad826-f2f2-45e5-8bfe-c9b722d635d7\\n # provide the domain name\\n external-dns.alpha.kubernetes.io/hostname: demo.primary.lindarren.com\\n\\n...\\n</code></pre>\\n<p>For the deployment in the Region, also configure the <code>WORDPRESS_DB_HOST</code> environment variable to use the DB instance in the Local Zone. 
We would like to make sure that only one database (currently the DB instance in the Local Zone) is writable, and that the Amazon RDS endpoint is connected only after failover.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">vim wordpress-backup.yaml\\n</code></pre>\\n<p>YAML</p>\n<pre><code class=\\"lang-\\">apiVersion: apps/v1 \\nkind: Deployment\\nmetadata:\\n name: wordpress-region\\n labels:\\n app: wordpress-region\\nspec:\\n ...\\n containers:\\n - image: wordpress:php7.1-apache\\n name: wordpress\\n env:\\n - name: WORDPRESS_DB_HOST\\n value: &quot;10.0.15.185&quot;\\n</code></pre>\\n<p>Configure the Ingress to register an Amazon Route 53 record in the hosted zone and to attach an ACM certificate.</p>\n<p>YAML</p>\n<pre><code class=\\"lang-\\"># wordpress-backup.yaml\\napiVersion: networking.k8s.io/v1\\nkind: Ingress\\nmetadata:\\n name: wordpress-ingress-backup\\n annotations:\\n # provide the external domain name \\n external-dns.alpha.kubernetes.io/hostname: demo.backup.lindarren.com\\n alb.ingress.kubernetes.io/scheme: internet-facing\\n # provide the Amazon Web Services ACM Certificate ARN \\n alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:091550601287:certificate/75cad826-f2f2-45e5-8bfe-c9b722d635d7\\n</code></pre>\\n<p>Define the secret values for the Amazon RDS password and the DB instance user password in the <code>kustomization.yaml</code>.</p>\\n<p>YAML</p>\n<pre><code class=\\"lang-\\">secretGenerator:\\n- name: db-instance-pass\\n literals:\\n - password=wordpress99 \\n- name: rds-pass\\n literals:\\n - password=bbFVta-RDSExamplePassword # Replace with RDS Password in Terraform output\\n- name: mysql-pass\\n literals:\\n - password=rae2ooQu6uj6AiQu5mie0die4thuZu # For the mysql StatefulSet; not used currently \\nresources:\\n - wordpress-deployment.yaml \\n - wordpress-backup.yaml \\n - mysql-deployment.yaml \\n</code></pre>\\n<p>Deploy the Kubernetes resources by running <code>kubectl</code>:</p>\n<p>Bash</p>\n<pre><code class=\\"lang-\\">➜ 06-kubernetes git:(main) ✗ kubectl apply -k . \\n</code></pre>\\n<p>After the resources are created, open the WordPress website in a web browser and complete the initial setup. After the setup is completed, go to https://demo.lindarren.com/ to ensure that WordPress is running.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/112ecbf79375412791ab15bc084736cb_image.png\\" alt=\\"image.png\\" /></p>\n<p>After the installation is completed, the final step is starting the Amazon Web Services DMS replication task. Go to the <strong>Amazon Web Services DMS Console</strong>, find <strong>Endpoints</strong> &gt; <strong>Source Connections</strong>, and re-run the connection test if it failed before.</p>\\n<p><img src=\\"https://dev-media.amazoncloud.cn/f85c953cfae5485e8a9b166259794feb_image.png\\" alt=\\"image.png\\" /></p>\n<p>Use the commands below (or the Amazon Web Services DMS Console) to start or resume the replication.</p>\n<p>Bash</p>\n<pre><code class=\\"lang-\\">cd ../04-database \\n\\nDMS_REPL_TASK_ARN=\$(tf output dms_repication_task_arn | jq . 
-r)\\n\\naws dms start-replication-task \\\\\\n --replication-task-arn \$DMS_REPL_TASK_ARN \\\\\\n --start-replication-task-type start-replication \\\\\\n --region us-east-1\\n \\n{\\n &quot;ReplicationTask&quot;: {\\n &quot;ReplicationTaskIdentifier&quot;: &quot;demo-localzone-replication-task&quot;,\\n &quot;SourceEndpointArn&quot;: &quot;arn:aws:dms:us-east-1:091550601287:endpoint:WPOUJ3ON74LXPHYKE4DOFIYXDAZFMSNKI3Z3S3Q&quot;,\\n &quot;TargetEndpointArn&quot;: &quot;arn:aws:dms:us-east-1:091550601287:endpoint:YYYCLSMAYTHHLPXTINE3IM4AL4OOJRFBEZEZNKI&quot;,\\n &quot;ReplicationInstanceArn&quot;: &quot;arn:aws:dms:us-east-1:091550601287:rep:RCISNMDF3F7VM5IVMSRVCPFXBRYLUHCXP2BC5SQ&quot;,\\n &quot;MigrationType&quot;: &quot;full-load-and-cdc&quot;,\\n &quot;TableMappings&quot;: &quot;{\\\\n \\\\&quot;rules\\\\&quot;: [\\\\n {\\\\n \\\\&quot;rule-type\\\\&quot;: \\\\&quot;selection\\\\&quot;,\\\\n \\\\&quot;rule-id\\\\&quot;: \\\\&quot;1\\\\&quot;,\\\\n \\\\&quot;rule-name\\\\&quot;: \\\\&quot;1\\\\&quot;,\\\\n \\\\&quot;object-locator\\\\&quot;: {\\\\n \\\\&quot;schema-name\\\\&quot;: \\\\&quot;wordpress\\\\&quot;,\\\\n \\\\&quot;table-name\\\\&quot;: \\\\&quot;%\\\\&quot;\\\\n },\\\\n \\\\&quot;rule-action\\\\&quot;: \\\\&quot;include\\\\&quot;\\\\n }\\\\n ]\\\\n}&quot;,\\n &quot;ReplicationTaskSettings&quot;: &quot;......&quot;,\\n &quot;Status&quot;: &quot;starting&quot;,\\n &quot;ReplicationTaskCreationDate&quot;: &quot;2022-07-22T15:16:56.808000+08:00&quot;,\\n &quot;ReplicationTaskStartDate&quot;: &quot;2022-07-22T15:55:04.720000+08:00&quot;,\\n &quot;ReplicationTaskArn&quot;: &quot;arn:aws:dms:us-east-1:091550601287:task:YDR7LINDFJVSVKNB7IMGKKXFRJBWWOZFMBOGICQ&quot;\\n }\\n}\\n</code></pre>\\n<p>Check the Amazon Web Services DMS Console and confirm that the tables are being replicated from the MariaDB instance to Amazon RDS.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/035f35924be748f4bf43d655c9e640c2_image.png\\" alt=\\"image.png\\" /></p>\n<p>Check the Amazon Route 53 Console. The Health Checks should show that both the primary site and the backup site are healthy. The domain now resolves to the ALB in the Local Zone. You can use <code>dig</code> commands to verify the DNS resolution.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/0df06347ecce4cb39e35f574212a2cba_image.png\\" alt=\\"image.png\\" /></p>\n<p>Bash</p>\n<pre><code class=\\"lang-\\">➜ ~ kubectl get ingress\\nNAME CLASS HOSTS ADDRESS PORTS AGE\\nwordpress-ingress alb * k8s-default-wordpres-ed46143e74-1394360021.us-east-1.elb.amazonaws.com 80 12d\\nwordpress-ingress-backup alb * k8s-default-wordpres-8d75cd8cec-1858496555.us-east-1.elb.amazonaws.com 80 9d\\n\\n➜ ~ dig k8s-default-wordpres-ed46143e74-1394360021.us-east-1.elb.amazonaws.com +short\\n68.66.115.193\\n68.66.113.46\\n\\n➜ ~ dig demo.lindarren.com +short\\ndemo.primary.lindarren.com.\\n68.66.113.46\\n68.66.115.193\\n</code></pre>\\n<h4><a id=\\"Step_7_Failover_Test_from_Local_Zone_to_Region_727\\"></a><strong>Step 7. Failover Test from Local Zone to Region</strong></h4>\\n<p>Now we are going to simulate a failure of the pods in the Local Zone and fail over to the backup site in the Region.</p>
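\\n<p>If you would like to watch the failover from the command line as well as in the console, you can poll the Health Check status with the Amazon Web Services CLI. This is a sketch only; the JMESPath filter assumes the Health Check was created for <code>demo.primary.lindarren.com</code>, as in Step 5.</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\"># Look up the Health Check ID by its monitored domain name (illustrative)\\nHC_ID=\$(aws route53 list-health-checks \\\\\\n --query &quot;HealthChecks[?HealthCheckConfig.FullyQualifiedDomainName=='demo.primary.lindarren.com'].Id&quot; \\\\\\n --output text)\\n\\n# Poll the status reported by the Route 53 health checkers\\naws route53 get-health-check-status --health-check-id \$HC_ID \\\\\\n --query &quot;HealthCheckObservations[].StatusReport.Status&quot;\\n</code></pre>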
<p>Run the commands below to disrupt the Kubernetes deployment in the Local Zone.</p>\n<p>Bash</p>\n<pre><code class=\\"lang-\\"># Scale the pod replicas of the Local Zone deployment to 0\\nkubectl scale --replicas=0 deploy/wordpress\\n</code></pre>\\n<p>Wait for a few minutes, and you will find the Amazon Route 53 Health Check reporting that the primary site is unhealthy.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/366d1bc40ff947eb9aaea1b162d430a7_image.png\\" alt=\\"image.png\\" /></p>\n<p>Now run <code>dig</code> to resolve the domain again. We are still able to open the WordPress website, because the web tier (the ALB and the Amazon EC2 worker nodes, not including the database at this point) is directed to the resources in the AZs when the Local Zone is not healthy.</p>\\n<p>In this post, the database failover is performed at the application level, but it could be implemented with a Kubernetes Service or a custom domain in the future. To switch from the DB instance to Amazon RDS, we change the database connection endpoint in the Kubernetes Deployment and restart the deployment. Get the secret name from <code>kubectl get secret</code> and put it in the environment variable of the <code>wordpress-region</code> deployment (defined in <code>wordpress-backup.yaml</code>).</p>\\n<p>Bash</p>\n<pre><code class=\\"lang-\\">➜ 06-kubernetes git:(main) ✗ aws ec2 stop-instances --region us-east-1 \\\\\\n --instance-ids \$(tf -chdir=../04-database output db_ec2_instance_id | jq . -r) \\n\\n➜ 06-kubernetes git:(main) ✗ kubectl get secret\\nNAME TYPE DATA AGE\\ndb-instance-pass-95cd7tdbdf Opaque 1 2d7h\\ndefault-token-b5q6v kubernetes.io/service-account-token 3 2d8h\\nmysql-pass-ft5b2tdk5m Opaque 1 2d7h\\nrds-pass-g48k9fdbhc Opaque 1 2d7h\\n\\n➜ 06-kubernetes git:(main) ✗ kubectl edit deploy/wordpress-region\\n</code></pre>\\n<p>YAML</p>\n<pre><code class=\\"lang-\\"># wordpress-backup.yaml\\n# get the RDS endpoint \\n# tf -chdir=../04-database output rds_endpoint, removing port number \\n---\\napiVersion: apps/v1 \\nkind: Deployment\\nmetadata:\\n name: wordpress-region\\n labels:\\n app: wordpress-region\\nspec:\\n ...\\n containers:\\n - image: wordpress:php7.1-apache\\n name: wordpress\\n env:\\n - name: WORDPRESS_DB_USER\\n value: &quot;admin&quot;\\n - name: WORDPRESS_DB_HOST\\n value: &quot;demo-localzone-test-mariadb-instance.cdyids0dslnl.us-east-1.rds.amazonaws.com&quot;\\n - name: WORDPRESS_DB_PASSWORD\\n valueFrom:\\n secretKeyRef:\\n name: rds-pass-g48k9fdbhc\\n key: password\\n...\\n</code></pre>\\n<p>After editing the deployment, you can connect to the WordPress website and confirm that it is still working.</p>\n<h4><a id=\\"Step_8_Destroy_All_the_Resources_793\\"></a><strong>Step 8. Destroy All the Resources</strong></h4>\\n<p>After playing around with the demo application, destroy all of the resources using the commands below to save costs:</p>\n<p>Bash</p>\n<pre><code class=\\"lang-\\">cd ../06-kubernetes\\nkubectl delete -k . 
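\\n\\n# Deleting the Kubernetes resources first lets the Load Balancer Controller and External DNS\\n# clean up the ALBs and Route 53 records they created before the infrastructure is destroyed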
\\n\\ncd ../05-route53\\nterraform destroy -auto-approve\\n\\ncd ../04-database\\nterraform destroy -auto-approve\\n\\ncd ../03-efs\\nterraform destroy -auto-approve\\n\\ncd ../02-eks\\nterraform destroy -auto-approve\\n\\ncd ../01-vpc\\nterraform destroy -auto-approve\\n</code></pre>\\n<h3><a id=\\"Conclusion_817\\"></a><strong>Conclusion</strong></h3>\\n<p>The goal of this post was to show how to architect an application using Amazon EKS on a Local Zone with a failover strategy.</p>\n<p>We showed you how to:</p>\n<ol>\\n<li>Deploy a VPC and an <a href=\\"https://docs.aws.amazon.com/eks/?id=docs_gateway\\" target=\\"_blank\\">Amazon EKS Cluster</a> in the Region and the Local Zone. Define self-managed node groups in the Local Zone subnet and managed node groups in the AZ subnets, and install add-ons by setting the respective values to <code>true</code> in the <a href=\\"https://github.com/aws-ia/terraform-aws-eks-blueprints\\" target=\\"_blank\\">Amazon EKS Blueprints module</a>.</li>\\n<li>Deploy an <a href=\\"https://docs.aws.amazon.com/rds/?id=docs_gateway\\" target=\\"_blank\\">Amazon RDS instance</a> and an Amazon EC2 DB instance, and use Amazon Web Services DMS to replicate the tables and records from the Local Zone to the Region. The database failover is performed by the customer.</li>\\n<li>Create an <a href=\\"https://docs.aws.amazon.com/efs/index.html\\" target=\\"_blank\\">Amazon EFS</a> filesystem and make tweaks to the Amazon EFS CSI driver so that the worker nodes in the Local Zone can mount the Amazon EFS target without errors.</li>\\n<li>Create <a href=\\"https://docs.aws.amazon.com/route53/index.html\\" target=\\"_blank\\">Amazon Route 53</a> Health Checks and records with a failover policy, so that customers connect to the Local Zone as the primary site and fail over to the backup site when the primary site is not available.</li>\\n</ol>\n<p>Hopefully, you were able to follow along with this post and are now equipped to build applications for your projects on a Local Zone. For more details, check out the documentation links below. Happy building!</p>\n<h3><a id=\\"Resource_Links_830\\"></a><strong>Resource Links</strong></h3>\\n<ul>\\n<li>Local Zone<br />\\n<a href=\\"https://aws.amazon.com/about-aws/global-infrastructure/localzones/\\" target=\\"_blank\\">https://aws.amazon.com/about-aws/global-infrastructure/localzones/</a></li>\\n<li>Amazon EKS Blueprints<br />\\n<a href=\\"https://github.com/aws-ia/terraform-aws-eks-blueprints\\" target=\\"_blank\\">https://github.com/aws-ia/terraform-aws-eks-blueprints</a></li>\\n<li>Amazon Web Services DMS User Guide<br />\\n<a href=\\"https://docs.aws.amazon.com/dms/index.html\\" target=\\"_blank\\">https://docs.aws.amazon.com/dms/index.html</a></li>\\n</ul>\n<p><img src=\\"https://dev-media.amazoncloud.cn/46c752524d7c4f80a0fa361b3d922264_image.png\\" alt=\\"image.png\\" /></p>\n<p><strong>Darren Lin</strong></p>\\n<p>Darren Lin is a Cloud Native Specialist Solutions Architect at Amazon Web Services who focuses on domains such as Linux, Kubernetes, Container, Observability, and Open Source Technologies. 
In his spare time, he likes to work out and have fun with his family.</p>\n<p><img src=\\"https://dev-media.amazoncloud.cn/5474bef2b7a84462b0c855539ac910f4_image.png\\" alt=\\"image.png\\" /></p>\n<p><strong>CK Wang</strong></p>\\n<p>CK Wang is a Database Specialist Solutions Architect at Amazon Web Services. He has 20+ years of experience with data warehouses and large-scale databases. He focuses on helping customers build highly available, cost-effective database solutions and turn data into actionable insights on Amazon Web Services.</p>\n"}