# Stretching your on-premises environment to AWS using Amazon ECS Anywhere

{"value":"[Amazon Elastic Container Service (Amazon ECS)](https://aws.amazon.com/ecs/) allows customers to run container workloads in AWS on AWS-managed infrastructure as well as on customer-managed infrastructure using [Amazon ECS Anywhere](https://aws.amazon.com/ecs/anywhere/). Whether on premises or in the cloud, customers have a consistent cluster management, workload scheduling, and monitoring experience with [Amazon ECS](https://aws.amazon.com/cn/ecs/?trk=cndc-detail). [Amazon ECS Anywhere](https://aws.amazon.com/cn/ecs/anywhere/?trk=cndc-detail) lets you have a consistent [Amazon ECS](https://aws.amazon.com/cn/ecs/?trk=cndc-detail) developer experience across your on-premises data center and AWS.\n\nCustomers have several reasons to stretch their on-premises environment to AWS or the other way around:\n\n- Maintaining data sovereignty\n- Running services nearer the customer, for example low-latency workloads at edge locations\n- Running analytics or business intelligence processes closer to the data source or filtering data before sending it to the cloud for further analysis\n- Using it as a path of migration to AWS\n- Bursting into AWS for compute resources during large events\n- Making use of existing capital investments\n\n\nFor customers looking to understand how to design workloads that are stretched across AWS and on premises, this blog post presents a sample architecture. The post also shares complementary AWS services that you can consider to improve operational and developer efficiency.\n\n### **Solution overview**\nTo help position the sample architecture in this post, we refer to an example workload that has the frontend components and APIs deployed on AWS and an order processing component running on premises due to data processing requirements. A hybrid workload like this allows the customer to benefit from the availability, cost optimization, and resilience that AWS offers while meeting the requirement to process orders on premises.\n\nThe following diagram shows the sample architecture.\n\n![image.png](https://dev-media.amazoncloud.cn/fa84144988e748e9bf68b1a648de3e8b_image.png)\n\nBy deploying the sample workload to [Amazon ECS](https://aws.amazon.com/cn/ecs/?trk=cndc-detail), we create an [Amazon ECS](https://aws.amazon.com/cn/ecs/?trk=cndc-detail) cluster and three [Amazon ECS](https://aws.amazon.com/cn/ecs/?trk=cndc-detail) services. On the cloud, two [Amazon ECS](https://aws.amazon.com/cn/ecs/?trk=cndc-detail) services support the web UI and an orders API, which publishes orders to an [Amazon Simple Queue Service (Amazon SQS)](https://aws.amazon.com/sqs/) queue. In the customer data center, an [Amazon ECS](https://aws.amazon.com/cn/ecs/?trk=cndc-detail) service runs the order processing service, which polls the [Amazon SQS](https://aws.amazon.com/cn/sqs/?trk=cndc-detail) queue for work.\n\nThe [Amazon SQS](https://aws.amazon.com/cn/sqs/?trk=cndc-detail) queue acts as a boundary, decoupling the network connectivity between AWS and the on-premises environment in case of network partitions. The [Amazon Elastic Container Registry (Amazon ECR)](https://aws.amazon.com/ecr/) is used to store and [scan container images](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html) to help identify software vulnerabilities. 
To complement the sample architecture, customers look for tools to support their continuous integration and continuous delivery (CI/CD) approach to software release management. Because a single Amazon ECS control plane is shared between the on-premises environment and AWS, deployments can easily be standardized from either [AWS CodePipeline](https://aws.amazon.com/codepipeline/) or another CI/CD tool (for example, Jenkins, GitLab, or GitHub Actions). Standardizing deployments across your AWS and on-premises environments reduces additional overhead, such as testing and maintaining separate deployment scripts for each environment. For the sample architecture in this blog, I will be using CodePipeline, but the concepts transfer easily to other CI/CD tools. To support DevOps practices, it is recommended that each service has its own application pipeline, enabling application components to be built, tested, and deployed independently.

The following diagram shows the example components for a pipeline with CodePipeline:

![image.png](https://dev-media.amazoncloud.cn/e8a46fe40fd54564b19eccc848768d81_image.png)

For detailed steps on how to create a pipeline with CodePipeline, read this user guide: [Tutorial: Amazon ECS Standard Deployment with CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/ecs-cd-pipeline.html).

As part of the solution, there are some additional design considerations.

**Integrate with other AWS services:** The external launch type, like the other Amazon ECS launch types, supports [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/) roles for tasks. This enables your application to integrate with AWS services using temporary credentials automatically acquired from AWS Security Token Service (AWS STS), removing IAM access key management from your responsibility. You are only responsible for defining least-privilege IAM roles for your services.

**Infrastructure as code/pipelines as code:** You can use the [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/), [AWS CloudFormation](https://aws.amazon.com/cloudformation/), or other third-party tools to define your application pipeline and infrastructure as code. You are responsible for bootstrapping your external machines to connect to the Amazon ECS control plane.
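To make the external launch type, task IAM roles, and `awslogs` logging more concrete, the sketch below registers a task definition and creates the on-premises service with boto3. The account ID, role ARNs, image URI, and resource names are hypothetical; in practice you would typically declare the same resources in CloudFormation or the CDK and deploy them from your pipeline.

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical ARNs and names for illustration only.
TASK_ROLE_ARN = "arn:aws:iam::111122223333:role/order-processor-task-role"
EXEC_ROLE_ARN = "arn:aws:iam::111122223333:role/order-processor-execution-role"
IMAGE_URI = "111122223333.dkr.ecr.eu-west-1.amazonaws.com/order-processor:latest"

# Register a task definition that targets external (on-premises) capacity.
task_def = ecs.register_task_definition(
    family="order-processor",
    requiresCompatibilities=["EXTERNAL"],
    networkMode="bridge",
    taskRoleArn=TASK_ROLE_ARN,       # grants the application temporary AWS credentials
    executionRoleArn=EXEC_ROLE_ARN,  # used to pull the image from ECR and ship logs
    containerDefinitions=[
        {
            "name": "order-processor",
            "image": IMAGE_URI,
            "essential": True,
            "cpu": 256,
            "memory": 512,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/order-processor",
                    "awslogs-region": "eu-west-1",
                    "awslogs-stream-prefix": "ecs",
                },
            },
        }
    ],
)

# Run the service on registered external instances in the shared cluster.
ecs.create_service(
    cluster="hybrid-cluster",
    serviceName="order-processor",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="EXTERNAL",
)
```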
**Scaling:** Amazon ECS, [Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/pm/ec2/), and [AWS Fargate](https://aws.amazon.com/fargate/) can scale in the cloud, helping you meet customer demand and benefit from the elasticity AWS offers. Amazon ECS with the external launch type can also scale on premises, but there it is limited by the available compute. In this post's example, you might choose to scale the Amazon ECS order processing service based on the ```ApproximateNumberOfMessages``` in the Amazon SQS queue. If scaling exceeds the compute available in the on-premises environment, there could be a delay in processing all the data; if the compute constraint is longer term, you will need to add more compute or move the workload elsewhere. Set a sensible maximum task count so that scaling is not impacted by compute constraints. By stretching into AWS, your application can also benefit from the global reach that AWS enables.
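One way to express that behavior is a target tracking scaling policy on queue depth through Application Auto Scaling, sketched below with boto3. The cluster, service, and queue names are hypothetical, the corresponding CloudWatch metric is `ApproximateNumberOfMessagesVisible`, and the target value and maximum capacity would need tuning against the compute actually available on premises.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Hypothetical identifiers for this example.
RESOURCE_ID = "service/hybrid-cluster/order-processor"
QUEUE_NAME = "orders-queue"

# Cap the task count so scaling never exceeds the on-premises compute.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=8,
)

# Track the visible message count: more backlog -> more order-processor tasks.
autoscaling.put_scaling_policy(
    PolicyName="orders-queue-backlog",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # aim for roughly 100 visible messages overall
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": QUEUE_NAME}],
            "Statistic": "Average",
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```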
**Application secrets and configuration:** Amazon ECS launch types (Fargate, Amazon EC2, and external) can use [AWS Systems Manager Parameter Store and AWS Secrets Manager](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) to inject environment-specific secrets or configuration into your application.

**Application monitoring and logging:** Amazon ECS services with the external launch type can also publish application logs to Amazon CloudWatch Logs using the ```awslogs``` log driver, providing a consistent experience and location for logs across your AWS and on-premises environments. In addition, your application can integrate with [AWS X-Ray](https://aws.amazon.com/xray/) to trace requests through your microservices from AWS to on premises, providing a single viewpoint for analyzing and debugging application issues. See [Getting started with AWS X-Ray](https://docs.aws.amazon.com/xray/latest/devguide/xray-gettingstarted.html) for more details. A highly available network connection is recommended when using these AWS services; for example, the AWS X-Ray [daemon](https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html) uses UDP, and traces will be dropped during network interruptions. If your architecture relies on AWS Cloud services, design and test it to withstand network interruption.

**Fleet management:** It can be difficult to keep an inventory of resources across AWS and on-premises environments. By registering your bare-metal or virtual machines (on premises or on Amazon EC2) with [AWS Systems Manager](https://aws.amazon.com/systems-manager/), you have a single inventory of machines across AWS and the on-premises environment. Using this inventory, you can leverage [AWS Systems Manager Patch Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html) to automate host tasks such as patching. In addition, you can use [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) for centralized remote access without managing bastion hosts or SSH keys. You remain responsible for testing patches and any impact they have on your workload. See [Setting up AWS Systems Manager for hybrid environments](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html) for more details.
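Bringing an on-premises machine under this kind of management starts with a Systems Manager hybrid activation, whose ID and code are then supplied to the agent or install script on the host. A minimal sketch is below (Python/boto3); the IAM role name is hypothetical and must already exist with the appropriate managed policies attached.

```python
import boto3

ssm = boto3.client("ssm")

# Hypothetical service role trusted by ssm.amazonaws.com; for ECS Anywhere it
# would also need the policies that allow SSM management and ECS registration.
activation = ssm.create_activation(
    Description="ECS Anywhere on-premises hosts",
    IamRole="ecsAnywhereRole",
    RegistrationLimit=10,                # how many machines may register with it
    DefaultInstanceName="onprem-ecs-host",
)

# These values are passed to the agent/install script on each machine.
print("Activation ID:  ", activation["ActivationId"])
print("Activation code:", activation["ActivationCode"])
```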
### **Conclusion**

A sample stretched architecture that leverages Amazon ECS and Amazon ECS Anywhere alongside CodePipeline provides a consistent developer experience between on-premises and AWS deployments. This allows you to meet a range of requirements: data residency, compliance, low-latency computing, amortizing existing assets, and bursting into AWS. In addition, the architecture can handle network partitions by using Amazon SQS to decouple the services deployed on AWS and on premises. Using other services, such as Systems Manager, reduces the heavy lifting required to create and manage a stretched architecture.

For a hands-on walkthrough of deploying a similar architecture, check out [this Amazon ECS Anywhere tutorial on GitHub](https://github.com/aws-containers/ecs-anywhere-tutorial).

![image.png](https://dev-media.amazoncloud.cn/e583ffcb2115414fba259c5b5d272467_image.png)

**Robert Northard**

Robert Northard is a Sr. Containers Specialist Solutions Architect at AWS. He has expertise in container technologies and DevOps practices.