Parallel and dynamic SaaS deployments with CDK Pipelines

{"value":"Software as a Service (SaaS) is an increasingly popular business model for independent software vendors (ISVs), including benefits such as a pay-as-you-go pricing model, scalability, and availability.\n\nSaaS services can be built by using numerous [architectural models](https://docs.aws.amazon.com/wellarchitected/latest/saas-lens/silo-pool-and-bridge-models.html). The silo model provides each tenant with dedicated resources and a shared-nothing architecture. Silo deployments also provide isolation between tenants’ compute resources and their data, and they help eliminate the noisy-neighbor problem. On the other hand, the pool model offers several benefits, such as lower maintenance overhead, simplified management and operations, and cost-saving opportunities, all due to a more efficient utilization of computing resources and capacity. In the bridge model, both silo and pool models are utilized side-by-side. The bridge model is a hybrid model, where parts of the system can be in a silo model, and parts in a pool.\n\nEnd-customers benefit from SaaS delivery in numerous ways. For example, the service can be available from multiple locations, letting the customer choose what is best for them. The tenant onboarding process is often real-time and frictionless. To realize these benefits for their end-customers, SaaS providers need methods for reliable, fast, and multi-region capable provisioning and software lifecycle management.\n\nThis post will describe a deployment system for automating the provision and lifecycle management of workload components in pool or silo deployment models by using [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/) and [CDK Pipelines](https://aws.amazon.com/blogs/developer/cdk-pipelines-continuous-delivery-for-aws-cdk-applications/). We will explore the system’s dynamic and database driven deployment model, as well as its multi-account and multi-region capabilities, and we will provision demo deployments of workload components in both the silo and pool models.\n\n### **AWS Cloud Development Kit and CDK Pipelines**\n\nFor this solution, we utilized [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/) and its CDK Pipelines construct library. AWS CDK is an open-source software development framework for modeling and provisioning cloud application resources by using familiar programming languages. AWS CDK lets you define your infrastructure as code and provision it through [AWS CloudFormation](https://aws.amazon.com/cloudformation/).\n\nCDK Pipelines is a high-level construct library with an opinionated implementation of a continuous deployment pipeline for your CDK applications. It is powered by [AWS CodePipeline](http://aws.amazon.com/codepipeline), a fully managed [continuous delivery](https://aws.amazon.com/devops/continuous-delivery/) service that helps automate your release pipelines for fast and reliable application as well as infrastructure updates. No servers need to be provisioned or setup, and you only pay for what you use. This solution utilizes the recently released and stable CDK Pipelines modern API.\n\n### **Business Scenario**\nAs a baseline use case, we have selected the consideration of a fictitious ISV called Unicorn that wants to implement an SaaS business model.\n\nUnicorn operates in several countries, and requires the storing of customer data within the customers’ chosen region. Currently, Unicorn needs two regions in order to satisfy its main customer base: one in EU and one in US. 
### **Business Scenario**
As a baseline use case, we consider a fictitious ISV called Unicorn that wants to implement a SaaS business model.

Unicorn operates in several countries, and requires customer data to be stored within the customers’ chosen Region. Currently, Unicorn needs two Regions in order to satisfy its main customer base: one in the EU and one in the US. Unicorn expects rapid growth, and it needs a solution that can scale to thousands of tenants. Unicorn plans to have different tenant tiers with different isolation requirements. Their planned deployment model has the majority of tenants in shared pool instances, but they also plan to support dedicated silo instances for the tenants requiring them. The solution must also be easily extendable to new Regions as Unicorn’s business expands.

Unicorn is starting small, with a single development team responsible for what is currently the only component in their SaaS workload architecture. Following industry best practices, Unicorn has designed its workload architecture so that each component has a clear technical ownership boundary. The chosen solution must grow together with Unicorn, and support multiple independently developed and deployed components in the future.

### **Solution Overview**
Today, many customers utilize AWS CodePipeline to build, test, and deploy their cloud applications. For a SaaS provider such as Unicorn, utilizing a single pipeline for managing every deployment presents concerns. At the scale that Unicorn requires, a single pipeline with potentially hundreds of actions runs the risk of becoming throughput limited. Moreover, a single pipeline would offer Unicorn limited control over how changes are released.

Our solution addresses this problem by having a separate dynamically provisioned pipeline for each pool and silo deployment. The solution is designed to manage multiple deployments of Unicorn’s single workload component, thereby aligning with their current needs and, with small changes, their future needs as well.

[CDK Best Practices](https://docs.aws.amazon.com/cdk/latest/guide/best-practices.html#best-practices-code) state that an AWS CDK application maps to a component as [defined](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html#introduction) by the AWS Well-Architected Framework. A component is the code, configuration, and AWS resources that together deliver against a workload requirement, and it is typically the unit of technical ownership. A component usually includes logical units (e.g., API, database), and can have a continuous deployment pipeline.

Utilizing CDK Pipelines provides a significant benefit: with no additional code, we can deploy cross-account and cross-region just as easily as we would to a single account and region. CDK Pipelines automatically creates and manages the required cross-account encryption keys and cross-region replication buckets. Furthermore, we only need to establish a trust relationship between the accounts during the [CDK bootstrapping process](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html).
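To illustrate that benefit, deploying the same workload stage to another account and Region only requires passing a different `env` to the stage; the stage class and surrounding declarations below are hypothetical placeholders.

TypeScript
```
import * as cdk from '@aws-cdk/core';
import * as pipelines from '@aws-cdk/pipelines';

// Hypothetical stage wrapping the workload component's resource stack(s).
class WorkloadStage extends cdk.Stage {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StageProps) {
    super(scope, id, props);
    // ... instantiate the component resource stack(s) here ...
  }
}

declare const pipeline: pipelines.CodePipeline;
declare const scope: cdk.Construct;

// Deploying cross-account and cross-region is only a matter of the stage's
// env; CDK Pipelines manages the keys and replication buckets behind it.
pipeline.addStage(new WorkloadStage(scope, 'pool2', {
  env: { account: '<WORKLOAD_ACCOUNT_ID>', region: 'eu-west-1' },
}));
```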
The following diagram illustrates the solution architecture:

![image.png](https://dev-media.amazoncloud.cn/fe23c5824ee349759281570d4eb1eafe_image.png)

::: hljs-center

*Figure 1: Solution architecture*

:::
Let’s look closer at the two primary high-level solution flows: silo and pool pipeline provisioning (1 and 2), and component code deployment (3 and 4).

Provisioning is separated into a dedicated flow, so that code deployments do not interfere with tenant onboarding, and vice versa. At the heart of the provisioning flow is the deployment database (1), which is implemented by using an [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) table.

Utilizing DynamoDB Streams and [AWS Lambda](https://aws.amazon.com/lambda/) triggers, a new [AWS CodeBuild](https://aws.amazon.com/codebuild/) provisioning project build (2) is automatically started after a record is inserted into the deployment database. The provisioning project directly provisions new silo and pool pipelines by using the “cdk deploy” command. Provisioning events are processed in parallel, so that the solution can handle possible bursts in Unicorn’s tenant onboarding volumes.
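A simplified sketch of such a trigger function is shown below, using the Node.js AWS SDK. The provisioning project name and environment variable names are assumptions for illustration; the sample repository contains the actual implementation.

TypeScript
```
import { DynamoDBStreamEvent } from 'aws-lambda';
import * as AWS from 'aws-sdk';

const codebuild = new AWS.CodeBuild();

// Start one provisioning build per inserted deployment record. Builds run
// in parallel, so bursts of onboarding events are handled concurrently.
export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    if (record.eventName !== 'INSERT' || !record.dynamodb?.NewImage) {
      continue; // Only new deployment records trigger provisioning.
    }
    const image = record.dynamodb.NewImage;
    await codebuild.startBuild({
      projectName: 'unicorn-provisioning', // illustrative project name
      environmentVariablesOverride: [
        { name: 'DEPLOYMENT_ID', value: image.id.S! },
        { name: 'DEPLOYMENT_TYPE', value: image.type.S! },
        { name: 'COMPONENT_ACCOUNT', value: image.account.S! },
        { name: 'COMPONENT_REGION', value: image.region.S! },
      ],
    }).promise();
  }
};
```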
CDK best practices suggest that infrastructure and runtime code live in the same package. A single [AWS CodeCommit](https://aws.amazon.com/codecommit/) repository (3) contains everything needed: the CI/CD pipeline definitions as well as the workload component code. This repository is the source artifact for every CodePipeline pipeline and CodeBuild project. The chapter “Managing application resources as code” describes related implementation details.

The CI/CD pipeline (4) is a CDK Pipelines pipeline, and it is responsible for the component’s Software Development Life Cycle (SDLC) activities. In addition to implementing the update release process, it is expected that most SaaS providers will also implement additional activities, such as a variety of tests and pre-production environment deployments. The chapter “Controlling deployment updates” dives deeper into this topic.

Deployments have two parts: the pipeline (5) and the component resource stack(s) (6) that it manages. The pipelines are deployed to the central toolchain account and Region, whereas the component resources are deployed to the AWS account and Region specified in the deployment’s record in the deployment database.

Sample code for the solution is available in [GitHub](https://github.com/aws-samples/aws-saas-parallel-deployments). The sample code is intended for utilization in conjunction with this post. Our solution is implemented in TypeScript.

### **Deployment Database**
Our deployment database is an Amazon DynamoDB table, with the following structure:

![image.png](https://dev-media.amazoncloud.cn/5f86fa5b88c245be933cdf6645ccaad5_image.png)

::: hljs-center

*Figure 2: DynamoDB table*

:::
- ‘id’ is a unique identifier for each deployment.
- ‘account’ is the AWS account ID for the component resources.
- ‘region’ is the AWS Region ID for the component resources.
- ‘type’ is either ‘silo’ or ‘pool’, which defines the deployment model.

This design supports tenant deployment to multiple silo and pool deployments. Each of these can target any available and bootstrapped AWS account and Region. For example, different pools can support tenants in different Regions, with select tenants deployed to dedicated silos. As pools may be limited in how many tenants they can serve, the design also supports having multiple pools within a Region, and it can easily be extended with an additional attribute to support the tiers concept, as the sketch below indicates.
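In TypeScript terms, a record in this table can be modeled as follows; the interface name is ours, and the commented-out attribute shows where a tier extension could go.

TypeScript
```
// Shape of a record in the deployment database.
interface DeploymentRecord {
  id: string;            // unique deployment identifier, e.g. 'silo1'
  account: string;       // AWS account ID for the component resources
  region: string;        // AWS Region for the component resources
  type: 'silo' | 'pool'; // deployment model
  // tier?: string;      // a possible future extension for tenant tiers
}
```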
Note that the deployment database does not contain tenant information. It is expected that such mapping is maintained in a separate tenant database, where each tenant record can map to the ID of the deployment that it is associated with.

Now that we have looked at our solution design and architecture, let’s move to the hands-on section, starting with the deployment requirements for the solution.
### **Prerequisites**
The following tools are required to deploy the solution:

- NodeJS version [compatible with AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/getting_started.html#getting_started_prerequisites) version 1.124.0
- The [AWS Command Line Interface](https://aws.amazon.com/cli/) (AWS CLI)
- Git with the [git-remote-codecommit](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-git-remote-codecommit.html) extension


To follow this tutorial completely, you should have administrator access to at least one, but preferably two, AWS accounts:

- **Toolchain**: Account for the SDLC toolchain: the pipelines, the provisioning project, the repository, and the deployment database.
- **Workload** (optional): Account for the component resources.

If you have only a single account, then the toolchain account can be used for both purposes. Credentials for the account(s) are assumed to be [configured in AWS CLI profile(s)](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html).

The instructions in this post use the following placeholders, which you must replace with your specific values:
- <TOOLCHAIN_ACCOUNT_ID>: The AWS account ID for the toolchain account
- <TOOLCHAIN_PROFILE_NAME>: The AWS CLI profile name for the toolchain account credentials
- <WORKLOAD_ACCOUNT_ID>: The AWS account ID for the workload account
- <WORKLOAD_PROFILE_NAME>: The AWS CLI profile name for the workload account credentials

### **Bootstrapping**
The toolchain account, and all workload account(s), must be bootstrapped prior to first-time deployment.

To start, AWS CDK and our solution’s dependencies must be installed. The easiest way to do this is to install them locally with npm. First, we need to download our sample code, so that we have the package.json configuration file available for npm.

Note that throughout these instructions, many commands are broken over multiple lines for readability. Take care to execute the commands completely. It is always safe to execute each code block as a whole.

Clone the sample code repository from GitHub, and then install the dependencies by using npm:

Bash
```
git clone https://github.com/aws-samples/aws-saas-parallel-deployments
cd aws-saas-parallel-deployments
npm ci
```
CDK Pipelines requires the use of modern bootstrapping. To ensure that this is enabled, start by setting the related environment variable:

Bash
```
export CDK_NEW_BOOTSTRAP=1
```
Then, bootstrap the toolchain account. You must bootstrap both the Region where the toolchain stack is deployed, as well as every target Region for component resources. Here, we will first bootstrap only the us-east-1 Region; later, you can optionally bootstrap additional Region(s).

To bootstrap, we use npx to execute the locally installed version of AWS CDK:

Bash
```
npx cdk bootstrap <TOOLCHAIN_ACCOUNT_ID>/us-east-1 --profile <TOOLCHAIN_PROFILE_NAME>
```

If you have a workload account that is separate from the toolchain account, then that account must also be bootstrapped. When bootstrapping the workload account, we will establish a trust relationship with the toolchain account. Skip this step if you don’t have a separate workload account.

The workload account bootstrapping follows the security best practice of least privilege. First, create an execution policy with the minimum permissions required to deploy our demo component resources. We provide a sample policy file in the solution repository for this purpose. Then, use that policy as the execution policy for the trust relationship between the toolchain account and the workload account:

Bash
```
aws iam create-policy \
 --profile <WORKLOAD_PROFILE_NAME> \
 --policy-name CDK-Exec-Policy \
 --policy-document file://policies/workload-cdk-exec-policy.json
npx cdk bootstrap <WORKLOAD_ACCOUNT_ID>/us-east-1 \
 --profile <WORKLOAD_PROFILE_NAME> \
 --trust <TOOLCHAIN_ACCOUNT_ID> \
 --cloudformation-execution-policies arn:aws:iam::<WORKLOAD_ACCOUNT_ID>:policy/CDK-Exec-Policy
```

### **Toolchain deployment**
Prior to being able to deploy for the first time, you must create an AWS CodeCommit repository for the solution. Create this repository in the toolchain account:

Bash
```
aws codecommit create-repository \
 --profile <TOOLCHAIN_PROFILE_NAME> \
 --region us-east-1 \
 --repository-name unicorn-repository
```
Next, you must push the contents to the CodeCommit repository. For this, use the git command together with the git-remote-codecommit extension in order to authenticate to the repository with your AWS CLI credentials. Our pipelines are configured to use the main branch.

Bash
```
git remote add unicorn codecommit::us-east-1://<TOOLCHAIN_PROFILE_NAME>@unicorn-repository
git push unicorn main
```
Now we are ready to deploy the toolchain stack:

Bash
```
export AWS_REGION=us-east-1
npx cdk deploy --profile <TOOLCHAIN_PROFILE_NAME>
```

### **Workload deployments**
At this point, our CI/CD pipeline, provisioning project, and deployment database have been created. The database is initially empty.

Note that the DynamoDB command line interface demonstrated below is not intended to be the SaaS provider’s provisioning interface for production use. SaaS providers typically have online registration portals, wherein the customer signs up for the service. When new deployments are needed, a record should automatically be inserted into the solution’s deployment database, for example as sketched below.
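As an illustration, the kind of insert that an onboarding portal would perform can be written in a few lines of TypeScript. The table name matches the demo, while the function itself is hypothetical.

TypeScript
```
import * as AWS from 'aws-sdk';

const dynamodb = new AWS.DynamoDB.DocumentClient();

// Hypothetical onboarding hook: inserting a record into the deployment
// database is all that is needed to kick off provisioning.
export async function requestDeployment(record: {
  id: string;
  account: string;
  region: string;
  type: 'silo' | 'pool';
}): Promise<void> {
  await dynamodb.put({
    TableName: 'unicorn-deployments',
    Item: record,
  }).promise();
}
```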
To demonstrate the solution’s capabilities, we will first provision two deployments, with an optional third cross-region deployment:

1. A silo deployment (silo1) in the us-east-1 Region.
2. A pool deployment (pool1) in the us-east-1 Region.
3. A pool deployment (pool2) in the eu-west-1 Region (optional).

To start, configure the AWS CLI environment variables:

Bash
```
export AWS_REGION=us-east-1
export AWS_PROFILE=<TOOLCHAIN_PROFILE_NAME>
```
Add the deployment database records for the first two deployments:

Bash
```
aws dynamodb put-item \
 --table-name unicorn-deployments \
 --item '{
 "id": {"S":"silo1"},
 "type": {"S":"silo"},
 "account": {"S":"<WORKLOAD_ACCOUNT_ID>"},
 "region": {"S":"us-east-1"}
 }'
aws dynamodb put-item \
 --table-name unicorn-deployments \
 --item '{
 "id": {"S":"pool1"},
 "type": {"S":"pool"},
 "account": {"S":"<WORKLOAD_ACCOUNT_ID>"},
 "region": {"S":"us-east-1"}
 }'
```

This will trigger two parallel builds of the provisioning CodeBuild project. Use the CodeBuild console to observe the status and progress of each build.

#### **Cross-region deployment (optional)**
Optionally, also try a cross-region deployment. Skip this part if a cross-region deployment is not relevant for your use case.

First, you must bootstrap the target Region in the toolchain and the workload accounts. Bootstrapping eu-west-1 is identical to the earlier bootstrapping of the us-east-1 Region. First, bootstrap the toolchain account:

Bash
```
npx cdk bootstrap <TOOLCHAIN_ACCOUNT_ID>/eu-west-1 --profile <TOOLCHAIN_PROFILE_NAME>
```
If you have a separate workload account, then it must also be bootstrapped for the new Region. Again, please skip this if you have only a single account:

Bash
```
npx cdk bootstrap <WORKLOAD_ACCOUNT_ID>/eu-west-1 \
 --profile <WORKLOAD_PROFILE_NAME> \
 --trust <TOOLCHAIN_ACCOUNT_ID> \
 --cloudformation-execution-policies arn:aws:iam::<WORKLOAD_ACCOUNT_ID>:policy/CDK-Exec-Policy
```
Then, add the cross-region deployment:

Bash
```
aws dynamodb put-item \
 --table-name unicorn-deployments \
 --item '{
 "id": {"S":"pool2"},
 "type": {"S":"pool"},
 "account": {"S":"<WORKLOAD_ACCOUNT_ID>"},
 "region": {"S":"eu-west-1"}
 }'
```

#### **Validation of deployments**
After the builds have completed, use the CodePipeline console to verify that the deployment pipelines were successfully created in the toolchain account:

![image.png](https://dev-media.amazoncloud.cn/75a42e7068e14d3f96846268e9ea21ef_image.png)

::: hljs-center

*Figure 3: CodePipeline console*

:::
Similarly, in the workload account, stacks containing your component resources will have been deployed to each configured Region for the deployments. In this demo, we are deploying a single “hello world” container application utilizing AWS App Runner as the runtime environment. Successful deployment can be verified by using the CloudFormation console:

![image.png](https://dev-media.amazoncloud.cn/a29852fda3e24616a0a9f44cc4077e12_image.png)

::: hljs-center

*Figure 4: CloudFormation console*

:::
Now that we have successfully finished our demo deployments, let’s look at how updates to the pipelines and the component resources can be managed.

### **Managing application resources as code**
As highlighted earlier in the Solution Overview, every aspect of our solution shares a single source repository. With all of our code in a single source, we can easily deliver complex changes impacting multiple aspects of our solution, all packaged, tested, and released as a single change set. For example, a change can introduce a new stage to the CI/CD pipeline, modify an existing stage in the silo and pool pipelines, and/or make code and resource changes to the component resources.

Managing the pipeline definitions is made simple by the self-mutate capability of CDK Pipelines. Once initially deployed, each CDK Pipelines pipeline can update its own definition. This is implemented by using a separate SelfMutate stage in the pipeline definition. This stage is executed before any deployment actions, thereby ensuring that the pipeline always executes the latest version that is defined by the source code.

Managing how and when the pipelines trigger also required attention. By default, CDK Pipelines configures pipelines to trigger on events from the source repository. While this is a reasonable default, and it is great for the CI/CD pipeline, it is undesired for our silo and pool pipelines. If all of these pipelines executed automatically on code commits to the source repository, the CI/CD pipeline could not manage the release flow. To address this, we have configured the silo and pool pipelines with the trigger in the ```CodeCommitSourceOptions``` set to NONE.
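In the pipeline definition, this looks roughly as follows; the scope declaration and repository lookup are illustrative.

TypeScript
```
import * as cdk from '@aws-cdk/core';
import * as codecommit from '@aws-cdk/aws-codecommit';
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions';
import * as pipelines from '@aws-cdk/pipelines';

declare const stack: cdk.Stack;

// Look up the shared repository; silo and pool pipelines use the same source.
const repository = codecommit.Repository.fromRepositoryName(
  stack, 'Repository', 'unicorn-repository');

// trigger: NONE removes the automatic start on commits, leaving the CI/CD
// pipeline's update stage in control of when releases happen.
const source = pipelines.CodePipelineSource.codeCommit(repository, 'main', {
  trigger: codepipeline_actions.CodeCommitTrigger.NONE,
});
```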
### **Controlling deployment updates**
A key aspect of SaaS delivery is controlling how you roll out changes to tenants. Significant business risk can arise if changes are released to all tenants at once in a single big bang.

This risk can be managed by utilizing a combination of silo and pool deployments. Reduce your risk by spreading tenants into multiple pools, and by gradually rolling out your changes to these pools. Based on business needs and/or risk assessment, select customers can be provisioned into dedicated silo deployments, thereby allowing update control for those customers separately. Note that while all of a pool’s tenants get the same underlying update simultaneously, you can utilize feature flags to selectively enable new features only for specific tenants in the deployment.

In the demo solution, the CI/CD pipeline contains only a single custom stage, “UpdateDeployments”. This CodeBuild action implements a simple “one-at-a-time” strategy. The code has been purposely kept simple, and it provides you with a starting point for implementing your own, more complex strategy based on your unique business needs. In the default implementation, every silo and pool pipeline tracks the same “main” branch of the repository. Releases are governed by controlling when each pipeline executes to update its resources.
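For concreteness, here is a rough sketch of such a one-at-a-time strategy using the Node.js AWS SDK. The pipeline names, polling interval, and error handling are illustrative; the sample repository contains the actual implementation.

TypeScript
```
import * as AWS from 'aws-sdk';

const codepipeline = new AWS.CodePipeline();

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Start one deployment pipeline and wait until the execution settles.
async function releaseAndWait(name: string): Promise<void> {
  const { pipelineExecutionId } = await codepipeline
    .startPipelineExecution({ name }).promise();
  for (;;) {
    const { pipelineExecution } = await codepipeline.getPipelineExecution({
      pipelineName: name,
      pipelineExecutionId: pipelineExecutionId!,
    }).promise();
    if (pipelineExecution?.status !== 'InProgress') {
      if (pipelineExecution?.status !== 'Succeeded') {
        throw new Error(`${name} finished with status ${pipelineExecution?.status}`);
      }
      return;
    }
    await sleep(30_000); // illustrative polling interval
  }
}

// "One-at-a-time": update each silo and pool pipeline sequentially,
// stopping at the first failure.
export async function updateDeployments(pipelineNames: string[]): Promise<void> {
  for (const name of pipelineNames) {
    await releaseAndWait(name);
  }
}
```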
When designing your release strategy, look into how the planned process helps implement releases and changes with high quality and frequency. A typical starting point is a CI/CD pipeline with continuous automated deployments via multiple test and staging environments in order to validate your changes prior to deployment to any production tenants.

Furthermore, consider if utilizing a canary release strategy would help identify potential issues with your changes prior to rolling them out across all deployments in production. In a canary release, each change is first deployed only to a small subset of your deployments. Once you are satisfied with the change quality, the change can then be released to the rest of your deployments, either automatically or manually. [As an example, an AWS Step Functions state machine could be combined with the solution](https://aws.amazon.com/blogs/aws/new-building-a-continuous-integration-workflow-with-step-functions-and-aws-codebuild/), and then utilized to control the release flow, execute validation tests, implement approval steps (either manual or automatic), and even conduct rollback if necessary.

### **Further considerations**
The example in this post provisions every silo and pool deployment to a single AWS account. However, the solution is not limited to a single account, and it can deploy equally easily to multiple AWS accounts. When operating at scale, it is a best practice to spread your workloads across several accounts. The [Organizing Your AWS Environment Using Multiple Accounts](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.pdf) whitepaper has in-depth guidance on strategies for spreading your workloads.

If combined with an AWS account-vending machine implementation, such as an [AWS Control Tower](https://aws.amazon.com/controltower/) Landing Zone, the demo solution could be adapted so that new AWS accounts are provisioned automatically. This would be useful if your business requires full account-level deployment isolation, and you also want automated provisioning.

To meet Unicorn’s future needs for spreading their solution architecture over multiple separate components, the deployment database and the associated Lambda function could be decoupled from the rest of the toolchain components in order to provide a central deployment service. When provisioned as standalone, and amended, for example, with Amazon Simple Notification Service-based notifications sent to the component deployment systems, this central deployment service could be utilized for managing the deployments for multiple components.
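As a rough sketch of that idea, the stream trigger could publish deployment events to an SNS topic instead of starting a build directly, letting each component’s deployment system subscribe and react on its own. The topic configuration and message shape below are assumptions.

TypeScript
```
import { DynamoDBStreamEvent } from 'aws-lambda';
import * as AWS from 'aws-sdk';

const sns = new AWS.SNS();

// Variant of the provisioning trigger for a central deployment service:
// publish each new deployment record as a notification.
export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    if (record.eventName !== 'INSERT' || !record.dynamodb?.NewImage) {
      continue;
    }
    const image = record.dynamodb.NewImage;
    await sns.publish({
      TopicArn: process.env.DEPLOYMENT_TOPIC_ARN!, // assumed configuration
      Message: JSON.stringify({
        id: image.id.S,
        type: image.type.S,
        account: image.account.S,
        region: image.region.S,
      }),
    }).promise();
  }
};
```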
In addition, you should analyze your deployment lifecycle transitions, and consider what action should be taken when a tenant is disabled and/or deleted. Implementing a deployment archival/deletion process is not in the scope of this post.

### **Cleanup**
To clean up every resource deployed in this post, take the following actions:

1. In the workload account:

a. In the us-east-1 Region, delete the CloudFormation stacks named “pool-pool1-resources” and “silo-silo1-resources”, and the CDK bootstrap stack “CDKToolKit”.

b. In the eu-west-1 Region, delete the CloudFormation stack named “pool-pool2-resources” and the CDK bootstrap stack “CDKToolKit”.

2. In the toolchain account:

a. In the us-east-1 Region, delete the CloudFormation stacks “toolchain”, “pool-pool1-pipeline”, “pool-pool2-pipeline”, and “silo-silo1-pipeline”, and the CDK bootstrap stack “CDKToolKit”.

b. In the eu-west-1 Region, delete the CloudFormation stack “pool-pool2-pipeline-support-eu-west-1” and the CDK bootstrap stack “CDKToolKit”.

c. Empty and delete the S3 buckets “toolchain-*”, “pool-pool1-pipeline-*”, “pool-pool2-pipeline-*”, and “silo-silo1-pipeline-*”.

### **Conclusion**
This post demonstrated an implementation of an automated SaaS application component deployment factory. We covered how an ISV venturing into the SaaS model can utilize AWS CDK and CDK Pipelines in order to avoid a multitude of undifferentiated heavy lifting, by leveraging and combining AWS CDK’s cross-region and cross-account capabilities with CDK Pipelines’ self-mutating deployment pipelines. Furthermore, we demonstrated how all of this can be written, managed, and released just like any other code you write. We also demonstrated how a single dynamic provisioning system can operate in a mixed mode, with both silo and pool deployments.

Visit the [AWS SaaS Factory Program](https://aws.amazon.com/partners/programs/saas-factory/) page for further information on how AWS can help you on your SaaS journey, regardless of the stage you are currently in.

#### **About the authors**

![image.png](https://dev-media.amazoncloud.cn/061ad2317ea04e13b6f5412b642e4406_image.png)

**Jani Muuriaisniemi**
Jani is a Principal Solutions Architect at Amazon Web Services, based out of Helsinki, Finland. With more than 20 years of industry experience, he works as a trusted advisor with a broad range of customers across different industries and segments, helping the customers on their cloud journey.

![image.png](https://dev-media.amazoncloud.cn/35f0b701c7ef453bab5365ee63881135_image.png)

**Jose Juhala**
Jose is a Solutions Architect at Amazon Web Services, based out of Tampere, Finland. He works with customers in the Nordics and Baltics, across different industries, and guides them in their technical implementation and architectural questions.