Jenkins high availability and disaster recovery on AWS

{"value":"We often hear from customers about their challenges architecting [Jenkins](https://www.jenkins.io/) for scale and high availability (HA). Jenkins was originally built as a continuous integration (CI) system to test software before it was committed to a repository. Since its beginning, Jenkins has grown out of necessity versus grand master plan. Developers who extended Jenkins favored speed of creating functionality over performance or scalability of the entire system. This is not to say that it’s impossible to scale Jenkins, it’s only mentioned here to highlight the challenges and technical debt that has accumulated because of the prioritization of features versus developing towards a specific architecture. In this post, we discuss these challenges and our proposed solution.\n\n### **Challenges with Jenkins at scale and HA**\nBusiness and customer demand are forcing organizations to increase the speed and agility at which they release features and functionality. As organizations make this transition, the usage of continuous integration and continuous delivery (CI/CD) increases, which drives the need to scale Jenkins. Overlay this with an organization that commits hundreds of changes per day and works around the clock, with developers dispersed globally, and you end up with an operational situation where there is no room for downtime. To mitigate the risk of impacting an organization’s ability to release when they need it, developers require a system that not only scales but is also highly available.\n\nThe ability to scale Jenkins and provide HA comes down to two problems. One is the ability to scale compute to handle additional jobs, and the second is storage. To scale compute, we typically do it in one of two ways, horizontally or vertically. Horizontally means we scale Jenkins to add additional compute nodes. Scaling vertically means we scale Jenkins by adding more resources to the compute node.\n\nLet’s start with the storage problem. Jenkins is designed around the local file system. Anyone who has spent time around Jenkins is aware that logs, cloned repos, plugins, and build artifacts are stored into JENKINS_HOME. Local file systems, while good for single-server designs, tend to be a challenge when HA comes into the picture. In on-premises designs, administrators have often used Network File System (NFS) and Storage Area Networks (SAN) to achieve some scale and resiliency. This type of design comes with a trade-off of performance and doesn’t provide the true HA and inherent disaster recovery (DR) required to meet the demands of the business.\n\nBecause of the local file system constraint, there are two native families of storage available in AWS: [Amazon Elastic Block Store](https://aws.amazon.com/ebs/) ([Amazon EBS](https://aws.amazon.com/cn/ebs/?trk=cndc-detail)) and [Amazon Elastic File System](https://aws.amazon.com/efs/) ([Amazon EFS](https://aws.amazon.com/cn/efs/?trk=cndc-detail)). [Amazon EBS](https://aws.amazon.com/cn/ebs/?trk=cndc-detail) is great for a single-server design in a single [Availability Zone](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/#Availability_Zones). The challenge is trying to scale a single-server design to support HA. Because of the requirement to assign an EBS volume to a specific Availability Zone, you can’t automatically transition the EBS volume to another Availability Zone and attach it to a Jenkins instance. 
Jenkins Controller and Agent would run in an Availability Zone as Kubernetes pods. Amazon EKS is designed around Desired State Configuration (DSC), which means that it continuously makes sure that the running environment matches the configuration that has been applied to Amazon EKS. In practice, when Amazon EKS is told that you want a single pod of Jenkins running, it monitors and makes sure that the pod is always running. If an Availability Zone is unavailable, Amazon EKS launches a new node in another Availability Zone and deploys all pods to meet any necessary constraints defined in Amazon EKS. With this option, we still need to have the data in other Availability Zones, which we cover later in this post.

The only option for scaling Jenkins controllers is vertical. Scaling Jenkins horizontally could lead to an undesirable state because the system wasn’t designed to have multiple instances of Jenkins attached to the same storage layer; there is no exclusive file locking mechanism to ensure data consistency. For organizations that have exhausted the limits of vertical scaling, the recommendation is to run multiple independent Jenkins controllers and separate them per team or group. Vertical scaling of Jenkins is simpler in Amazon EKS: node sizes and container memory are controlled by configuration, and increasing memory is as simple as changing a container’s memory setting. Because configuration changes are easy, it’s best to start with a lower memory setting, monitor performance, and increase as necessary. You want to find a good balance between price and performance.
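As a sketch of what such a setting looks like, assuming the community [Jenkins Helm chart](https://github.com/jenkinsci/helm-charts) referenced later in this post, the controller’s CPU and memory can be declared through Helm values. The numbers below are starting-point placeholders, not recommendations:

```
controller:
  # Resource requests and limits for the Jenkins controller container.
  # Start conservatively, monitor performance, and raise as needed.
  resources:
    requests:
      cpu: "1000m"
      memory: "2Gi"
    limits:
      cpu: "2000m"
      memory: "4Gi"
```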
For Jenkins agents, there are many options to scale the compute. In the context of scale and HA, the best options are to use [AWS CodeBuild](https://aws.amazon.com/codebuild/), [AWS Fargate for Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html), or [Amazon EKS managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html). With CodeBuild, you don’t need to provision, manage, or scale your build servers. CodeBuild scales continuously and processes multiple builds concurrently. You can use the [Jenkins plugin for CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/jenkins-plugin.html) to integrate CodeBuild with Jenkins. Fargate is a good option but has some challenges if you’re trying to build container images within a container, because the permissions required to do so aren’t exposed in Fargate. For additional information on how to overcome this challenge with Jenkins, refer to [How to build container images with Amazon EKS on Fargate](https://aws.amazon.com/blogs/containers/how-to-build-container-images-with-amazon-eks-on-fargate/).
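If agents run as pods on Amazon EKS managed node groups (via the Kubernetes integration that the Jenkins Helm chart configures), their size can likewise be declared in Helm values. This is a minimal sketch with placeholder sizes, assuming the same community chart:

```
agent:
  # Ephemeral agent pods scheduled onto the EKS managed node group.
  resources:
    requests:
      cpu: "500m"
      memory: "1Gi"
    limits:
      cpu: "1000m"
      memory: "2Gi"
```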
Now let’s look at the storage layer and see how [LINBIT](https://linbit.com/) is helping organizations solve this problem with [LINSTOR](https://github.com/LINBIT/linstor-server). LINBIT’s LINSTOR is an open-source management tool designed to manage block storage devices. Its primary use case is to provide Linux block storage for Kubernetes and other public and private cloud platforms. LINBIT also provides an [enterprise subscription](https://linbit.com/sla/) for LINSTOR, which includes technical support with an SLA.

The following diagram illustrates a LINSTOR storage solution running on Amazon EKS using multiple Availability Zones and [Amazon Simple Storage Service](http://aws.amazon.com/s3) (Amazon S3) for snapshots.

![image.png](https://dev-media.amazoncloud.cn/9649412100524726b3ffe220f5985024_image.png)

Figure 2 – LINSTOR storage solution running on Amazon EKS using multiple Availability Zones and Amazon S3 for snapshots.

LINSTOR is composed of a control plane and a data plane. The control plane consists of a set of containers deployed into Amazon EKS and is responsible for managing the data plane. The data plane consists of a collection of open-source block storage software, most importantly LINBIT’s [Distributed Replicated Storage System (DRBD)](https://linbit.com/drbd/) software. DRBD is responsible for provisioning and synchronously replicating storage between Amazon EKS worker instances in different Availability Zones.

LINSTOR is deployed via [Helm](https://helm.sh/) into Amazon EKS, and the LINSTOR cluster is initialized by the LINSTOR Operator. Once deployed, LINSTOR volumes and volume snapshots are managed via Kubernetes [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) and [Snapshot Classes](https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/) in a Kubernetes-native fashion. LINSTOR volumes are backed by LINSTOR objects known as storage pools, which are composed of one or more EBS volumes attached to each Amazon EKS worker instance.

LINSTOR volumes layer DRBD on top of the worker’s attached EBS volume to enable synchronous replication between peers in the Amazon EKS cluster. This ensures that you have an identical copy of your persistent volume on the EBS volumes in each Availability Zone. In the event of an Availability Zone outage or planned migration, Amazon EKS moves the Jenkins deployment to another Availability Zone where a copy of the persistent volume is available. In terms of scaling, LINBIT DRBD supports up to 32 replicas per volume, with a maximum size of 1 PiB per volume. A LINSTOR cluster itself can scale beyond hundreds of nodes, as shown in this case study.

LINSTOR also provides an HA Controller component in its control plane to speed up failover times during outages. LINSTOR’s HA Controller looks for pods with a specific label, and if LINSTOR’s persistent volume replication network becomes interrupted (such as during an Availability Zone outage), LINSTOR reschedules the pod sooner than the default Kubernetes `pod-eviction-timeout`.

LINBIT provides a detailed installation guide for [Jenkins HA in AWS](https://linbit.com/direct-ha-jenkins-aws/).
A sample of LINSTOR’s Helm values supporting these features is as follows:

```
operator:
  satelliteSet:
    storagePools:
      lvmThinPools:
        - name: lvm-thin
          thinVolume: thinpool
          volumeGroup: ""
          devicePaths:
            - /dev/nvme1n1
    kernelModuleInjectionMode: Compile
stork:
  enabled: false
csi:
  enableTopology: true
etcd:
  replicas: 3
haController:
  replicas: 3
```

After LINSTOR is deployed, you create a Kubernetes StorageClass supporting persistent volumes with three replicas using the following example:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-r3"
provisioner: linstor.csi.linbit.com
parameters:
  allowRemoteVolumeAccess: "false"
  autoPlace: "3"
  storagePool: "lvm-thin"
  DrbdOptions/Disk/disk-flushes: "no"
  DrbdOptions/Disk/md-flushes: "no"
  DrbdOptions/Net/max-buffers: "10000"
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```

Finally, the [Jenkins Helm chart](https://github.com/jenkinsci/helm-charts) is deployed into Amazon EKS with the following Helm values to request a PV from the LINSTOR StorageClass:

```
persistence:
  storageClass: linstor-csi-lvm-thin-r3
  size: "200Gi"
controller:
  serviceType: LoadBalancer
  podLabels:
    linstor.csi.linbit.com/on-storage-lost: remove
```

To protect against entire AWS Region outages and provide disaster recovery, LINSTOR takes volume snapshots and replicates them across Regions using Amazon S3. LINSTOR requires read and write access to the target S3 bucket using AWS credentials provided as Kubernetes secrets:

```
kind: Secret
apiVersion: v1
metadata:
  name: linstor-csi-s3-access
  namespace: default
type: linstor.csi.linbit.com/s3-credentials.v1
immutable: true
stringData:
  access-key: REDACTED
  secret-key: REDACTED
```

The target S3 bucket is referenced as a snapshot shipping target using a LINSTOR S3 VolumeSnapshotClass.
The following example shows a VolumeSnapshotClass referencing the S3 bucket’s secret and additional configuration for the target S3 bucket:

```
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
metadata:
  name: linstor-csi-snapshot-class-s3
driver: linstor.csi.linbit.com
deletionPolicy: Delete
parameters:
  snap.linstor.csi.linbit.com/type: S3
  snap.linstor.csi.linbit.com/remote-name: s3-us-west-2
  snap.linstor.csi.linbit.com/allow-incremental: "false"
  snap.linstor.csi.linbit.com/s3-bucket: name-of-bucket-123
  snap.linstor.csi.linbit.com/s3-endpoint: http://s3.us-west-2.amazonaws.com
  snap.linstor.csi.linbit.com/s3-signing-region: us-west-2
  snap.linstor.csi.linbit.com/s3-use-path-style: "false"
  # Secret to store access credentials
  csi.storage.k8s.io/snapshotter-secret-name: linstor-csi-s3-access
  csi.storage.k8s.io/snapshotter-secret-namespace: default
```

The Jenkins deployment’s persistent volume claim (PVC) is stored as a snapshot in Amazon S3 by using a standard Kubernetes VolumeSnapshot definition with LINSTOR’s snapshot class for Amazon S3:

```
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: jenkins-dr-snapshot-0
spec:
  volumeSnapshotClassName: linstor-csi-snapshot-class-s3
  source:
    persistentVolumeClaimName: <jenkins-pvc-name>
```
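For the recovery side of DR, a snapshot that is available in the recovery cluster can seed a new PVC. The following is a minimal sketch using the standard Kubernetes VolumeSnapshot data source; the PVC name is a placeholder, and it assumes the VolumeSnapshot from the previous example and the LINSTOR StorageClass exist in the target cluster:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-restored        # hypothetical name for the recovered volume
spec:
  storageClassName: linstor-csi-lvm-thin-r3
  dataSource:
    name: jenkins-dr-snapshot-0      # the VolumeSnapshot created above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
```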
### **Conclusion**
In this post, we explained the challenges of scaling Jenkins for HA and DR. We also reviewed Jenkins storage architecture with Amazon EBS and Amazon EFS and where each applies. We demonstrated how you can use Amazon EKS to scale Jenkins compute for HA and how AWS Partner solutions such as LINBIT LINSTOR can help scale Jenkins storage for HA and DR. Combining both solutions can help organizations maintain their ability to deploy software with speed and agility. We hope you found this post useful as you think through building your CI/CD infrastructure in AWS. To learn more about running Jenkins in Amazon EKS, check out [Orchestrate Jenkins Workloads using Dynamic Pod Autoscaling with Amazon EKS](https://aws.amazon.com/blogs/devops/orchestrate-jenkins-workloads-using-dynamic-pod-autoscaling-with-amazon-eks/). To find out more about LINBIT’s LINSTOR, check the Jenkins [technical guide](https://linbit.com/direct-ha-jenkins-aws/).

#### **Authors:**

![image.png](https://dev-media.amazoncloud.cn/8838497f9ce44fe3a959eff7dc14c37b_image.png)

**James Bland**
James is a 25+ year veteran of the IT industry, helping organizations from startups to ultra-large enterprises achieve their business objectives. He has held various leadership roles in software development, worldwide infrastructure automation, and enterprise architecture. James was practicing DevOps long before the term became popularized. He holds a doctorate in computer science with a focus on leveraging machine learning algorithms for scaling systems. In his current role at AWS as the APN Global Tech Lead for DevOps, he works with partners to help shape the future of technology.

![image.png](https://dev-media.amazoncloud.cn/ca72120a77fe47c6999bf976938f95ae_image.png)

**Welly Siauw**
Welly Siauw is a Sr. Partner Solution Architect at Amazon Web Services (AWS). He spends his days working with customers and partners, solving architectural challenges. He is passionate about service integration and orchestration, serverless, and artificial intelligence (AI) and machine learning (ML). He has authored several AWS blogs and actively leads AWS Immersion Days and Activation Days. Welly spends his free time tinkering with espresso machines and hiking outdoors.

![image.png](https://dev-media.amazoncloud.cn/75e4446aae444bd282051e4f6d02fdca_image.png)

**Matt Kereczman**
Matt Kereczman is a Solutions Architect at LINBIT with a long history of Linux system administration and Linux systems engineering. Matt is a cornerstone of LINBIT’s technical team and plays an important role in making LINBIT’s and LINBIT’s customers’ solutions great. Matt was President of the GNU/Linux Club at Northampton Area Community College prior to graduating with honors from Pennsylvania College of Technology with a BS in Information Security. Open-source software and hardware are at the core of most of Matt’s hobbies.