{"value":"At HDI, one of the biggest European insurance group companies, we use AWS to build new services and capabilities and delight our customers. Working in the financial services industry, the company has to comply with numerous regulatory requirements in the areas of data protection and FSI regulations such as GDPR, German Supervisory Requirements for IT (VAIT) and Supervision of Insurance Undertakings (VAG). The same security and compliance assessment process in the cloud supports development productivity and organizational agility, and helps our teams innovate at a high pace and meet the growing demands of our internal and external customers.\n\nIn this post, we explore how HDI adopted AWS security and compliance best practices. We describe implementation of automated security and compliance monitoring of AWS resources using a combination of AWS and open-source solutions. We also go through the steps to implement automated security findings remediation and address continuous deployment of new security controls.\n\n### **Background**\nData analytics is the key capability for understanding our customers’ needs, driving business operations improvement, and developing new services, products, and capabilities for our customers. We needed a cloud-native data platform of virtually unlimited scale that offers descriptive and prescriptive analytics capabilities to internal teams with a high innovation pace and short experimentation cycles. One of the success metrics in our mission is time to market, therefore it’s important to provide flexibility to internal teams to quickly experiment with new use cases. At the same time, we’re vigilant about data privacy. Having a secure and compliant cloud environment is a prerequisite for every new experiment and use case on our data platform.\n\nCloud security and compliance implementation in the cloud is a shared effort between the Cloud Center of Competence team (C3), the Network Operation Center (NoC), and the product and platform teams. The C3 team is responsible for new AWS account provisioning, account security, and compliance baseline setup. Cross-account networking configuration is established and managed by the NoC team. Product teams are responsible for AWS services configuration to meet their requirements in the most efficient way. Typically, they deploy and configure infrastructure and application stacks, including the following:\n\n- **Network configuration** – [Amazon Virtual Private Cloud](http://aws.amazon.com/vpc) ([Amazon VPC](https://aws.amazon.com/cn/vpc/?trk=cndc-detail)) subnets and routing\n- **Object storage setup** – [Amazon Simple Storage Service](http://aws.amazon.com/s3) ([Amazon S3](https://aws.amazon.com/cn/s3/?trk=cndc-detail)) buckets and bucket policies\n- **Data encryption at rest configuration** – Management of [AWS Key Management Service](http://aws.amazon.com/kms) (AWS KMS) customer master keys (CMKs) and key policies\n- **Managed services configuration** – [AWS Glue](https://aws.amazon.com/glue) jobs, [AWS Cloud9](https://aws.amazon.com/cloud9/) environments, and others\n\nWe were looking for security controls model that would allow us to continuously monitor infrastructure and application components set up by all the teams. 
The model also needed to support guardrails that allow product teams to focus on new use case implementation while inheriting the security and compliance best practices promoted and enforced within our company.

### **Security and compliance baseline definition**

We started with the [AWS Well-Architected Framework Security Pillar](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/definition.html) whitepaper, which provides implementation guidance on the essential areas of security and compliance in the cloud, including identity and access management, infrastructure security, data protection, detection, and incident response. Although all five elements are equally important for implementing enterprise-grade security and compliance in the cloud, we saw an opportunity to improve on the controls of our on-premises environments by automating the detection and incident response elements. Continuous monitoring of AWS infrastructure and application changes, complemented by automated incident response against the security baseline, helps us foster security best practices and allows for a high innovation pace. Manual security reviews are no longer required to assess the security posture.

Our security and compliance controls framework is based on GDPR and several standards and programs, including ISO 27001 and C5. Translating the controls framework into a security and compliance baseline definition in the cloud isn’t always straightforward, so we use a number of guidelines. As a starting point, we use the [CIS Amazon Web Services Benchmarks](https://www.cisecurity.org/benchmark/amazon_web_services/), because they provide prescriptive recommendations and their controls cover multiple AWS security areas, including identity and access management, logging and monitoring configuration, and network configuration. CIS benchmarks are industry-recognized cybersecurity best practices and recommendations that cover a wide range of technology families, and are used by enterprise organizations around the world. We also apply the [GDPR compliance on AWS](https://docs.aws.amazon.com/whitepapers/latest/navigating-gdpr-compliance/data-access-controls.html) recommendations and the [AWS Foundational Security Best Practices](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-fsbp.html), which extend the controls recommended by the CIS AWS Foundations Benchmark in multiple control areas: inventory, logging, data protection, access management, and more.

### **Security controls implementation**
AWS provides multiple services that help implement security and compliance controls:

- [AWS CloudTrail](http://aws.amazon.com/cloudtrail) provides a history of events in an AWS account, including those originating from command line tools, AWS SDKs, AWS APIs, or the [AWS Management Console](http://aws.amazon.com/console). In addition, it allows exporting the event history for further analysis and subscribing to specific events to implement automated remediation.
- [AWS Config](https://aws.amazon.com/config/) allows you to monitor AWS resource configuration, and automatically evaluate and remediate incidents related to unexpected resource configurations.
AWS Config comes with [pre-built conformance pack sample templates](https://docs.aws.amazon.com/config/latest/developerguide/conformancepack-sample-templates.html) designed to help you meet operational best practices and compliance standards.
- [Amazon GuardDuty](https://aws.amazon.com/guardduty/) provides threat detection capabilities that continuously monitor network activity, data access patterns, and account behavior.

With multiple AWS services to use as building blocks for continuous monitoring and automation, there is a strong need for a consolidated findings overview and a unified remediation framework. This is where [AWS Security Hub](https://aws.amazon.com/security-hub/) comes into play. Security Hub provides built-in security standards that make it easy to enable foundational security controls. Security Hub integrates with CloudTrail, AWS Config, GuardDuty, and [other AWS services](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-internal-providers.html) out of the box, which eliminates the need to develop and maintain integration code. Security Hub also accepts [findings from third-party partner products](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-partner-providers.html) and provides APIs for custom product integration. Security Hub significantly reduces the effort of consolidating audit information coming from multiple AWS-native and third-party channels. Its API and ecosystem of supported partner products gave us confidence that we can adapt to changes in security and compliance standards with low effort.

While AWS provides a rich set of services to manage risk in line with the [Three Lines Model](https://global.theiia.org/about/about-internal-auditing/Public%20Documents/Three-Lines-Model-Updated.pdf), we were looking for wider community support in maintaining and extending security controls beyond those defined by the CIS benchmarks and the compliance and best practice recommendations on AWS. We came across [Prowler](https://github.com/toniblyx/prowler), an open-source tool for AWS security assessment, auditing, and infrastructure hardening. Prowler implements the CIS AWS benchmark controls and provides over 100 additional checks. We particularly appreciated that Prowler offers checks that help us meet GDPR and ISO 27001 requirements. Prowler delivers assessment reports in multiple formats, which makes it easy to archive reports for future auditing needs. In addition, Prowler integrates well with Security Hub, which allows us to use a single service for consolidating security and compliance incidents across a number of channels.

We came up with the solution architecture depicted in the following diagram.

![image.png](https://dev-media.amazoncloud.cn/cfa388cb105344fdad999baff5a66b4a_image.png)

Automated remediation solution architecture at HDI

Let’s take a closer look at the most critical components of this solution.

Prowler is a command line tool that uses the [AWS Command Line Interface](http://aws.amazon.com/cli) (AWS CLI) and a bash script. Individual Prowler checks are bash scripts organized into groups by compliance standard or AWS service. By supplying the corresponding command line arguments, we can run Prowler against a specific AWS Region or multiple Regions at the same time. We can run Prowler in multiple ways; we chose to run it as an [AWS Fargate](https://aws.amazon.com/fargate) task on [Amazon Elastic Container Service](http://aws.amazon.com/ecs) (Amazon ECS). Fargate is a serverless compute engine that runs Docker-compatible containers. Scheduled ECS Fargate tasks make it easy to perform periodic assessments of an AWS account and export the findings. We configured Prowler to run every 7 days in every account and Region it’s deployed into.

Security Hub acts as a single place for consolidating security findings from multiple sources. When Security Hub is enabled in a given Region, the CIS AWS Foundations Benchmark and Foundational Security Best Practices standards are enabled as well. Enabling these standards also configures the integration with AWS Config and GuardDuty. Integration with Prowler requires enabling the product integration on the Security Hub side by calling the `EnableImportFindingsForProduct` API action for the given product. Because Prowler supports integration with Security Hub out of the box, posting security findings is a matter of passing the right command line arguments: `-M json-asff` to format reports in the [AWS Security Finding Format (ASFF)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html) and `-S` to ship findings to Security Hub.
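The following is a minimal Boto3 sketch of enabling the Prowler product integration; the product ARN shown is an assumption and needs to be replaced with the Prowler integration ARN listed for your Region on the Security Hub integrations page.

```python
import boto3

securityhub = boto3.client("securityhub")

# Assumed Prowler product ARN; look up the exact ARN for your Region on the
# Security Hub integrations page before enabling the integration.
prowler_product_arn = "arn:aws:securityhub:eu-central-1::product/prowler/prowler"

# Allow Prowler to import findings into Security Hub in this account and Region.
response = securityhub.enable_import_findings_for_product(
    ProductArn=prowler_product_arn
)
print(response["ProductSubscriptionArn"])
```

After this one-time call, every Prowler run started with `-S` can post its findings into Security Hub for the account.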
Automated security findings remediation is implemented using [AWS Lambda](http://aws.amazon.com/lambda) functions and the [AWS SDK for Python (Boto3)](https://aws.amazon.com/sdk-for-python/). A remediation function can be triggered in two ways: automatically in response to a new security finding, or by a security engineer from the Security Hub findings page. In both cases, the same Lambda function is used. Remediation functions implement the recommendations of the respective security standards, whether that’s the CIS AWS Foundations Benchmark, the Foundational Security Best Practices standard, or others.

The exact activities performed depend on the finding type and its severity. Examples include deleting non-rotated [AWS Identity and Access Management](http://aws.amazon.com/iam) (IAM) access keys, enabling server-side encryption for S3 buckets, and deleting unencrypted [Amazon Elastic Block Store](http://aws.amazon.com/ebs) (Amazon EBS) volumes.

To trigger the Lambda functions, we use [Amazon EventBridge](https://aws.amazon.com/eventbridge/), which makes it easy to build an event-driven remediation engine and allows us to define Lambda functions as targets for Security Hub findings and custom actions. EventBridge allows us to define filters for security findings and therefore map finding types to specific remediation functions. Upon successfully performing a remediation, each function updates one or more Security Hub findings by calling the `BatchUpdateFindings` API and passing the corresponding finding ID.
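As a sketch of this mapping (the rule name, the `Title` filter value, and the function ARN are hypothetical), an EventBridge rule can match imported Security Hub findings of a specific type with a `FAILED` compliance status and route them to one remediation function:

```python
import json

import boto3

events = boto3.client("events")

# Match imported Security Hub findings of one specific type that failed the check.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Title": ["Ensure IAM password policy requires minimum length of 14 or greater"],
            "Compliance": {"Status": ["FAILED"]},
        }
    },
}

events.put_rule(
    Name="securityhub-iam-password-policy",
    EventPattern=json.dumps(event_pattern),
)

# Point the rule at the remediation function (the function also needs a
# resource-based permission that allows events.amazonaws.com to invoke it).
events.put_targets(
    Rule="securityhub-iam-password-policy",
    Targets=[
        {
            "Id": "iam-password-policy-remediation",
            "Arn": "arn:aws:lambda:eu-central-1:111122223333:function:iam-password-policy-remediation",
        }
    ],
)
```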
The following example code shows a function enforcing an IAM password policy:

```python
import boto3
import os
import logging
from botocore.exceptions import ClientError

iam = boto3.client("iam")
securityhub = boto3.client("securityhub")

log_level = os.environ.get("LOG_LEVEL", "INFO")
logging.root.setLevel(logging.getLevelName(log_level))
logger = logging.getLogger(__name__)


def lambda_handler(event, context, iam=iam, securityhub=securityhub):
    """Remediate findings related to cis15 and cis11.

    Params:
        event: Lambda event object
        context: Lambda context object
        iam: iam boto3 client
        securityhub: securityhub boto3 client
    Returns:
        No returns
    """
    finding_id = event["detail"]["findings"][0]["Id"]
    product_arn = event["detail"]["findings"][0]["ProductArn"]
    lambda_name = os.environ["AWS_LAMBDA_FUNCTION_NAME"]
    try:
        iam.update_account_password_policy(
            MinimumPasswordLength=14,
            RequireSymbols=True,
            RequireNumbers=True,
            RequireUppercaseCharacters=True,
            RequireLowercaseCharacters=True,
            AllowUsersToChangePassword=True,
            MaxPasswordAge=90,
            PasswordReusePrevention=24,
            HardExpiry=True,
        )
        logger.info("IAM Password Policy Updated")
    except ClientError as e:
        logger.exception(e)
        raise e
    try:
        securityhub.batch_update_findings(
            FindingIdentifiers=[{"Id": finding_id, "ProductArn": product_arn}],
            Note={
                "Text": "Changed non compliant password policy",
                "UpdatedBy": lambda_name,
            },
            Workflow={"Status": "RESOLVED"},
        )
    except ClientError as e:
        logger.exception(e)
        raise e
```

A key aspect of developing remediation Lambda functions is testability. To quickly iterate through testing cycles, we cover each remediation function with unit tests, in which the necessary dependencies are mocked and replaced with stub objects. Because no Lambda deployment is required to check the remediation logic, we can test newly developed functions and ensure the reliability of existing ones in seconds.

Each Lambda function is accompanied by an `event.json` document containing an example of an EventBridge event for a given security finding. A security finding event allows us to verify the remediation logic precisely, including the deletion or suspension of non-compliant resources, the finding status update in Security Hub, and the response returned. Unit tests cover both successful and erroneous remediation logic. We use pytest to develop unit tests, and botocore.stub and moto to replace runtime dependencies with mocks and stubs.
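For illustration, a minimal pytest sketch of this approach, assuming the handler above lives in a module named `remediation` (the module name and the sample event values are hypothetical); both clients are replaced with botocore stubs and use dummy credentials, so the test runs without any AWS access or Lambda deployment:

```python
import boto3
from botocore.stub import ANY, Stubber

import remediation  # assumed module containing the lambda_handler shown above

# Minimal stand-in for the event.json document that accompanies the function.
SAMPLE_EVENT = {
    "detail": {
        "findings": [
            {
                "Id": "arn:aws:securityhub:eu-central-1:111122223333:subscription/example/finding",
                "ProductArn": "arn:aws:securityhub:eu-central-1::product/prowler/prowler",
            }
        ]
    }
}


def test_password_policy_remediation(monkeypatch):
    monkeypatch.setenv("AWS_LAMBDA_FUNCTION_NAME", "iam-password-policy-remediation")

    # Dummy credentials so no real AWS account is touched.
    iam = boto3.client("iam", region_name="eu-central-1",
                       aws_access_key_id="testing", aws_secret_access_key="testing")
    securityhub = boto3.client("securityhub", region_name="eu-central-1",
                               aws_access_key_id="testing", aws_secret_access_key="testing")

    with Stubber(iam) as iam_stub, Stubber(securityhub) as hub_stub:
        # The handler must enforce the 14-character password policy.
        iam_stub.add_response("update_account_password_policy", {}, {
            "MinimumPasswordLength": 14,
            "RequireSymbols": True,
            "RequireNumbers": True,
            "RequireUppercaseCharacters": True,
            "RequireLowercaseCharacters": True,
            "AllowUsersToChangePassword": True,
            "MaxPasswordAge": 90,
            "PasswordReusePrevention": 24,
            "HardExpiry": True,
        })
        # The handler must then resolve the finding in Security Hub.
        hub_stub.add_response(
            "batch_update_findings",
            {"ProcessedFindings": [], "UnprocessedFindings": []},
            {"FindingIdentifiers": ANY, "Note": ANY, "Workflow": {"Status": "RESOLVED"}},
        )

        remediation.lambda_handler(SAMPLE_EVENT, None, iam=iam, securityhub=securityhub)

        iam_stub.assert_no_pending_responses()
        hub_stub.assert_no_pending_responses()
```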
### **Automated security findings remediation**
The following diagram illustrates our security assessment and automated remediation process.

![image.png](https://dev-media.amazoncloud.cn/2025407a88bc499381ca17353c9e0126_image.png)

The workflow includes the following steps:

1. An existing Security Hub integration performs periodic resource audits. The integration posts new security findings to Security Hub.
2. Security Hub reports the security incident to the company’s centralized ServiceNow instance by using the ServiceNow ITSM Security Hub integration.
3. Security Hub triggers automated remediation:
	1. Security Hub triggers the remediation function by sending an event to EventBridge. The event has a source field equal to `aws.securityhub`, a filter ID corresponding to the specific finding type, and a compliance status of `FAILED`. The combination of these fields allows us to map the event to a particular remediation function.
	2. The remediation function starts processing the security finding event.
	3. The function calls the `BatchUpdateFindings` Security Hub API to update the security finding status upon completing the remediation.
	4. Security Hub updates the corresponding security incident status in ServiceNow (see Step 2).
4. Alternatively, a security operations engineer resolves the security incident in ServiceNow:
	1. The engineer reviews the current security incident in ServiceNow.
	2. The engineer manually resolves the security incident in ServiceNow.
	3. ServiceNow updates the finding status through the Security Hub API, using the AWS Service Management Connector.
5. Alternatively, a platform security engineer triggers remediation:
	1. The engineer reviews the currently active security findings on the Security Hub findings page.
	2. The engineer triggers remediation from the findings page by selecting the appropriate action.
	3. Security Hub triggers the remediation function by sending an event with the source `aws.securityhub` to EventBridge. The automated remediation flow continues as described in Step 3.

### **Deployment automation**
Due to legal requirements, HDI uses the infrastructure as code (IaC) principle when defining and deploying AWS infrastructure. We started with [AWS CloudFormation](http://aws.amazon.com/cloudformation) templates defined in YAML or JSON format. The templates are static by nature and define resources in a declarative way. We found that as our solution grew in complexity, the CloudFormation templates also grew in size and complexity, because every deployed resource has to be defined explicitly. We wanted a solution that would increase our development productivity and simplify the infrastructure definition.

The [AWS Cloud Development Kit](https://aws.amazon.com/cdk/) (AWS CDK) helped us in two ways:

- The AWS CDK provides ready-to-use building blocks called constructs. These constructs encapsulate pre-configured AWS resources following best practices. For example, a Lambda function always gets an IAM role with an IAM policy that allows it to write logs to CloudWatch Logs, as shown in the sketch after this list.
- The AWS CDK allows us to use high-level programming languages to define the configuration of all AWS services. Imperative definitions allow us to build our own abstractions and reuse them to achieve concise resource definitions.

We found that implementing IaC with the AWS CDK is faster and less error-prone. At HDI, we use Python to build application logic and to define AWS infrastructure. The imperative nature of the AWS CDK is truly a turning point in fulfilling legal requirements and achieving high developer productivity at the same time.
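To illustrate, here is a minimal sketch using the Python AWS CDK with v1-style imports (matching the `core.Aws.ACCOUNT_ID` reference in the configuration example later in this post); the construct ID and asset path are hypothetical. Defining a remediation function takes a few lines, and the construct synthesizes the execution role and the CloudWatch Logs permissions for us:

```python
from aws_cdk import core
from aws_cdk import aws_lambda as _lambda


class RemediationStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The Function construct creates the execution role and attaches the
        # policy for writing logs to CloudWatch Logs; we only describe the code.
        _lambda.Function(
            self,
            "DeleteUnencryptedEbsVolumes",  # hypothetical construct ID
            runtime=_lambda.Runtime.PYTHON_3_8,
            handler="handler.lambda_handler",
            code=_lambda.Code.from_asset("remediations/delete_unencrypted_ebs_volumes"),
            environment={"ACCOUNT_ID": core.Aws.ACCOUNT_ID},
        )
```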
One of the AWS CDK constructs we use is the CDK pipeline construct. It creates a customizable continuous integration and continuous delivery (CI/CD) pipeline implemented with [AWS CodePipeline](http://aws.amazon.com/codepipeline). The source action is based on [AWS CodeCommit](https://aws.amazon.com/codecommit/). The synth action is responsible for creating a CloudFormation template from the AWS CDK project; it also runs the unit tests of the remediation functions. The pipeline actions are connected via artifacts. Lastly, the CDK pipeline construct offers a self-mutating feature, which allows us to maintain the AWS CDK project as well as the pipeline itself in a single code repository. Changes to the pipeline definition as well as to the automated remediation solution are deployed seamlessly. The actual solution deployment is also implemented as a CI/CD stage. Stages can eventually be deployed in cross-Region and cross-account patterns. To use cross-account deployments, the AWS CDK provides bootstrap functionality to create a trust relationship between AWS accounts.

The AWS CDK project is broken down into multiple stacks. To deploy the CI/CD pipeline, we run the `cdk deploy cicd-4-securityhub` command. To add a new Lambda remediation function, we add the remediation code, optional unit tests, and finally the Lambda remediation configuration object. This configuration object defines the Lambda function’s environment variables, necessary IAM policies, and external dependencies. See the following example of this configuration:

```python
prowler_729_lambda = {
    "name": "Prowler 7.29",
    "id": "prowler729",
    "description": "Remediates Prowler 7.29 by deleting/terminating unencrypted EC2 instances/EBS volumes",
    "policies": [
        _iam.PolicyStatement(
            effect=_iam.Effect.ALLOW,
            actions=["ec2:TerminateInstances", "ec2:DeleteVolume"],
            resources=["*"],
        )
    ],
    "path": "delete_unencrypted_ebs_volumes",
    "environment_variables": [
        {"key": "ACCOUNT_ID", "value": core.Aws.ACCOUNT_ID}
    ],
    "filter_id": ["prowler-extra729"],
}
```

Remediation functions are organized in accordance with the security and compliance frameworks they belong to. The AWS CDK code iterates over the remediation definition lists and synthesizes the corresponding policies and Lambda functions to be deployed later, as sketched below. Committing and pushing the Git changes triggers the CI/CD pipeline, which deploys the newly defined remediation function and adjusts the configuration of Prowler.
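A rough sketch of that iteration, assuming a list named `remediation_configs` that holds definition objects like `prowler_729_lambda` above; the construct IDs and the EventBridge rule wiring (matching on the finding generator ID) are illustrative assumptions, not the exact implementation:

```python
from aws_cdk import core
from aws_cdk import aws_events as events
from aws_cdk import aws_events_targets as targets
from aws_cdk import aws_lambda as _lambda


def add_remediations(stack: core.Stack, remediation_configs: list) -> None:
    """Create one Lambda function and one EventBridge rule per remediation definition."""
    for config in remediation_configs:
        function = _lambda.Function(
            stack,
            f"Remediation-{config['id']}",
            runtime=_lambda.Runtime.PYTHON_3_8,
            handler="handler.lambda_handler",
            code=_lambda.Code.from_asset(f"remediations/{config['path']}"),
            environment={v["key"]: v["value"] for v in config["environment_variables"]},
            description=config["description"],
        )
        # Attach the IAM statements the remediation needs.
        for statement in config["policies"]:
            function.add_to_role_policy(statement)

        # Route matching failed Security Hub findings to this function.
        events.Rule(
            stack,
            f"Rule-{config['id']}",
            event_pattern=events.EventPattern(
                source=["aws.securityhub"],
                detail={
                    "findings": {
                        "GeneratorId": config["filter_id"],
                        "Compliance": {"Status": ["FAILED"]},
                    }
                },
            ),
            targets=[targets.LambdaFunction(function)],
        )


# Inside the stack definition:
# add_remediations(self, [prowler_729_lambda])
```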
We are working on publishing the source code discussed in this blog post.

### **Looking forward**
As we keep introducing new use cases in the cloud, we plan to improve our solution in the following ways:

- Continuously add new controls based on our own experience and on evolving industry standards
- Introduce cross-account security and compliance assessment by consolidating findings in a central security account
- Improve the resiliency of automated remediation by introducing remediation failure notifications and retry queues
- Run a [Well-Architected review](https://docs.aws.amazon.com/wellarchitected/latest/framework/the-review-process.html) to identify and address possible areas of improvement

### **Conclusion**
Working on the solution described in this post helped us improve our security posture and meet compliance requirements in the cloud. Specifically, we were able to achieve the following:

- Gain a shared understanding across multiple teams of the security and compliance controls implementation, as well as of the shared responsibilities in the cloud
- Speed up security reviews of cloud environments by implementing continuous assessment and minimizing manual reviews
- Provide product and platform teams with secure and compliant environments
- Lay a foundation for future requirements and for improving our security posture in the cloud

*The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.*

#### **About the Authors**

![image.png](https://dev-media.amazoncloud.cn/53d772d4b877487182d0cbcddc06efdc_image.png)

**Dr. Malte Polley**
Dr. Malte Polley is a Cloud Solutions Architect for the Modern Data Platform (MDP) at HDI Germany. MDP focuses on DevSecOps practices applied to data analytics and provides a secure and compliant environment for every data product at HDI Germany. As a cloud enthusiast, Malte runs the AWS Hannover user group. When not working, Malte enjoys hiking with his family and improving his backyard vegetable garden.

![image.png](https://dev-media.amazoncloud.cn/9a7b4b0490fb4dc281a589454f372323_image.png)

**Uladzimir Palkhouski**
Uladzimir Palkhouski is a Sr. Solutions Architect at Amazon Web Services. Uladzimir supports German financial services industry customers on their cloud journey. He helps find practical, forward-looking solutions to complex technical and business challenges.