Accelerate your identity verification projects using AWS Amplify and Amazon Rekognition sample implementations

[Amazon Rekognition](https://aws.amazon.com/rekognition/) allows you to mitigate fraudulent attacks and minimize onboarding friction for legitimate customers through a streamlined identity verification process. This can result in an increase in customer trust and safety. Key capabilities of this solution include:

- Register a new user using a selfie
- Register a new user after a face match against an ID card, with ID card data extraction
- Authenticate a returning user

Amazon Rekognition offers pre-trained [facial recognition](https://docs.aws.amazon.com/rekognition/latest/dg/face-feature-differences.html) capabilities that you can quickly add to your user onboarding and authentication workflows to verify opted-in users’ identities online. No machine learning (ML) expertise is required to use this service.

In a previous [post](https://aws.amazon.com/blogs/machine-learning/identity-verification-using-amazon-rekognition/), we described a typical identity verification workflow and showed you how to build an identity verification solution using various Amazon Rekognition APIs. In this post, we add a facial identity-based authentication user interface to show a complete end-to-end identity verification solution. We provide a complete sample implementation in our [GitHub repository](https://github.com/aws-samples/rekognition-identity-verification).

### **Solution overview**

The following reference architecture shows how you can use Amazon Rekognition, along with other AWS services, to implement identity verification.

![image.png](https://dev-media.amazoncloud.cn/1d1512bd96dd44febfc86b37a79457a3_image.png)

The architecture includes the following components:

1. Users access the front-end web portal hosted within [AWS Amplify](https://aws.amazon.com/amplify/). Amplify is an end-to-end solution that enables front-end web developers to build and deploy secure, scalable full-stack applications.
2. Applications invoke [Amazon API Gateway](https://aws.amazon.com/api-gateway/) to route requests to the correct [AWS Lambda](https://aws.amazon.com/lambda/) function depending on the user flow. There are four major actions in this solution: authenticate, register, register with ID card, and update.
3. API Gateway uses a service integration to run the [AWS Step Functions](https://aws.amazon.com/step-functions/) express state machine corresponding to the specific endpoint called from API Gateway. Within each step, Lambda functions are responsible for triggering the correct set of calls to and from [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) and [Amazon Simple Storage Service](https://aws.amazon.com/s3/) (Amazon S3), along with the relevant Amazon Rekognition APIs.
4. DynamoDB holds face IDs (```face-id```), S3 path URIs, and unique IDs (for example, an employee ID number) for each ```face-id```. Amazon S3 stores all the face images.
5. The final major component of the solution is Amazon Rekognition. Each flow (authenticate, register, register with ID card, and update) calls different Amazon Rekognition APIs depending on the task.

Before we deploy the solution, it’s important to know the following concepts and API descriptions:

- [Collections](https://docs.aws.amazon.com/rekognition/latest/dg/collections.html) – Amazon Rekognition stores information about detected faces in server-side containers known as collections. You can use the facial information that’s stored in a collection to search for known faces in images, stored videos, and streaming videos. You can use collections in a variety of scenarios.
For example, you might create a face collection to store scanned badge images by using the [IndexFaces](https://docs.aws.amazon.com/rekognition/latest/dg/API_IndexFaces.html) operation. When an employee enters the building, an image of the employee’s face may be captured and sent to the [SearchFacesByImage](https://docs.aws.amazon.com/rekognition/latest/dg/API_SearchFacesByImage.html) operation. If the face match produces a sufficiently high similarity score (say 99%), you can authenticate the employee.
- [DetectFaces API](https://docs.aws.amazon.com/rekognition/latest/dg/API_DetectFaces.html) – This API detects faces within an image provided as input and returns information about the faces. In a user registration workflow, this operation can help you screen images before moving to the next step. For example, you can check that a photo contains a face, that the person is in the right orientation, and that they’re not wearing a face blocker such as sunglasses or a cap.
- [IndexFaces API](https://docs.aws.amazon.com/rekognition/latest/dg/API_IndexFaces.html) – This API detects faces in the input image and adds them to the specified collection. This operation is used to add a screened image to a collection for future queries.
- [SearchFacesByImage API](https://docs.aws.amazon.com/rekognition/latest/dg/API_SearchFacesByImage.html) – For a given input image, the API first detects the largest face in the image, and then searches the specified collection for matching faces. The operation compares the features of the input face with the face features in the specified collection.
- [CompareFaces API](https://docs.aws.amazon.com/rekognition/latest/dg/API_CompareFaces.html) – This API compares a face in the source input image with each of the 100 largest faces detected in the target input image. If the source image contains multiple faces, the service detects the largest face and compares it with each face detected in the target image.
For our use case, we expect both the source and target image to contain a single face.
- [DeleteFaces API](https://docs.aws.amazon.com/rekognition/latest/dg/API_DeleteFaces.html) – This API deletes faces from a collection. You specify a collection ID and an array of face IDs to remove.

### **Workflows**

The solution provides a sample of workflows to enable user registration, authentication, and updates to the user profile image. We detail each workflow in this section.

#### **Register a new user using a face selfie**

The following figure shows the workflow of a new user registration. Typical steps in this process are:

1. A user captures a selfie image.
2. A quality check of the selfie image is performed. **Note:** A liveness detection check can also be performed after this step. For more details, please read this [blog post](https://aws.amazon.com/pt/blogs/industries/improving-fraud-prevention-in-financial-institutions-by-building-a-liveness-detection-solution/).
3. The selfie is checked against a database of existing user faces.

![image.png](https://dev-media.amazoncloud.cn/cfdc719413e94c109028ba4ff4f11430_image.png)

The following image illustrates the Step Functions workflow for new user registration.

![image.png](https://dev-media.amazoncloud.cn/029df35b5aee4d32ac8687185d1d52f0_image.png)

Three functions are called in this workflow: [detect-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/detect-faces), [search-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/search-faces), and [index-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/index-faces).
The [detect-faces](https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/detect-faces/handler.py) function calls the Amazon Rekognition ```DetectFaces``` API to determine if a face is detected in an image and is usable. Some of the quality checks include determining that only one face is present in the image, ensuring the face isn’t obscured by sunglasses or a hat, and confirming that the face isn’t rotated, by using the [pose](https://docs.aws.amazon.com/rekognition/latest/dg/API_Pose.html) dimension. If the image passes the quality check, the [search-faces](https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/search-faces/handler.py) function searches for an existing face match in the Amazon Rekognition collections by confirming that the [FaceMatchThreshold](https://docs.aws.amazon.com/rekognition/latest/dg/API_SearchFaces.html) confidence score meets your threshold objective. For more information, refer to [Using similarity thresholds to match faces](https://docs.aws.amazon.com/rekognition/latest/dg/collections.html). If the face image doesn’t exist in the collections, the [index-faces](https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/index-faces/handler.py) function is called to index the face in the collections. The face image metadata is stored in the DynamoDB table, and the face images are stored in an S3 bucket.

If the new user registration succeeds, the face image attribute information is added in DynamoDB. You can customize the flow according to your business process. It often contains some or all of the steps presented in the preceding diagram. You can choose to run all the steps synchronously (wait for one step to complete before moving on to the next step). Alternatively, you can run some of the steps asynchronously (don’t wait for that step to complete) to speed up the user registration process and improve the customer experience.
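The registration sequence can be sketched with boto3 in a single function. This is a minimal illustration, not the repo’s actual handler code: the function names, the 99% similarity threshold, and the 30-degree pose limit are all hypothetical choices.

```python
def passes_quality_check(face_details, max_pose_degrees=30.0):
    """Screen a DetectFaces response: exactly one face, no sunglasses,
    and a pose that isn't rotated beyond the given limit."""
    if len(face_details) != 1:
        return False
    face = face_details[0]
    if face.get("Sunglasses", {}).get("Value", False):
        return False
    pose = face.get("Pose", {})
    return all(abs(pose.get(axis, 0.0)) <= max_pose_degrees
               for axis in ("Yaw", "Pitch", "Roll"))


def register_face(collection_id, image_bytes, external_id, region="us-east-1"):
    """Hypothetical register flow: DetectFaces -> SearchFacesByImage -> IndexFaces."""
    import boto3  # imported here so the pure helper above needs no AWS access
    rek = boto3.client("rekognition", region_name=region)

    detected = rek.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])
    if not passes_quality_check(detected["FaceDetails"]):
        return {"status": "REJECTED_QUALITY"}

    # Reject duplicate registrations: is this face already in the collection?
    matches = rek.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=99.0,  # tune per your similarity-threshold objective
        MaxFaces=1,
    )
    if matches["FaceMatches"]:
        return {"status": "ALREADY_REGISTERED"}

    # New face: index it so future searches can find it.
    indexed = rek.index_faces(
        CollectionId=collection_id,
        Image={"Bytes": image_bytes},
        ExternalImageId=external_id,
        MaxFaces=1,
    )
    return {"status": "REGISTERED",
            "face_id": indexed["FaceRecords"][0]["Face"]["FaceId"]}
```

In the sample solution, these checks run as separate Lambda functions inside a Step Functions state machine; collapsing them into one function here just keeps the sequence visible.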
If the steps aren’t successful, you must roll back the user registration.

#### **Register a new user after face match against an ID card with ID card data extraction**

In addition to user registration with a selfie image, this workflow allows users to register with an identification card such as a driver’s license. The steps to register a new user with an ID card are similar to the steps for registering a new user.

![image.png](https://dev-media.amazoncloud.cn/ac1f6b13b7294d8daf08e4c19b0bbb16_image.png)

The following image illustrates the Step Functions workflow for new user registration with ID.

![image.png](https://dev-media.amazoncloud.cn/3f61522d3980427ba596493ce32095d1_image.png)

Four functions are called in this workflow: [detect-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/detect-faces), [search-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/search-faces), [index-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/index-faces), and [compare-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/compare-faces).
The sequence of operations in this workflow is similar to the user registration workflow, with the addition of compare-faces. After verifying the quality of the selfie image and ensuring the face image is not present in the collection, the compare-faces function is invoked to verify that the selfie image matches the face image in the ID card. If the images match, the relevant properties are extracted from the ID card. You can extract key-value pairs from identity documents using the newly launched [Amazon Textract](https://aws.amazon.com/textract/) ```AnalyzeID``` API (for US regions) or the Amazon Rekognition ```DetectText``` API (non-US regions and non-English languages). The extracted properties from the ID card are merged, and the user’s face is indexed in the collection via the [index-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/index-faces) function.

The face image metadata is stored in the DynamoDB table, and the face images are stored in an S3 bucket.

If the images don’t match or a duplicate registration is detected, the user receives a login failure. Login failures can be logged using an [Amazon CloudWatch](http://aws.amazon.com/cloudwatch) event, and actions can be triggered using [Amazon Simple Notification Service](http://aws.amazon.com/sns) (Amazon SNS) to notify security operations for monitoring and tracking failed logins. For more information, refer to [Monitoring Amazon SNS topics using CloudWatch](https://docs.aws.amazon.com/sns/latest/dg/sns-monitoring-using-cloudwatch.html).

#### **Authenticate returning user**

Another common flow is an existing or returning user login. In this flow, a check of the user face (selfie) is performed against a previously registered face. Typical steps in this process include user face capture (selfie), a quality check of the selfie image, and a search and comparison of the selfie against the faces database.
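The search-and-compare decision for a returning user might look like the following boto3 sketch. This is an illustration only, not the repo’s handler code: the function names, the compare-then-search ordering, and the 99% similarity threshold are assumptions based on the flow described in this post.

```python
def best_match_face_id(face_matches, min_similarity=99.0):
    """Pure helper: pick the highest-similarity FaceId from a
    SearchFacesByImage 'FaceMatches' list, or None if nothing qualifies."""
    qualifying = [m for m in face_matches if m["Similarity"] >= min_similarity]
    if not qualifying:
        return None
    return max(qualifying, key=lambda m: m["Similarity"])["Face"]["FaceId"]


def authenticate_face(collection_id, selfie_bytes, stored_image_bytes=None,
                      similarity_threshold=99.0, region="us-east-1"):
    """Hypothetical returning-user flow: compare the selfie against the stored
    profile image first, then fall back to searching the whole collection."""
    import boto3  # imported here so the pure helper above needs no AWS access
    rek = boto3.client("rekognition", region_name=region)

    # Fast path: compare with the image linked via DynamoDB/S3 for this user.
    if stored_image_bytes is not None:
        result = rek.compare_faces(
            SourceImage={"Bytes": selfie_bytes},
            TargetImage={"Bytes": stored_image_bytes},
            SimilarityThreshold=similarity_threshold,
        )
        if result["FaceMatches"]:
            return {"status": "AUTHENTICATED", "via": "compare-faces"}

    # Fallback: search the collection for any indexed face that matches.
    matches = rek.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": selfie_bytes},
        FaceMatchThreshold=similarity_threshold,
        MaxFaces=1,
    )
    face_id = best_match_face_id(matches["FaceMatches"], similarity_threshold)
    if face_id is not None:
        return {"status": "AUTHENTICATED", "via": "search-faces",
                "face_id": face_id}
    return {"status": "DENIED"}
```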
The following diagram shows a possible flow.

![image.png](https://dev-media.amazoncloud.cn/d25cecdf305a4bc38018b272a4161928_image.png)

The following image illustrates the workflow for authenticating an existing user.

![image.png](https://dev-media.amazoncloud.cn/a125e6f91aa743a785ab7fbe8807e30a_image.png)

This Step Functions workflow calls three functions: [detect-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/detect-faces), [compare-faces](https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/compare-faces/handler.py), and [search-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/search-faces). After the [detect-faces](https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/detect-faces) function verifies that the captured face image is valid, the [compare-faces](https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/compare-faces/handler.py) function checks the link in the DynamoDB table for a face image in the S3 bucket that matches an existing user. If a match is found, the user authenticates successfully. If a match isn’t found, the search-faces function is called to search for the face image in the collections. The user is verified and the authentication process completes if their face image exists in the collections. Otherwise, the user’s access is denied.

### **Prerequisites**

Before you get started, complete the following prerequisites:

1. [Create an AWS account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/).
2. Install the [AWS Command Line Interface](http://aws.amazon.com/cli) (AWS CLI) version 2 on your local machine. For instructions, refer to [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
3. [Set up the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html).
4. [Install Node.js](https://nodejs.org/) on your local machine.
5. Clone the sample repo on your local machine:

```
git clone https://github.com/aws-samples/rekognition-identity-verification.git
```

### **Deploy the solution**

Choose the appropriate CloudFormation stack to provision the solution in your AWS account in your preferred Region. This solution deploys API Gateway integrated with Step Functions and Amazon Rekognition APIs to run the identity verification workflows.

Choosing one of the following launch buttons provisions the solution into your AWS account in that Region.

[![image.png](https://dev-media.amazoncloud.cn/055c6ac1f8e843bdb38b0064bdd65b56_image.png)](https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/template?stackName=Riv-Prod&templateURL=https://aws-ml-blog.s3.amazonaws.com/artifacts/rekognition-identity-verification-solution/Riv-Prod.template.json) N. Virginia (```us-east-1```)

[![image.png](https://dev-media.amazoncloud.cn/1ea4ba59c765499dad7363c3f662eba5_image.png)](https://us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/create/template?stackName=Riv-Prod&templateURL=https://aws-ml-blog.s3.amazonaws.com/artifacts/rekognition-identity-verification-solution/Riv-Prod.template.json) Oregon (```us-west-2```)

Run the following commands on your local machine to deploy the front-end application:

```
cd rekognition-identity-verification 
./fe-deployment.sh
```

#### **Invoke the web UI**

The web portal is deployed with Amplify. On the Amplify console, locate the hosted web application environment and the URL.
Copy the URL and access it from your browser.

![image.png](https://dev-media.amazoncloud.cn/6bb0632b1ad9411bb65398504204a38a_image.png)

#### **Register a new user using a face selfie**

Register yourself as a user with the following steps:

1. Open the web URL provided by Amplify.
2. Choose **Register**.
3. Enable your camera and capture a face image.
4. Enter your user name and details.
5. Choose **Signup** to register your account.

![image.png](https://dev-media.amazoncloud.cn/66c4e70d4fcd4408adf54c736c87d2d9_image.png)

#### **Authenticate returning user**

After you’re registered, you can log in using your face ID as an authentication mechanism.

1. Open the web URL provided by Amplify.
2. Capture your face ID.
3. Enter your user ID.
4. Choose **Login**.

![image.png](https://dev-media.amazoncloud.cn/654f141497dc4e26bea3c856593a0e8e_image.png)

You get a “Login successful” message after your face ID is verified with the registration image.

![image.png](https://dev-media.amazoncloud.cn/cfb6ec2d03734a2dba14b63c9d7b6ebd_image.png)

#### **Register a new user after face match against an ID card with ID card data extraction**

To test user registration with an ID, complete the following steps:

1. Open the web URL provided by Amplify.
2. Choose **Register with ID**.
3. Enable your camera and capture a face image.
4. Drag and drop your ID card.
5. Choose **Register**.

![image.png](https://dev-media.amazoncloud.cn/69fb8bc4689f427b8c35ab0702277968_image.png)

The following screenshot shows an example.
The application supports ID card images of up to 256 KB.

![image.png](https://dev-media.amazoncloud.cn/6a2d184aaea14c69b9383db511345737_image.png)

You receive a “Successfully Registered User” message.

![image.png](https://dev-media.amazoncloud.cn/ed8982c8e9344ef0b99edacff3afb090_image.png)

### **Clean up**

To prevent accruing additional charges in your AWS account, delete the resources you provisioned by navigating to the AWS CloudFormation console and deleting the ```Riv-Prod``` stack.

Deleting the stack doesn’t delete the S3 bucket you created. This bucket stores all the face images. If you want to delete the S3 bucket, navigate to the Amazon S3 console, empty the bucket, and then confirm you want to permanently delete it.

### **Conclusion**

Amazon Rekognition makes it easy to add image analysis to your identity verification applications using proven, highly scalable, deep learning technology that requires no ML expertise to use. Amazon Rekognition [provides face detection and comparison](https://docs.aws.amazon.com/rekognition/latest/dg/face-feature-differences.html) capabilities. With a combination of the [DetectFaces](https://docs.aws.amazon.com/rekognition/latest/dg/API_DetectFaces.html), [CompareFaces](https://docs.aws.amazon.com/rekognition/latest/dg/API_CompareFaces.html), [IndexFaces](https://docs.aws.amazon.com/rekognition/latest/dg/API_IndexFaces.html), [SearchFacesByImage](https://docs.aws.amazon.com/rekognition/latest/dg/API_SearchFacesByImage.html), [DetectText](https://docs.aws.amazon.com/rekognition/latest/dg/text-detection.html), and [AnalyzeID](https://docs.aws.amazon.com/textract/latest/dg/API_AnalyzeID.html) APIs, you can implement the common flows around new user registration and existing user logins.

Amazon Rekognition collections provide a method to store information about detected faces in server-side containers.
You can then use the facial information stored in a collection to search for known faces in images. When using collections, you don’t need to store original photos after you index faces in the collection. Amazon Rekognition collections don’t persist actual images. Instead, the underlying detection algorithm detects the faces in the input image, extracts facial features into a feature vector for each face, and stores the vectors in the collection.

To start your journey towards identity verification, visit [Identity Verification using Amazon Rekognition](https://aws.amazon.com/rekognition/identity-verification/).

#### **About the authors**

![image.png](https://dev-media.amazoncloud.cn/fcc9554da9bb4b96b4bf401ee473ee1a_image.png)

**Vineet Kacchawaha** is a Solutions Architect at AWS with expertise in machine learning. He is responsible for helping customers architect scalable, secure, and cost-effective workloads on AWS.

![image.png](https://dev-media.amazoncloud.cn/b9a6f1d36cfe452db59612315b4ed415_image.png)

**Ramesh Thiagarajan** is a Senior Solutions Architect based out of San Francisco. He holds a Bachelor of Science in Applied Sciences and a master’s in Cyber Security. He specializes in cloud migration, cloud security, compliance, and risk management. Outside of work, he is a passionate gardener and has an avid interest in real estate and home improvement projects.

![image.png](https://dev-media.amazoncloud.cn/4745bce79979473193bf26f5b78f570f_image.png)

**Amit Gupta** is an AI Services Solutions Architect at AWS. He is passionate about enabling customers with well-architected machine learning solutions at scale.

![image.png](https://dev-media.amazoncloud.cn/af60f6e7a3a84473b90696b1cec1aa04_image.png)

**Tim Murphy** is a Senior Solutions Architect for AWS, working with enterprise financial services customers building cloud-centric business solutions.
He has spent the last decade working with startups, nonprofits, commercial enterprises, and government agencies, deploying infrastructure at scale. In his spare time, when he isn’t tinkering with technology, you’ll most likely find him in far-flung areas of the earth hiking mountains, surfing waves, or biking through a new city.

![image.png](https://dev-media.amazoncloud.cn/4c3e0e727c2f44178f442d73af31581e_image.png)

**Nate Bachmeier** is an AWS Senior Solutions Architect who nomadically explores New York, one cloud integration at a time. He specializes in migrating and modernizing applications. Besides this, Nate is a full-time student and has two kids.

![image.png](https://dev-media.amazoncloud.cn/63b171bdf77a407bae30e8c043bf9430_image.png)

**Jessie-Lee Fry** is a Senior AI/ML Specialist with a focus on computer vision at AWS. She helps organizations leverage machine learning and AI to combat fraud and drive innovation on behalf of their customers. Outside of work, she enjoys spending time with her family, traveling, and reading all about Responsible AI.
No machine learning (ML) expertise is required to use this service.</p>\n<p>In a previous <a href=\"https://aws.amazon.com/blogs/machine-learning/identity-verification-using-amazon-rekognition/\" target=\"_blank\">post</a>, we described a typical identity verification workflow and showed you how to build an identity verification solution using various Amazon Rekognition APIs. In this post, we have added a facial identity-based authentication user interface to show a complete end-to-end identity verification solution. We provide a complete sample implementation in our <a href=\"https://github.com/aws-samples/rekognition-identity-verification\" target=\"_blank\">GitHub repository</a>.</p>\n<h3><a id=\"Solution_overview_10\"></a><strong>Solution overview</strong></h3>\n<p>The following reference architecture shows how you can use Amazon Rekognition, along with other AWS services, to implement identity verification.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/1d1512bd96dd44febfc86b37a79457a3_image.png\" alt=\"image.png\" /></p>\n<p>The architecture includes the following components:</p>\n<ol>\n<li>Users access the front-end web portal hosted within the <a href=\"https://aws.amazon.com/amplify/\" target=\"_blank\">AWS Amplify</a> Amplify is an end-to-end solution that enables front-end web developers to build and deploy secure, scalable full stack applications.</li>\n<li>Applications invoke <a href=\"https://aws.amazon.com/api-gateway/\" target=\"_blank\">Amazon API Gateway</a> to route requests to the correct <a href=\"https://aws.amazon.com/lambda/\" target=\"_blank\">AWS Lambda</a> function depending on the user flow. 
There are four major actions in this solution: authenticate, register, register with ID card, and update.</li>\n<li>API Gateway uses a service integration to run the <a href=\"https://aws.amazon.com/step-functions/?step-functions.sort-by=item.additionalFields.postDateTime&amp;step-functions.sort-order=desc\" target=\"_blank\">AWS Step Functions</a> express state machine corresponding to the specific endpoint called from API Gateway. Within each step, Lambda functions are responsible for triggering the correct set of calls to and from <a href=\"https://aws.amazon.com/dynamodb/\" target=\"_blank\">Amazon DynamoDB</a> and <a href=\"https://aws.amazon.com/s3/\" target=\"_blank\">Amazon Simple Storage Service</a> (Amazon S3), along with the relevant Amazon Rekognition APIs.</li>\n<li>DynamoDB holds face IDs (<code>face-id</code>), S3 path URIs, and unique IDs (for example employee ID number) for each <code>face-id</code>. Amazon S3 stores all the face images.</li>\n<li>The final major component of the solution is Amazon Rekognition. Each flow (authenticate, register, register with ID card, and update) calls different Amazon Rekognition APIs depending on the task.</li>\n</ol>\n<p>Before we deploy the solution, it’s important to know the following concepts and API descriptions:</p>\n<ul>\n<li><a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/collections.html\" target=\"_blank\">Collections</a> – Amazon Rekognition stores information about detected faces in server-side containers known as collections. You can use the facial information that’s stored in a collection to search for known faces in images, stored videos, and streaming videos. You can use collections in a variety of scenarios. 
For example, you might create a face collection to store scanned badge images by using the <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_IndexFaces.html\" target=\"_blank\">IndexFaces</a> When an employee enters the building, an image of the employee’s face may be captured and sent to the <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_SearchFacesByImage.html\" target=\"_blank\">SearchFacesByImage</a> operation. If the face match produces a sufficiently high similarity score (say 99%), you can authenticate the employee.</li>\n<li><a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_DetectFaces.html\" target=\"_blank\">DetectFaces API</a> – This API detects faces within an image provided as input and returns information about faces. In a user registration workflow, this operation may help you screen images before moving to the next step. For example, you can check if a photo contains a face, if the person identified is in the right orientation, and if they’re not wearing a face blocker such as sunglasses or a cap.</li>\n<li><a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_IndexFaces.html\" target=\"_blank\">IndexFaces API</a> – This API detects faces in the input image and adds them to the specified collection. This operation is used to add a screened image to a collection for future queries.</li>\n<li><a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_SearchFacesByImage.html\" target=\"_blank\">SearchFacesByImage API</a> – For a given input image, the API first detects the largest face in the image, and then searches the specified collection for matching faces. 
The operation compares the features of the input face with face features in the specified collection.</li>\n<li><a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_CompareFaces.html\" target=\"_blank\">CompareFaces API</a> – This API compares a face in the source input image with each of the 100 largest faces detected in the target input image. If the source image contains multiple faces, the service detects the largest face and compares it with each face detected in the target image. For our use case, we expect both the source and target image to contain a single face.</li>\n<li><a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_DeleteFaces.html\" target=\"_blank\">DeleteFaces API</a> – This API deletes faces from a collection. You specify a collection ID and an array of face IDs to remove.</li>\n</ul>\n<h3><a id=\"Workflows_37\"></a><strong>Workflows</strong></h3>\n<p>The solution provides a sample of workflows to enable user registration, authentication, and updates to the user profile image. We detail each workflow in this section.</p>\n<h4><a id=\"Register_a_new_user_using_a_face_selfie_41\"></a><strong>Register a new user using a face selfie</strong></h4>\n<p>The following figure shows the workflow of a new user registration. Typical steps in this process are:</p>\n<ol>\n<li>A user captures a selfie image.</li>\n<li>A quality check of the selfie image is performed. <strong>Note:</strong> A liveness detection check can also be performed after this step. 
For more details, please read this <a href=\"https://aws.amazon.com/pt/blogs/industries/improving-fraud-prevention-in-financial-institutions-by-building-a-liveness-detection-solution/\" target=\"_blank\">blog</a>.</li>\n<li>The selfie is checked against a database of existing user faces.</li>\n</ol>\n<p><img src=\"https://dev-media.amazoncloud.cn/cfdc719413e94c109028ba4ff4f11430_image.png\" alt=\"image.png\" /></p>\n<p>The following image illustrates the Step Functions workflow for new user registration.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/029df35b5aee4d32ac8687185d1d52f0_image.png\" alt=\"image.png\" /></p>\n<p>Three functions are called in this workflow: <a href=\"https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/detect-faces\" target=\"_blank\">detect-faces</a>, <a href=\"https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/search-faces\" target=\"_blank\">search-faces</a>, and <a href=\"https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/index-faces\" target=\"_blank\">index-faces</a>. The <a href=\"https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/detect-faces/handler.py\" target=\"_blank\">detect-faces</a> function calls the Amazon Rekognition <code>DetectFaces</code> API to determine if a face is detected in an image and is usable. Some of the quality checks include determining that only one face is present in the image, ensuring the face isn’t obscured by sunglasses or a hat, and confirming that the face isn’t rotated by using the <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_Pose.html\" target=\"_blank\">pose</a> dimension. 
If the image passes the quality check, the <a href=\"https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/search-faces/handler.py\" target=\"_blank\">search-faces</a> function searches the Amazon Rekognition collections for an existing face match, using the <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_SearchFaces.html\" target=\"_blank\">FaceMatchThreshold</a> parameter to ensure that matches meet your confidence objective. For more information, refer to <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/collections.html\" target=\"_blank\">Using similarity thresholds to match faces</a>. If the face image doesn’t exist in the collections, the <a href=\"https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/index-faces/handler.py\" target=\"_blank\">index-faces</a> function is called to index the face in the collections. The face image metadata is stored in the DynamoDB table, and the face images are stored in an S3 bucket.</p>\n<p>If the new user registration succeeds, the face image attribute information is added to DynamoDB. You can customize the flow according to your business process; it often contains some or all of the steps presented in the preceding diagram. You can choose to run all the steps synchronously (wait for one step to complete before moving on to the next step). Alternatively, you can run some of the steps asynchronously (don’t wait for that step to complete) to speed up the user registration process and improve the customer experience. 
If the steps aren’t successful, you must roll back the user registration.</p>\n<h4><a id=\"Register_a_new_user_after_face_match_against_an_ID_card_with_ID_card_data_extraction_61\"></a><strong>Register a new user after face match against an ID card with ID card data extraction</strong></h4>\n<p>In addition to registering with a selfie image, this workflow allows users to register with an identification card such as a driver’s license. The steps to register a new user with an ID card are similar to the steps for registering a new user.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/ac1f6b13b7294d8daf08e4c19b0bbb16_image.png\" alt=\"image.png\" /></p>\n<p>The following image illustrates the Step Functions workflow for new user registration with ID.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/3f61522d3980427ba596493ce32095d1_image.png\" alt=\"image.png\" /></p>\n<p>Four functions are called in this workflow: <a href=\"https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/detect-faces\" target=\"_blank\">detect-faces</a>, <a href=\"https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/search-faces\" target=\"_blank\">search-faces</a>, <a href=\"https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/index-faces\" target=\"_blank\">index-faces</a>, and <a href=\"https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/compare-faces/handler.py\" target=\"_blank\">compare-faces</a>. The sequence of operations in this workflow is similar to the user registration workflow, with the addition of compare-faces. After verifying the quality of the selfie image and ensuring the face image is not present in the collection, the compare-faces function is invoked to verify that the selfie image matches the face image on the ID card. If the images match, the relevant properties are extracted from the ID card. You can extract key-value pairs from identity documents using the newly launched Amazon <a href=\"https://aws.amazon.com/textract/\" target=\"_blank\">Textract</a> <code>AnalyzeID</code> API (for US Regions) or the Amazon Rekognition <code>DetectText</code> API (for non-US Regions and non-English languages). The extracted properties from the ID card are merged, and the user’s face is indexed in the collection via the <a href=\"https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/index-faces\" target=\"_blank\">index-faces</a> function.</p>\n<p>The face image metadata is stored in the DynamoDB table, and the face images are stored in an S3 bucket.</p>\n<p>If the images don’t match or a duplicate registration is detected, the user receives a login failure. Login failures can be logged using an <a href=\"http://aws.amazon.com/cloudwatch\" target=\"_blank\">Amazon CloudWatch</a> event, and actions can be triggered using <a href=\"http://aws.amazon.com/sns\" target=\"_blank\">Amazon Simple Notification Service</a> (Amazon SNS) to notify security operations for monitoring and tracking failed logins. 
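One possible notification hook, sketched with boto3 (the topic ARN and message schema here are assumptions for illustration, not part of the sample repo):

```python
import json


def build_login_failure_message(user_id, reason):
    """Serialize a login-failure event for publication to an SNS topic."""
    return json.dumps({"event": "LOGIN_FAILURE", "userId": user_id, "reason": reason})


def notify_security_ops(user_id, reason, topic_arn):
    """Publish a login-failure notification to the given (placeholder) topic."""
    import boto3  # imported here so the message builder has no dependencies

    sns = boto3.client("sns")
    sns.publish(
        TopicArn=topic_arn,
        Subject="Identity verification login failure",
        Message=build_login_failure_message(user_id, reason),
    )
```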
For more information, refer to <a href=\"https://docs.aws.amazon.com/sns/latest/dg/sns-monitoring-using-cloudwatch.html\" target=\"_blank\">Monitoring Amazon SNS topics using CloudWatch</a>.</p>\n<h4><a id=\"Authenticate_returning_user_81\"></a><strong>Authenticate returning user</strong></h4>\n<p>Another common flow is an existing or returning user login. In this flow, the user’s face (selfie) is checked against a previously registered face. Typical steps in this process include capturing the user’s face (selfie), checking the selfie image quality, and searching for and comparing the selfie against the face database. The following diagram shows a possible flow.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/d25cecdf305a4bc38018b272a4161928_image.png\" alt=\"image.png\" /></p>\n<p>The following image illustrates the workflow for authenticating an existing user.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/a125e6f91aa743a785ab7fbe8807e30a_image.png\" alt=\"image.png\" /></p>\n<p>This Step Functions workflow calls three functions: <a href=\"https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/detect-faces\" target=\"_blank\">detect-faces</a>, <a href=\"https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/compare-faces/handler.py\" target=\"_blank\">compare-faces</a>, and <a href=\"https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/search-faces\" target=\"_blank\">search-faces</a>. 
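The search step of this flow can be sketched with the SearchFacesByImage API (a minimal sketch; the collection ID and similarity threshold are placeholder assumptions, not the sample repo’s values):

```python
def best_face_match(search_response, min_similarity=95.0):
    """Return the FaceId of the best match above min_similarity, or None."""
    matches = [
        m for m in search_response.get("FaceMatches", [])
        if m["Similarity"] >= min_similarity
    ]
    if not matches:
        return None
    best = max(matches, key=lambda m: m["Similarity"])
    return best["Face"]["FaceId"]


def authenticate_selfie(selfie_bytes, collection_id="riv-collection"):
    """Search the (placeholder) collection for a face matching the selfie."""
    import boto3  # imported here so the match helper has no dependencies

    rekognition = boto3.client("rekognition")
    response = rekognition.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": selfie_bytes},
        FaceMatchThreshold=95.0,  # illustrative threshold
        MaxFaces=1,
    )
    return best_face_match(response)
```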
After the <a href=\"https://github.com/aws-samples/rekognition-identity-verification/tree/main/src/rekognition/detect-faces\" target=\"_blank\">detect-faces</a> function verifies that the captured face image is valid, the <a href=\"https://github.com/aws-samples/rekognition-identity-verification/blob/main/src/rekognition/compare-faces/handler.py\" target=\"_blank\">compare-faces</a> function checks the link in the DynamoDB table for a face image in the S3 bucket that matches an existing user. If a match is found, the user authenticates successfully. If a match isn’t found, the search-faces function is called to search for the face image in the collections. The user is verified and the authentication process completes if their face image exists in the collections. Otherwise, the user’s access is denied.</p>\n<h3><a id=\"Prerequisites_94\"></a><strong>Prerequisites</strong></h3>\n<p>Before you get started, complete the following prerequisites:</p>\n<ol>\n<li><a href=\"https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/\" target=\"_blank\">Create an AWS account</a>.</li>\n<li>Install the <a href=\"http://aws.amazon.com/cli\" target=\"_blank\">AWS Command Line Interface</a> (AWS CLI) version 2 on your local machine. 
For instructions, refer to <a href=\"https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html\" target=\"_blank\">Installing or updating the latest version of the AWS CLI</a>.</li>\n<li><a href=\"https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html\" target=\"_blank\">Set up the AWS CLI</a>.</li>\n<li><a href=\"https://nodejs.org/en/download/\" target=\"_blank\">Install Node.js</a> on your local machine.</li>\n<li>Clone the sample repo on your local machine:</li>\n</ol>\n<pre><code class=\"lang-\">\ngit clone https://github.com/aws-samples/rekognition-identity-verification.git\n\n</code></pre>\n<h3><a id=\"Deploy_the_solution_110\"></a><strong>Deploy the solution</strong></h3>\n<p>Choose the appropriate CloudFormation stack to provision the solution in your AWS account in your preferred Region. This solution deploys API Gateway integrated with Step Functions and Amazon Rekognition APIs to run the identity verification workflows.</p>\n<p>Choosing one of the following launch buttons provisions the solution into your AWS account in the corresponding Region.</p>\n<p><a href=\"https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/template?stackName=Riv-Prod&amp;templateURL=https://aws-ml-blog.s3.amazonaws.com/artifacts/rekognition-identity-verification-solution/Riv-Prod.template.json\" target=\"_blank\"><img src=\"https://dev-media.amazoncloud.cn/055c6ac1f8e843bdb38b0064bdd65b56_image.png\" alt=\"image.png\" /></a> N. 
Virginia (<code>us-east-1</code>)</p>\n<p><a href=\"https://us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/create/template?stackName=Riv-Prod&amp;templateURL=https://aws-ml-blog.s3.amazonaws.com/artifacts/rekognition-identity-verification-solution/Riv-Prod.template.json\" target=\"_blank\"><img src=\"https://dev-media.amazoncloud.cn/1ea4ba59c765499dad7363c3f662eba5_image.png\" alt=\"image.png\" /></a> Oregon (<code>us-west-2</code>)</p>\n<p>Run the following steps on your local machine to deploy the front-end application:</p>\n<pre><code class=\"lang-\">\ncd rekognition-identity-verification \n./fe-deployment.sh\n\n</code></pre>\n<h4><a id=\"Invoke_the_web_UI_130\"></a><strong>Invoke the web UI</strong></h4>\n<p>The web portal is deployed with Amplify. On the Amplify console, locate the hosted web application environment and the URL. Copy the URL and access it from your browser.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/6bb0632b1ad9411bb65398504204a38a_image.png\" alt=\"image.png\" /></p>\n<h4><a id=\"Register_a_new_user_using_a_face_selfie_137\"></a><strong>Register a new user using a face selfie</strong></h4>\n<p>Register yourself as a user with the following steps:</p>\n<ol>\n<li>Open the web URL provided by Amplify.</li>\n<li>Choose <strong>Register</strong>.</li>\n<li>Enable your camera and capture a face image.</li>\n<li>Enter your user name and details.</li>\n<li>Choose <strong>Signup</strong> to register your account.</li>\n</ol>\n<p><img src=\"https://dev-media.amazoncloud.cn/66c4e70d4fcd4408adf54c736c87d2d9_image.png\" alt=\"image.png\" /></p>\n<h4><a id=\"Authenticate_returning_user_151\"></a><strong>Authenticate returning user</strong></h4>\n<p>After you’re registered, you log in using your face ID as an authentication mechanism.</p>\n<ol>\n<li>Open the web URL provided by Amplify.</li>\n<li>Capture your face ID.</li>\n<li>Enter your user ID.</li>\n<li>Choose <strong>Login</strong>.</li>\n</ol>\n<p><img 
src=\"https://dev-media.amazoncloud.cn/654f141497dc4e26bea3c856593a0e8e_image.png\" alt=\"image.png\" /></p>\n<p>You get a “Login successful” message after your face ID is verified against the registration image.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/cfb6ec2d03734a2dba14b63c9d7b6ebd_image.png\" alt=\"image.png\" /></p>\n<h4><a id=\"Register_a_new_user_after_face_match_against_an_ID_card_with_ID_card_data_extraction_170\"></a><strong>Register a new user after face match against an ID card with ID card data extraction</strong></h4>\n<p>To test user registration with an ID, complete the following steps:</p>\n<ol>\n<li>Open the web URL provided by Amplify.</li>\n<li>Choose <strong>Register with ID</strong>.</li>\n<li>Enable your camera and capture a face image.</li>\n<li>Drag and drop your ID card.</li>\n<li>Choose <strong>Register</strong>.</li>\n</ol>\n<p><img src=\"https://dev-media.amazoncloud.cn/69fb8bc4689f427b8c35ab0702277968_image.png\" alt=\"image.png\" /></p>\n<p>The following screenshot shows an example. The application supports ID card images of up to 256 KB.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/6a2d184aaea14c69b9383db511345737_image.png\" alt=\"image.png\" /></p>\n<p>You receive a “Successfully Registered User” message.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/ed8982c8e9344ef0b99edacff3afb090_image.png\" alt=\"image.png\" /></p>\n<h3><a id=\"Clean_up_196\"></a><strong>Clean up</strong></h3>\n<p>To prevent accruing additional charges in your AWS account, delete the resources you provisioned by navigating to the AWS CloudFormation console and deleting the <code>Riv-Prod</code> stack.</p>\n<p>Deleting the stack doesn’t delete the S3 bucket you created. This bucket stores all the face images. 
If you want to delete the S3 bucket, navigate to the Amazon S3 console, empty the bucket, and then confirm that you want to permanently delete it.</p>\n<h3><a id=\"Conclusion_203\"></a><strong>Conclusion</strong></h3>\n<p>Amazon Rekognition makes it easy to add image analysis to your identity verification applications using proven, highly scalable deep learning technology that requires no ML expertise to use. Amazon Rekognition <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/face-feature-differences.html\" target=\"_blank\">provides face detection and comparison</a> capabilities. With a combination of the <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_DetectFaces.html\" target=\"_blank\">DetectFaces</a>, <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_CompareFaces.html\" target=\"_blank\">CompareFaces</a>, <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_IndexFaces.html\" target=\"_blank\">IndexFaces</a>, <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/API_SearchFacesByImage.html\" target=\"_blank\">SearchFacesByImage</a>, <a href=\"https://docs.aws.amazon.com/rekognition/latest/dg/text-detection.html\" target=\"_blank\">DetectText</a>, and <a href=\"https://docs.aws.amazon.com/textract/latest/dg/API_AnalyzeID.html\" target=\"_blank\">AnalyzeID</a> APIs, you can implement the common flows around new user registration and existing user logins.</p>\n<p>Amazon Rekognition collections provide a method to store information about detected faces in server-side containers. You can then use the facial information stored in a collection to search for known faces in images. When using collections, you don’t need to store original photos after you index faces in the collection. Amazon Rekognition collections don’t persist actual images. 
Instead, the underlying detection algorithm detects the faces in the input image, extracts facial features into a feature vector for each face, and stores them in the collection.</p>\n<p>To start your journey towards identity verification, visit <a href=\"https://aws.amazon.com/rekognition/identity-verification/\" target=\"_blank\">Identity Verification using Amazon Rekognition</a>.</p>\n<h4><a id=\"About_the_authors_212\"></a><strong>About the authors</strong></h4>\n<p><img src=\"https://dev-media.amazoncloud.cn/fcc9554da9bb4b96b4bf401ee473ee1a_image.png\" alt=\"image.png\" /></p>\n<p><strong>Vineet Kacchawaha</strong> is a Solutions Architect at AWS with expertise in machine learning. He is responsible for helping customers architect scalable, secure, and cost-effective workloads on AWS.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/b9a6f1d36cfe452db59612315b4ed415_image.png\" alt=\"image.png\" /></p>\n<p><strong>Ramesh Thiagarajan</strong> is a Senior Solutions Architect based out of San Francisco. He holds a Bachelor of Science in Applied Sciences and a master’s in Cyber Security. He specializes in cloud migration, cloud security, compliance, and risk management. Outside of work, he is a passionate gardener and has an avid interest in real estate and home improvement projects.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/4745bce79979473193bf26f5b78f570f_image.png\" alt=\"image.png\" /></p>\n<p><strong>Amit Gupta</strong> is an AI Services Solutions Architect at AWS. He is passionate about enabling customers with well-architected machine learning solutions at scale.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/af60f6e7a3a84473b90696b1cec1aa04_image.png\" alt=\"image.png\" /></p>\n<p><strong>Tim Murphy</strong> is a Senior Solutions Architect for AWS, working with enterprise financial services customers to build cloud-centric business solutions. 
He has spent the last decade working with startups, nonprofits, commercial enterprises, and government agencies, deploying infrastructure at scale. In his spare time, when he isn’t tinkering with technology, you’ll most likely find him in far-flung areas of the earth hiking mountains, surfing waves, or biking through a new city.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/4c3e0e727c2f44178f442d73af31581e_image.png\" alt=\"image.png\" /></p>\n<p><strong>Nate Bachmeier</strong> is an AWS Senior Solutions Architect who nomadically explores New York, one cloud integration at a time. He specializes in migrating and modernizing applications. Besides this, Nate is a full-time student and has two kids.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/63b171bdf77a407bae30e8c043bf9430_image.png\" alt=\"image.png\" /></p>\n<p><strong>Jessie-Lee Fry</strong> is a Senior AI/ML Specialist with a focus on computer vision at AWS. She helps organizations leverage machine learning and AI to combat fraud and drive innovation on behalf of their customers. Outside of work, she enjoys spending time with her family, traveling, and reading about Responsible AI.</p>\n"}