# Build a news-based real-time alert system with Twitter, Amazon SageMaker, and Hugging Face

{"value":"Today, social media is a huge source of news. Users rely on platforms like Facebook and Twitter to consume news. For certain industries such as insurance companies, first respondents, law enforcement, and government agencies, being able to quickly process news about relevant events occurring can help them take action while these events are still unfolding.\n\nIt’s not uncommon for organizations trying to extract value from text data to look for a solution that doesn’t involve the training of a complex NLP (natural language processing) model. For those organizations, using a pre-trained NLP model is more practical. Furthermore, if the chosen model doesn’t satisfy their success metrics, organizations want to be able to easily pick another model and reassess.\n\nAt present, it’s easier than ever to extract information from text data thanks to the following:\n\n\n- The rise of state-of-the art, general-purpose NLP architectures such as transformers\n- The ability that developers and data scientists have to quickly build, train, and deploy machine learning (ML) models at scale on the cloud with services like [Amazon SageMaker](https://aws.amazon.com/sagemaker/)\n- The availability of thousands of pre-trained NLP models in hundreds of languages and with support for multiple frameworks provided by the community in platforms like [Hugging Face Hub](https://huggingface.co/models)\n\nIn this post, we show you how to build a real-time alert system that consumes news from Twitter and classifies the tweets using a pre-trained model from the Hugging Face Hub. You can use this solution for zero-shot classification, meaning you can classify tweets at virtually any set of categories, and deploy the model with SageMaker for real-time inference.\n\nAlternatively, if you’re looking for insights into your customer’s conversations and deepen brand awareness by analyzing social media interactions, we encourage you to check out the [AI-Driven Social Media Dashboard](https://aws.amazon.com/solutions/implementations/ai-driven-social-media-dashboard/). The solution uses [Amazon Comprehend](https://aws.amazon.com/comprehend/), a fully managed NLP service that uncovers valuable insights and connections in text without requiring machine learning experience.\n\n\n### **Zero-shot learning**\n\nThe fields of NLP and natural language understanding (NLU) have rapidly evolved to address use cases involving text classification, question answering, summarization, text generation, and more. This evolution has been possible, in part, thanks to the rise of state-of-the art, general-purpose architectures such as transformers, but also the availability of more and better-quality text corpora available for the training of such models.\n\nThe transformer architecture is a complex neural network that requires domain expertise and a huge amount of data in order to be trained from scratch. A common practice is to take a pre-trained state-of-the-art transformer like BERT, RoBERTa, T5, GPT-2, or DistilBERT and fine-tune (transfer learning) the model to a specific use case.\n\nNevertheless, even performing transfer learning on a pre-trained NLP model can often be a challenging task, requiring large amounts of labeled text data and a team of experts to curate the data. 
Hugging Face is an AI company that manages an open-source platform (Hugging Face Hub) with thousands of pre-trained NLP models (transformers) in more than 100 different languages and with support for different frameworks such as TensorFlow and PyTorch. The transformers library helps developers and data scientists get started with complex NLP and NLU tasks such as classification, information extraction, question answering, summarization, translation, and text generation.

[AWS and Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) have been collaborating to simplify and accelerate the adoption of NLP models. A set of Deep Learning Containers (DLCs) for training and inference in PyTorch or TensorFlow, as well as Hugging Face estimators and predictors for the SageMaker Python SDK, are now available. These capabilities help developers with all levels of expertise get started with NLP easily.

### **Overview of solution**

We provide a working solution that fetches tweets in real time from selected Twitter accounts. For the demonstration of our solution, we use three accounts, Amazon Web Services ([@awscloud](https://twitter.com/awscloud)), AWS Security ([@AWSSecurityInfo](https://twitter.com/AWSSecurityInfo)), and Amazon Science ([@AmazonScience](https://twitter.com/AmazonScience)), and classify their content into one of the following categories: security, database, compute, storage, and machine learning. If the model returns a category with a confidence score greater than 40%, a notification is sent.

In the following example, the model classified a tweet from Amazon Web Services into the machine learning category, with a confidence score of 97%, generating an alert.

![image.png](https://dev-media.amazoncloud.cn/e5529172b529481bb729d83d0612ff72_image.png)

The solution relies on a Hugging Face pre-trained transformer model (from the Hugging Face Hub) to classify tweets based on a set of labels that are provided at inference time; the model doesn’t need to be trained. The following screenshots show more examples and how they were classified.

![image.png](https://dev-media.amazoncloud.cn/d9ccf85573f74fe8aae8d675a1a826e0_image.png)

We encourage you to try the solution for yourself. Simply download the source code from the [GitHub repository](https://github.com/aws-samples/tweet-classification) and follow the deployment instructions in the README file.

### **Solution architecture**

The solution keeps an open connection to Twitter’s endpoint and, when a new tweet arrives, sends a message to a queue. A consumer reads messages from the queue, calls the classification endpoint, and, depending on the results, notifies the end user.

The following is the architecture diagram of the solution.

![image.png](https://dev-media.amazoncloud.cn/52f35e61fd33417abb6b1e12f056e4ca_image.png)

The solution workflow consists of the following components:

1. The solution relies on Twitter’s Stream API to get tweets that match the configured rules (tweets from the accounts of interest) in real time. To do so, an application running inside a container keeps an open connection to Twitter’s endpoint (see the sketch after this list). Refer to [Twitter API](https://developer.twitter.com/en/docs/twitter-api) for more details.
2. The container runs on [Amazon Elastic Container Service](https://aws.amazon.com/ecs) (Amazon ECS), a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. A single task runs on a serverless infrastructure managed by [AWS Fargate](https://aws.amazon.com/fargate/).
3. The Twitter Bearer token is securely stored in [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html), a capability of [AWS Systems Manager](https://aws.amazon.com/systems-manager/) that provides secure, hierarchical storage for configuration data and secrets. The container image is hosted on [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/) (Amazon ECR), a fully managed container registry offering high-performance hosting.
4. Whenever a new tweet arrives, the container application puts the tweet into an [Amazon Simple Queue Service](https://aws.amazon.com/sqs/) (Amazon SQS) queue. Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
5. The logic of the solution resides in an [AWS Lambda](https://aws.amazon.com/lambda/) function. Lambda is a serverless, event-driven compute service. The function consumes new tweets from the queue and classifies them by calling an endpoint.
6. The endpoint relies on a Hugging Face model and is hosted on SageMaker. The endpoint runs the inference and outputs the class of the tweet.
7. Depending on the classification, the function generates a notification through [Amazon Simple Notification Service](https://aws.amazon.com/sns/) (Amazon SNS), a fully managed messaging service. You can subscribe to the SNS topic, and multiple destinations can receive that notification (see [Amazon SNS event destinations](https://docs.aws.amazon.com/sns/latest/dg/sns-event-destinations.html)). For instance, you can deliver the notification to inboxes as email messages (see [Email notifications](https://docs.aws.amazon.com/sns/latest/dg/sns-email-notifications.html)).
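To illustrate components 1 and 4, the following is a simplified sketch of what the containerized application does: it holds an open connection to Twitter’s filtered stream endpoint and forwards each incoming tweet to the SQS queue. The `QUEUE_URL` and `BEARER_TOKEN` environment variables are placeholders; the actual application in the GitHub repository reads the token from Parameter Store and handles reconnection and error cases:

```
import json
import os

import boto3
import requests

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]        # placeholder: URL of the SQS queue
BEARER_TOKEN = os.environ["BEARER_TOKEN"]  # in the real solution, fetched from Parameter Store

# Keep an open connection to Twitter's filtered stream (API v2)
response = requests.get(
    "https://api.twitter.com/2/tweets/search/stream",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    stream=True,
)

for line in response.iter_lines():
    if not line:  # skip keep-alive newlines
        continue
    tweet = json.loads(line)
    # Forward the raw tweet to the queue for the Lambda consumer
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(tweet))
```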
### **Cost analysis**

The following table provides an example monthly cost breakdown for deploying this solution with the default parameters in the US East (N. Virginia) Region, excluding the free tier and assuming that 1,000 tweets are processed per day:

![image.png](https://dev-media.amazoncloud.cn/83c41043fbc446e989111ae3061bfcbf_image.png)

The SageMaker endpoint, used to perform the machine learning inference, is the most expensive piece of the solution. Depending on how many tweets you want to process at the same time, you may need to increase the capacity by adding more SageMaker instances of the same type or by changing the instance type to a more powerful one. You could also use the SageMaker automatic scaling feature. These changes incur additional costs.
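If you choose automatic scaling, you register the endpoint’s production variant as a scalable target and attach a target-tracking policy. The following is a minimal sketch; the endpoint name `tweet-classifier`, the default variant name `AllTraffic`, and the capacity and target values are all illustrative assumptions:

```
import boto3

autoscaling = boto3.client("application-autoscaling")

# Illustrative names; adjust to your deployed endpoint and variant
resource_id = "endpoint/tweet-classifier/variant/AllTraffic"

autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

autoscaling.put_scaling_policy(
    PolicyName="tweet-classifier-scaling",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        # Add instances when each one handles more than ~100 invocations per minute
        "TargetValue": 100.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
    },
)
```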
### **Deploy Hugging Face models with SageMaker**

You can select any of the over 10,000 publicly available models from the [Hugging Face Model Hub](https://huggingface.co/models) and deploy them with SageMaker by using [Hugging Face Inference DLCs](https://huggingface.co/docs/sagemaker/main#deep-learning-containers).

When using [AWS CloudFormation](http://aws.amazon.com/cloudformation), you select one of the publicly available [Hugging Face Inference Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-inference-containers) and configure the model and the task. This solution uses the [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) model and the zero-shot-classification task, but you can choose any of the models under **Zero-Shot Classification** on the Hugging Face Model Hub. You configure those by setting the HF_MODEL_ID and HF_TASK environment variables in your CloudFormation template, as in the following code:

```
SageMakerModel:
  Type: AWS::SageMaker::Model
  Properties:
    ExecutionRoleArn: !GetAtt SageMakerModelRole.Arn
    PrimaryContainer:
      Image: 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-inference:1.7-transformers4.6-cpu-py36-ubuntu18.04
      Environment:
        HF_MODEL_ID: facebook/bart-large-mnli
        HF_TASK: zero-shot-classification
        SAGEMAKER_CONTAINER_LOG_LEVEL: 20
        SAGEMAKER_REGION: us-east-1
```

Alternatively, if you’re not using AWS CloudFormation, you can achieve the same results with a few lines of code, as shown in the sketch below. Refer to [Deploy models to Amazon SageMaker](https://huggingface.co/docs/sagemaker/inference#deploy-a-model-from-the-hub) for more details.
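For example, with the SageMaker Python SDK, the deployment can look like the following sketch. The framework versions mirror the container image in the CloudFormation snippet above; the instance type is an illustrative choice:

```
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

# Same model/task configuration as the HF_MODEL_ID and HF_TASK variables above
hub = {
    "HF_MODEL_ID": "facebook/bart-large-mnli",
    "HF_TASK": "zero-shot-classification",
}

huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
)

# Instance type is illustrative; pick one that fits your throughput needs
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)
```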
To classify the content, you just call the SageMaker endpoint. The following is a Python code snippet:

```
import json
import os

import boto3

# SageMaker runtime client used to invoke the endpoint
sagemaker = boto3.client('sagemaker-runtime')

endpoint_name = os.environ['ENDPOINT_NAME']
# Comma-separated candidate labels, for example:
# 'security,database,compute,storage,machine learning'
# (the environment variable name is assumed here)
labels = os.environ['LABELS'].split(',')

data = {
    'inputs': tweet,  # the text of the tweet to classify
    'parameters': {
        'candidate_labels': labels,
        'multi_class': False
    }
}

response = sagemaker.invoke_endpoint(EndpointName=endpoint_name,
                                     ContentType='application/json',
                                     Body=json.dumps(data))

response_body = json.loads(response['Body'].read())
```

Note the False value for the multi_class parameter, which indicates that the probabilities across all classes sum to 1.

### **Solution improvements**

You can enhance the solution proposed here by storing the tweets and the model results. [Amazon Simple Storage Service](https://aws.amazon.com/s3/) (Amazon S3), an object storage service, is one option. You can write tweets, results, and other metadata as JSON objects into an S3 bucket, as sketched below. You can then perform ad hoc queries against that content using [Amazon Athena](https://aws.amazon.com/athena/), an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
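A minimal sketch of persisting each classified tweet follows; the bucket name, key layout, and helper function are illustrative, not part of the solution’s code. Using a date-based key prefix, as here, makes it easy to filter later Athena queries by day:

```
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "my-tweet-classification-results"  # illustrative bucket name

def store_result(tweet: dict, classification: dict) -> None:
    """Write the tweet and its model output to S3 as one JSON object."""
    now = datetime.now(timezone.utc)
    # Date-based prefix so queries can filter by day
    key = f"tweets/{now:%Y/%m/%d}/{tweet['id']}.json"
    record = {
        "tweet": tweet,
        "classification": classification,
        "processed_at": now.isoformat(),
    }
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(record))
```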
You can use the history not only to extract insights but also to train a custom model. You can use Hugging Face support to train a model with your own data on SageMaker. Learn more in [Run training on Amazon SageMaker](https://huggingface.co/docs/sagemaker/train).

### **Real-world use cases**

Customers are already experimenting with Hugging Face models on SageMaker. [Seguros Bolívar](https://www.segurosbolivar.com/), a Colombian financial and insurance company founded in 1939, is an example.

- *“We developed a threat notification solution for customers and insurance brokers. We use Hugging Face pre-trained NLP models to classify tweets from relevant accounts to generate notifications for our customers in near-real time as a prevention strategy to help mitigate claims. A claim occurs because customers are not aware of the level of risk they are exposed to. The solution allows us to generate awareness in our customers, turning risk into something measurable in concrete situations.”*

- Julian Rico, Chief of Research and Knowledge at Seguros Bolívar

Seguros Bolívar worked with AWS to re-architect their solution; it now relies on SageMaker and resembles the one described in this post.

### **Conclusion**

Zero-shot classification is ideal when you have little data to train a custom text classifier or when you can’t afford to train a custom NLP model. For specialized use cases, when text is based on specific words or terms, it’s better to go with a supervised classification model based on a custom training set.

In this post, we showed you how to build a news classifier using a Hugging Face zero-shot model on AWS. We used Twitter as our news source, but you can choose a news source that is more suitable to your specific needs. Furthermore, you can easily change the model: just specify your chosen model in the CloudFormation template.

For the source code, refer to the [GitHub repository](https://github.com/aws-samples/tweet-classification). It includes the full setup instructions. You can clone, change, deploy, and run it yourself. You can also use it as a starting point and customize the categories and the alert logic, or build another solution for a similar use case.

Please give it a try, and let us know what you think. As always, we’re looking forward to your feedback. You can send it to your usual AWS Support contacts or post it in the [AWS Forum for SageMaker](https://forums.aws.amazon.com/forum.jspa?forumID=285).

#### **About the authors**

![image.png](https://dev-media.amazoncloud.cn/0e3951cd65a644ddb267859a45e191b1_image.png)

**David Laredo** is a Prototyping Architect at AWS Envision Engineering in LATAM, where he has helped develop multiple machine learning prototypes. Previously, he worked as a Machine Learning Engineer and has been doing machine learning for over 5 years. His areas of interest are NLP, time series, and end-to-end ML.

![image.png](https://dev-media.amazoncloud.cn/3c63d7370c42406ebe80dbc8552ac920_image.png)

**Rafael Werneck** is a Senior Prototyping Architect at AWS Envision Engineering, based in Brazil. Previously, he worked as a Software Development Engineer on Amazon.com.br and Amazon RDS Performance Insights.

![image.png](https://dev-media.amazoncloud.cn/bef81695dc824cc692eb26f432a2f497_image.png)

**Vikram Elango** is an AI/ML Specialist Solutions Architect at Amazon Web Services, based in Virginia, USA. Vikram helps financial and insurance industry customers with design and thought leadership to build and deploy machine learning applications at scale. He is currently focused on natural language processing, responsible AI, inference optimization, and scaling ML across the enterprise. In his spare time, he enjoys traveling, hiking, cooking, and camping with his family.