#### **NLP@AWS Customer Success Story**
Measuring customer sentiment in call centres is a huge challenge, especially for large organisations. Accurate call transcripts can help unlock insights such as sentiment, trending issues, and agent effectiveness at resolving calls.

Wix.com expanded visibility of customer conversation sentiment by using Amazon Transcribe, a speech-to-text service, to develop a sentiment analysis system that can effectively determine how users feel throughout an interaction with customer care agents.

Learn more about this AWS customer success story in this blog post: [https://aws.amazon.com/blogs/machine-learning/how-wix-empowers-customer-care-with-ai-capabilities-using-amazon-transcribe/](https://aws.amazon.com/blogs/machine-learning/how-wix-empowers-customer-care-with-ai-capabilities-using-amazon-transcribe/)

![image.png](https://dev-media.amazoncloud.cn/933dab66e836424babf3613956338014_image.png)

#### **AWS AI Language Services**
[**How to approach conversation design with Amazon Lex**](https://aws.amazon.com/blogs/machine-learning/part-3-how-to-approach-conversation-design-with-amazon-lex-building-and-testing/)
In this blog post, you will learn how to draft an interaction model that delivers natural conversational experiences, and how to test and tune your application.

![image.png](https://dev-media.amazoncloud.cn/ada3ed2877df48c5acaad354ecb07feb_image.png)

#### **NLP on Amazon SageMaker**
- [**Detecting NLP Data Drift with SageMaker**](https://aws.amazon.com/blogs/machine-learning/detect-nlp-data-drift-using-custom-amazon-sagemaker-model-monitor/)
NLP models can be an extremely effective tool for extracting information from unstructured text data. When the data used for inference (production data) differs from the data used during model training, we encounter a phenomenon known as data drift. When data drift occurs, the model is no longer well matched to the data in production and will likely perform worse than expected. It is therefore important to continuously monitor the inference data and compare it to the data used during training.

![image.png](https://dev-media.amazoncloud.cn/95291b344a744eb18979a210189d0f21_image.png)

- [**Distributed fine-tuning of a BERT model on SageMaker**](https://aws.amazon.com/blogs/machine-learning/distributed-fine-tuning-of-a-bert-large-model-for-a-question-answering-task-using-hugging-face-transformers-on-amazon-sagemaker/)
Hugging Face has been working closely with AWS to deliver ready-to-use Deep Learning Containers (DLCs) that make training and deploying the latest Transformers models easier and faster than ever. Because features such as SageMaker Data Parallel (SMDP), SageMaker Model Parallel (SMMP), and S3 pipe mode are integrated into these containers, using them drastically reduces the time it takes companies to build Transformers-based ML solutions such as question answering, text and image generation, search optimisation, customer support automation, conversational interfaces, semantic search, and document analysis.

In this post, we focus on the deep integration of the SageMaker distributed training libraries with Hugging Face, which enables data scientists to accelerate the training and fine-tuning of Transformers models from days to hours, all within SageMaker.

![image.png](https://dev-media.amazoncloud.cn/dd65e0573fc44837917a003cf2b698a6_image.png)

#### **NLP@AWS Community Content**
- **[Enterprise-Scale NLP with Hugging Face & Amazon SageMaker](https://www.philschmid.de/hugginface-sagemaker-workshop)**
In October and November, AWS & Hugging Face held a workshop series on “Enterprise-Scale NLP with Hugging Face & Amazon SageMaker”. The series consisted of three parts:

1. Getting Started with Amazon SageMaker: Training your first NLP Transformer model with Hugging Face and deploying it
2. Going Production: Deploying, Scaling & Monitoring Hugging Face Transformer models with Amazon SageMaker
3. MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines

The workshops were recorded and the resources are available on GitHub, so you can now work through the whole series on your own to sharpen your Hugging Face Transformers skills with Amazon SageMaker.

YouTube playlist: [Hugging Face SageMaker Playlist](https://www.youtube.com/playlist?list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ)
GitHub repository: [huggingface-sagemaker-workshop-series](huggingface-sagemaker-workshop-series)

![image.png](https://dev-media.amazoncloud.cn/a745e29d7d8d4fe7839e7267837096b7_image.png)

- **[Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker](https://huggingface.co/blog/gptj-sagemaker)**
GPT-J is one of the most popular open-source alternatives to GPT-3. In this blog post, you will learn how to easily deploy GPT-J using Amazon SageMaker and the Hugging Face Inference Toolkit with a few lines of code, for scalable, reliable, and secure real-time inference on a regular-sized GPU instance with an NVIDIA T4.

- **[Teach an AI Model to Write like Shakespeare — For Free](https://towardsdatascience.com/teach-an-ai-model-to-write-like-shakespeare-for-free-a9e6a307139)**
In this tutorial, you will learn how to train an NLP model to write like Shakespeare in five minutes using [SageMaker Studio Lab](https://studiolab.sagemaker.aws/).

##### **Stay in touch with NLP on AWS**
Our contact: [aws-nlp@amazon.com](mailto:aws-nlp@amazon.com)
Email us about (1) your awesome NLP-on-AWS projects, (2) which posts in this newsletter helped your NLP journey, and (3) anything else you would like us to cover in the newsletter.
Talk to you soon.