Amazon Web Services NLP Monthly Newsletter, August 2022

{"value":"#### **NLP@AWS Customer Success Story**\n**[How Mantium achieves low-latency GPT-J inference](https://aws.amazon.com/blogs/machine-learning/how-mantium-achieves-low-latency-gpt-j-inference-with-deepspeed-on-amazon-sagemaker/)**\n\n![image.png](https://dev-media.amazoncloud.cn/87dd233b957247a8868b2ab795c51fad_image.png)\n\nMantium helps customers build AI applications that incorporate state of the art language models in minutes and managing them at scale through their low-code cloud platform. Mantium supports access to model APIs from AI providers as well as open source models like GPT-J that are trained using SageMaker distributed model parallel library. To ensure users get best-in-class performance from such open source models, Mantium used DeepSpeed to optimize inference of GPT-J deployed on SageMaker inference endpoints.\n\n**[How eMagazines utilizes Amazon Polly to voice articles for school-aged kids](https://aws.amazon.com/blogs/machine-learning/how-emagazines-utilizes-amazon-polly-to-voice-articles-for-school-aged-kids/)**\n\nResearch has shown that developing brains need to hear language even before learning to talk and is a pre-requisite for learning to read. eMagazines used [Amazon Polly](https://aws.amazon.com/cn/polly/?trk=cndc-detail) to help TIME For Kids to automate audio synthesis as content were added dynamically on a daily basis without involvement of the audio artist. With [Amazon Polly](https://aws.amazon.com/cn/polly/?trk=cndc-detail), they were also able to support new features like text highlight and scrolling as the article is read aloud as well as collecting and analysing usage data in real time.\n\n#### **AI** **Language Services**\n**[Break through language barriers with AWS AI Services](https://aws.amazon.com/blogs/machine-learning/break-through-language-barriers-with-amazon-transcribe-amazon-translate-and-amazon-polly/)**\n\n![image.png](https://dev-media.amazoncloud.cn/a00a562dcca64ae5a46386c83d85783c_image.png)\n\nThe ability to directly communicate in a multi-lingual context on demand without the need of a human translator can be applied in many areas like media, medicine, education, hospitality and many more. This blog post will show how you can combine three fully managed AWS AI services ([Amazon Transcribe](https://aws.amazon.com/cn/transcribe/?trk=cndc-detail), [Amazon Translate](https://aws.amazon.com/cn/translate/?trk=cndc-detail), and [Amazon Polly](https://aws.amazon.com/cn/polly/?trk=cndc-detail)) to create a near-real-time speech-to-speech translator solution that can quickly translate a source speaker’s live voice input into a spoken, accurate, translated target language.\n\n**[Using NLP to gain insights from customer tickets](https://aws.amazon.com/blogs/machine-learning/how-service-providers-can-use-natural-language-processing-to-gain-insights-from-customer-tickets-with-amazon-comprehend/)**\n\n![image.png](https://dev-media.amazoncloud.cn/d1ca944f93c941d297fa7a24ea662cbc_image.png)\n\n[Amazon Comprehend](https://aws.amazon.com/cn/comprehend/?trk=cndc-detail) is a natural language processing (NLP) service that uses machine learning (ML) to uncover valuable insights and connections in text. This blog shares how Amazon Managed Services (AMS) used [Amazon Comprehend](https://aws.amazon.com/cn/comprehend/?trk=cndc-detail)'s custom classifications to categorise inbound requests by resource and operation type according to customer’s description of the issue. 
**[How eMagazines utilizes Amazon Polly to voice articles for school-aged kids](https://aws.amazon.com/blogs/machine-learning/how-emagazines-utilizes-amazon-polly-to-voice-articles-for-school-aged-kids/)**

Research has shown that developing brains need to hear language even before learning to talk, and that hearing language is a prerequisite for learning to read. eMagazines used [Amazon Polly](https://aws.amazon.com/cn/polly/?trk=cndc-detail) to help TIME for Kids automate audio synthesis as content was added daily, without involving a voice artist. With [Amazon Polly](https://aws.amazon.com/cn/polly/?trk=cndc-detail), they were also able to support new features such as text highlighting and scrolling as an article is read aloud, and to collect and analyse usage data in real time.
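For a sense of the underlying calls (an illustrative sketch, not eMagazines' pipeline), the snippet below synthesizes a short piece of text to MP3 and also requests word-level speech marks, the Amazon Polly feature that enables read-along highlighting and scrolling. The voice, text, and file name are placeholders.

```python
# Illustrative sketch of the Amazon Polly calls behind audio synthesis and
# read-along highlighting (not eMagazines' actual pipeline).
# Requires boto3 and configured AWS credentials.
import boto3

polly = boto3.client("polly")
text = "Scientists discovered a new species of frog in the rainforest."  # placeholder article text

# 1) Synthesize the text to an MP3 audio stream.
audio = polly.synthesize_speech(Text=text, OutputFormat="mp3", VoiceId="Joanna")
with open("article.mp3", "wb") as f:
    f.write(audio["AudioStream"].read())

# 2) Request word-level speech marks: per-word timestamps a player can use to
#    highlight and scroll the text as it is spoken.
marks = polly.synthesize_speech(
    Text=text,
    OutputFormat="json",
    VoiceId="Joanna",
    SpeechMarkTypes=["word"],
)
for line in marks["AudioStream"].read().decode("utf-8").splitlines():
    print(line)  # one JSON object per word, e.g. {"time":6,"type":"word",...,"value":"Scientists"}
```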
#### **AI Language Services**
**[Break through language barriers with AWS AI Services](https://aws.amazon.com/blogs/machine-learning/break-through-language-barriers-with-amazon-transcribe-amazon-translate-and-amazon-polly/)**

![image.png](https://dev-media.amazoncloud.cn/a00a562dcca64ae5a46386c83d85783c_image.png)

The ability to communicate directly across languages on demand, without a human translator, has applications in many areas, including media, medicine, education, and hospitality. This blog post shows how you can combine three fully managed AWS AI services ([Amazon Transcribe](https://aws.amazon.com/cn/transcribe/?trk=cndc-detail), [Amazon Translate](https://aws.amazon.com/cn/translate/?trk=cndc-detail), and [Amazon Polly](https://aws.amazon.com/cn/polly/?trk=cndc-detail)) to create a near-real-time speech-to-speech translator that can quickly turn a source speaker's live voice input into accurate, spoken output in the target language.
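The post puts Amazon Transcribe streaming in front of everything; as a simplified sketch of the remaining translate-then-speak steps, assuming the source speech has already been transcribed to text, the snippet below chains Amazon Translate and Amazon Polly. The language pair, voice, and sample sentence are assumptions.

```python
# Simplified sketch of the translate-then-speak half of a speech-to-speech
# pipeline (the blog post adds Amazon Transcribe streaming in front of this).
import boto3

translate = boto3.client("translate")
polly = boto3.client("polly")

# Pretend this text came from Amazon Transcribe.
transcribed_text = "Where is the nearest train station?"

# Translate the recognised English text into Spanish.
result = translate.translate_text(
    Text=transcribed_text,
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
spanish_text = result["TranslatedText"]

# Speak the translation with a Spanish neural voice.
speech = polly.synthesize_speech(
    Text=spanish_text,
    OutputFormat="mp3",
    VoiceId="Lupe",
    Engine="neural",
)
with open("translated.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
print(spanish_text)
```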
**[Using NLP to gain insights from customer tickets](https://aws.amazon.com/blogs/machine-learning/how-service-providers-can-use-natural-language-processing-to-gain-insights-from-customer-tickets-with-amazon-comprehend/)**

![image.png](https://dev-media.amazoncloud.cn/d1ca944f93c941d297fa7a24ea662cbc_image.png)

[Amazon Comprehend](https://aws.amazon.com/cn/comprehend/?trk=cndc-detail) is a natural language processing (NLP) service that uses machine learning (ML) to uncover valuable insights and connections in text. This blog shares how Amazon Managed Services (AMS) used [Amazon Comprehend](https://aws.amazon.com/cn/comprehend/?trk=cndc-detail) custom classification to categorise inbound requests by resource and operation type based on the customer's description of the issue. This allowed AMS to build workflows that recommend automated solutions for the issues behind the tickets and to generate classification analysis reports using Amazon QuickSight.
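Once a custom classifier has been trained and exposed through a real-time endpoint, categorising a new ticket is a single API call. The sketch below is illustrative rather than AMS's implementation; the endpoint ARN, account ID, and ticket text are placeholders.

```python
# Illustrative sketch: classify an inbound ticket with an Amazon Comprehend
# custom-classification endpoint (not AMS's implementation). The endpoint ARN
# is a placeholder for a classifier you have already trained and deployed.
import boto3

comprehend = boto3.client("comprehend")

ticket_text = "EC2 instance i-0123456789abcdef0 is unreachable after last night's patching window."

response = comprehend.classify_document(
    Text=ticket_text,
    EndpointArn="arn:aws:comprehend:us-east-1:111122223333:document-classifier-endpoint/ticket-classifier",
)

# Each candidate class comes back with a confidence score; route on the top one.
top = max(response["Classes"], key=lambda c: c["Score"])
print(top["Name"], round(top["Score"], 3))
```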
**[Enable your contact centre with live call analytics and agent assist using AI Services](https://aws.amazon.com/blogs/machine-learning/live-call-analytics-and-agent-assist-for-your-contact-center-with-amazon-language-ai-services/)**

![image.png](https://dev-media.amazoncloud.cn/b24ff21dd2764fd59787fdd61f068566_image.png)

One way to raise the bar on caller experience in your contact center is to give supervisors the ability to assess the quality of caller experiences through call analytics and respond decisively before the call ends. Furthermore, if agents are assisted with proactive, contextual guidance, it can greatly enhance their ability to deliver a great caller experience. This blog shows how to build a solution for live call analytics and real-time agent assist using AWS AI services such as [Amazon Comprehend](https://aws.amazon.com/cn/comprehend/?trk=cndc-detail), [Amazon Lex](https://aws.amazon.com/cn/lex/?trk=cndc-detail), [Amazon Kendra](https://aws.amazon.com/cn/kendra/?trk=cndc-detail), and [Amazon Transcribe](https://aws.amazon.com/cn/transcribe/?trk=cndc-detail).

#### **NLP on SageMaker**
**[Text classification for online conversations with machine learning on AWS](https://aws.amazon.com/blogs/machine-learning/text-classification-for-online-conversations-with-machine-learning-on-aws/)**

The explosion of online conversation in modern digital life has led to widespread non-traditional use of language. One phenomenon is the use of constantly evolving, domain-specific vocabularies. Another is the adoption, both intentional and accidental, of lexical deviations from standard English in such conversations. Traditional NLP techniques do not perform well in analysing these conversations. The authors of this blog post from the Amazon ML Solutions Lab discuss two different modelling approaches for predicting toxicity and subtype labels such as obscene, threat, insult, identity attack, and sexually explicit, and evaluate them on the Jigsaw Unintended Bias in Toxicity Classification dataset.

**[Text summarization with Amazon SageMaker and Hugging Face](https://aws.amazon.com/blogs/machine-learning/text-summarization-with-amazon-sagemaker-and-hugging-face/)**

![image.png](https://dev-media.amazoncloud.cn/e1586d23011447d683f88ec0bc33bacf_image.png)

Modern digital services and communication generate data that is growing at zettabyte scale. Text summarization is a helpful technique for understanding large amounts of text data because it creates a subset of contextually meaningful information from source documents. Hugging Face and AWS have partnered to integrate the Hugging Face libraries into [Amazon SageMaker](https://aws.amazon.com/cn/sagemaker/?trk=cndc-detail) so that developers and data scientists can get started with NLP on AWS more easily. Hugging Face's 400 pretrained text summarization models can be deployed easily using the Hugging Face Transformers summarization pipeline API. This blog post is a good starting point for learning how to quickly experiment with, select, and deploy suitable Hugging Face text summarization models on [Amazon SageMaker](https://aws.amazon.com/cn/sagemaker/?trk=cndc-detail).
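To make the deployment step concrete, here is a minimal sketch, assuming the SageMaker Python SDK and a suitable IAM execution role, that deploys a public Hugging Face summarization checkpoint to a real-time endpoint. The model ID, framework versions, and instance type are illustrative choices, not the blog's exact configuration.

```python
# Minimal sketch: deploy a public Hugging Face summarization model to an
# Amazon SageMaker real-time endpoint (model ID, framework versions, and
# instance type are illustrative, not the blog's exact configuration).
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # or pass an explicit IAM role ARN

hub_config = {
    "HF_MODEL_ID": "sshleifer/distilbart-cnn-12-6",  # public summarization checkpoint
    "HF_TASK": "summarization",
}

model = HuggingFaceModel(
    env=hub_config,
    role=role,
    transformers_version="4.17.0",
    pytorch_version="1.10.2",
    py_version="py38",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

article = (
    "Amazon SageMaker is a fully managed service that lets data scientists and "
    "developers build, train, and deploy machine learning models at scale."
)
print(predictor.predict({"inputs": article}))

predictor.delete_endpoint()  # clean up to avoid ongoing charges
```

The same HF_MODEL_ID / HF_TASK pattern can be reused for other Hub models and tasks, such as text classification.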
**[Build a news-based real-time alert system with Twitter, Amazon SageMaker, and Hugging Face](https://aws.amazon.com/blogs/machine-learning/build-a-news-based-real-time-alert-system-with-twitter-amazon-sagemaker-and-hugging-face/)**

![image.png](https://dev-media.amazoncloud.cn/dfe19d419e344b43a7e094592ad2dcd5_image.png)

For organisations such as insurers, law enforcement, first responders, and government agencies, the ability to process news and social media feeds in near real time makes it possible to respond immediately as events unfold. If you are considering such a use case, this blog post can guide you in building a real-time alert system on AWS that consumes news alerts from social media and classifies them using a pre-trained model from the Hugging Face Hub deployed on [Amazon SageMaker](https://aws.amazon.com/cn/sagemaker/?trk=cndc-detail).

#### **NLP@Community**
**[The World’s Largest Open Multilingual Language Model](https://bigscience.huggingface.co/blog/bloom)**

![image.png](https://dev-media.amazoncloud.cn/dad5002eb31f4ff5aac3e48362ed3e67_image.png)

BLOOM is the first multilingual LLM (large language model) trained completely transparently, with the goal of making LLMs accessible to academia, nonprofits, and smaller research labs. With its 176 billion parameters, BLOOM can generate text in 46 natural languages and 13 programming languages, and for almost all of these languages it is the very first LLM with over 100B parameters. It is hoped that BLOOM will be the seed for a living family of models that will grow through the community in the future.

**[The Curious Case of LaMDA, the AI that Claimed to Be Sentient](https://www.prindleinstitute.org/2022/06/the-curious-case-of-lamda-the-ai-that-claimed-to-be-sentient/)**

![image.png](https://dev-media.amazoncloud.cn/bca1702e27fb4dae928ff321b45152ed_image.png)

In this article, the author examines the suppositions behind the recent controversial claim that Google’s LaMDA may be sentient.