Amazon Web Services NLP Monthly Newsletter, March 2022

Artificial Intelligence
Machine Learning
Natural Language Processing
Curated Overseas Content
Amazon Web Services
The curated overseas content section brings together high-quality technical content related to Amazon Web Services from around the world. Note that "AWS" as it appears in this content is an abbreviation of "Amazon Web Services" and is not displayed as a trademark on this site.
{"value":"Hello world. This is the monthly Natural Language Processing(NLP) newsletter covering everything related to NLP at AWS in the month of February. You can find previous month's newsletters here. Feel free to leave comments or share it with your social networks to celebrate this new launch with us. Let's dive in!\n\n### **NLP Customer Success Stories**\n\n++**[How Kustomer utilizes custom Docker images & Amazon SageMaker to build a text classification pipeline](https://aws.amazon.com/blogs/machine-learning/how-kustomer-utilizes-custom-docker-images-amazon-sagemaker-to-build-a-text-classification-pipeline/)**++\nKustomer is the omnichannel SaaS CRM platform reimagining enterprise customer service to deliver standout experiences. Kustomer wanted the ability to rapidly analyze large volumes of support communications for their business customers — customer experience and service organizations — and automate discovery of information such as the end-customer’s intent, customer service issue, and other relevant insights related to the consumer.\n\nIn this blog post, the authors describe how Kustomer uses custom Docker images for SageMaker training and inference, which eases integration and streamlines the process. With this approach, Kustomer’s business customers are automatically classifying over 50k support emails each month with up to 70% accuracy.\n\n### **Updates on AWS Language Services**\n++**[Apply profanity masking in Amazon Translate](https://aws.amazon.com/blogs/machine-learning/apply-profanity-masking-in-amazon-translate/)**++\n[Amazon Translate](https://aws.amazon.com/cn/translate/?trk=cndc-detail) typically chooses clean words for your translation output. But in some situations, you want to prevent words that are commonly considered as profane terms from appearing in the translated output.\n\nYou can now apply profanity masking to both ++[real-time translation](https://docs.aws.amazon.com/translate/latest/dg/sync.html)++ or ++[asynchronous batch processing](https://docs.aws.amazon.com/translate/latest/dg/async.html)++ in [Amazon Translate](https://aws.amazon.com/cn/translate/?trk=cndc-detail). When using [Amazon Translate](https://aws.amazon.com/cn/translate/?trk=cndc-detail) with profanity masking enabled, the five-character sequence ?\$#@\$ is used to mask each profane word or phrase, regardless of the number of characters. [Amazon Translate](https://aws.amazon.com/cn/translate/?trk=cndc-detail) detects each profane word or phrase literally, not contextually.\n\n![image.png](https://dev-media.amazoncloud.cn/2bd7d2c10a1e4d1183ce522fff554d00_image.png)\n\n++**[Control formality in machine translated text using Amazon Translate](https://aws.amazon.com/blogs/machine-learning/control-formality-in-machine-translated-text-using-amazon-translate/)**++\nThis newly released feature in [Amazon Translate](https://aws.amazon.com/cn/translate/?trk=cndc-detail) allows you to customize the level of formality in your translation output. At the time of writing, the formality customization feature is available for six target languages: French, German, Hindi, Italian, Japanese, and Spanish. 
### **Updates on AWS Language Services**

++**[Apply profanity masking in Amazon Translate](https://aws.amazon.com/blogs/machine-learning/apply-profanity-masking-in-amazon-translate/)**++
[Amazon Translate](https://aws.amazon.com/cn/translate/?trk=cndc-detail) typically chooses clean words for your translation output, but in some situations you may want to prevent words that are commonly considered profane from appearing in the translated output.

You can now apply profanity masking to both ++[real-time translation](https://docs.aws.amazon.com/translate/latest/dg/sync.html)++ and ++[asynchronous batch processing](https://docs.aws.amazon.com/translate/latest/dg/async.html)++ in Amazon Translate. When profanity masking is enabled, the five-character sequence ?$#@$ is used to mask each profane word or phrase, regardless of the number of characters. Amazon Translate detects each profane word or phrase literally, not contextually.

![image.png](https://dev-media.amazoncloud.cn/2bd7d2c10a1e4d1183ce522fff554d00_image.png)

++**[Control formality in machine translated text using Amazon Translate](https://aws.amazon.com/blogs/machine-learning/control-formality-in-machine-translated-text-using-amazon-translate/)**++
This newly released feature in Amazon Translate allows you to customize the level of formality in your translation output. At the time of writing, formality customization is available for six target languages: French, German, Hindi, Italian, Japanese, and Spanish. You can set the formality of your translated output to suit your communication needs, at three levels (the sketch after this list shows how both of these settings are passed through the API):

- **Default** – No control over formality; the neural machine translation operates with no influence
- **Formal** – Useful in the insurance and healthcare industries, where you may prefer a more formal translation
- **Informal** – Useful for customers in gaming and social media who prefer an informal translation
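Both profanity masking and formality control are exposed through the `Settings` parameter of the `TranslateText` API. Here is a minimal boto3 sketch; the input text, region, and language codes are illustrative:

```python
# Minimal sketch: real-time translation with profanity masking and
# formal register requested together. Text and codes are examples only.
import boto3

translate = boto3.client("translate", region_name="us-east-1")

response = translate.translate_text(
    Text="Some customer feedback to translate.",
    SourceLanguageCode="en",
    TargetLanguageCode="fr",
    Settings={
        "Profanity": "MASK",    # mask profane words with ?$#@$
        "Formality": "FORMAL",  # request the formal register
    },
)
print(response["TranslatedText"])
```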
++**[Announcing the launch of the model copy feature for Amazon Comprehend custom models](https://aws.amazon.com/blogs/machine-learning/announcing-the-launch-of-the-model-copy-feature-for-amazon-comprehend-custom-models/)**++
AWS launched the [Amazon Comprehend](https://aws.amazon.com/cn/comprehend/?trk=cndc-detail) custom model copy feature this past month, unlocking the capability of automatically copying your Amazon Comprehend custom models from a source account to designated target accounts in the same Region, without requiring access to the datasets that the model was trained and evaluated on. This new feature is available for both Amazon Comprehend custom classification and custom entity recognition models. It also unlocks benefits such as the following (a sketch of the copy flow appears after the list):

- **Multi-account MLOps strategy** – Train a model one time, deploy it in multiple accounts
- **Faster deployment** – No need to retrain in every account
- **Protect sensitive datasets** – No need to share datasets between accounts or users, which is especially important for industries bound by regulatory requirements around data isolation and sandboxing
- **Easy collaboration** – Partners or vendors can now easily train in Amazon Comprehend Custom and share the models with their customers
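The copy itself is a two-step flow: the source account attaches a resource-based policy to the model version, and the target account imports it. A hedged boto3 sketch follows; the account IDs, ARNs, and model names are placeholders:

```python
# Hedged sketch of the Comprehend model copy flow; all identifiers are placeholders.
import json
import boto3

MODEL_ARN = "arn:aws:comprehend:us-east-1:111122223333:document-classifier/my-classifier/version/v1"

# Step 1, in the source account: allow the target account to import the model.
src = boto3.client("comprehend", region_name="us-east-1")
src.put_resource_policy(
    ResourceArn=MODEL_ARN,
    ResourcePolicy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": "comprehend:ImportModel",
            "Resource": MODEL_ARN,
        }],
    }),
)

# Step 2, in the target account: import (copy) the shared model.
# No training or evaluation data changes hands.
tgt = boto3.client("comprehend", region_name="us-east-1")
tgt.import_model(
    SourceModelArn=MODEL_ARN,
    ModelName="my-classifier-copy",
    VersionName="v1",
)
```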
### **NLP on Amazon SageMaker**

++**[Train 175+ billion parameter NLP models with model parallel additions and Hugging Face on Amazon SageMaker](https://aws.amazon.com/blogs/machine-learning/train-175-billion-parameter-nlp-models-with-model-parallel-additions-and-hugging-face-on-amazon-sagemaker/)**++
In this blog post, the authors briefly summarize the rise of large- and small-scale NLP models, primarily through the abstraction provided by Hugging Face and with the modular backend of [Amazon SageMaker](https://aws.amazon.com/cn/sagemaker/?trk=cndc-detail). The post highlights the launch of four additional features in the SageMaker model parallel library, which unlock pretraining and fine-tuning of 175-billion-parameter NLP models for customers.

Running on the SageMaker training platform, the model parallel library achieves a throughput of 32 samples per second for a 175-billion-parameter model on 120 ml.p4d.24xlarge instances. The authors extrapolate that, if compute power is increased to 240 instances, the full model would take 25 days to train.

![image.png](https://dev-media.amazoncloud.cn/52d944270ca2479f8ba19087e246ad40_image.png)

++[In this repo](https://github.com/aws/amazon-sagemaker-examples/tree/main/training/distributed_training/pytorch/model_parallel)++ you will find sample code for training BERT, GPT-2, and the recently released GPT-J models using model parallelism on Amazon SageMaker.
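To give a feel for how the library is switched on, here is a simplified sketch of a Hugging Face estimator with the model parallel distribution enabled. The parallelism degrees, script name, and role are illustrative, not the 175-billion-parameter configuration from the post:

```python
# Simplified sketch: enabling the SageMaker model parallel library on a
# Hugging Face training job. Degrees and script names are examples only.
from sagemaker.huggingface import HuggingFace

smp_options = {
    "enabled": True,
    "parameters": {
        "pipeline_parallel_degree": 2,  # split layers into pipeline stages
        "tensor_parallel_degree": 4,    # split individual layers across GPUs
        "ddp": True,                    # data parallelism over remaining GPUs
    },
}
mpi_options = {"enabled": True, "processes_per_host": 8}  # 8 GPUs per p4d node

estimator = HuggingFace(
    entry_point="train_gpt.py",  # hypothetical training script
    source_dir="./scripts",
    instance_type="ml.p4d.24xlarge",
    instance_count=2,
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
    transformers_version="4.17",
    pytorch_version="1.10",
    py_version="py38",
    distribution={"smdistributed": {"modelparallel": smp_options}, "mpi": mpi_options},
)
estimator.fit("s3://my-bucket/pretraining-data/")
```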
++**[Improve high-value research with Hugging Face and Amazon SageMaker asynchronous inference endpoints](https://aws.amazon.com/blogs/machine-learning/improve-high-value-research-with-hugging-face-and-amazon-sagemaker-asynchronous-inference-endpoints/)**++
Many of our AWS customers provide research, analytics, and business intelligence as a service, enabling their end customers to stay ahead of markets and competitors, identify growth opportunities, and address issues proactively. These research tasks rely on large NLP models, typically to summarize long articles from sizable corpora, and are often served from dedicated endpoints that are not cost-optimized. Such applications also receive bursts of incoming traffic at different times of the day.

We believe customers would greatly benefit from the ability to scale down to zero and ramp up their inference capability on an as-needed basis. This optimizes research cost without compromising inference quality. This post discusses how Hugging Face along with Amazon SageMaker asynchronous inference can help achieve this.
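Asynchronous endpoints queue requests through Amazon S3 and, with autoscaling, can scale down to zero between bursts. A minimal deployment sketch follows; the bucket paths and the summarization model are illustrative choices, not the post's exact setup:

```python
# Minimal sketch: a Hugging Face model behind a SageMaker asynchronous
# inference endpoint. Buckets, role, and model choice are placeholders.
from sagemaker.huggingface import HuggingFaceModel
from sagemaker.async_inference import AsyncInferenceConfig

huggingface_model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "sshleifer/distilbart-cnn-12-6",  # example summarization model
        "HF_TASK": "summarization",
    },
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
    transformers_version="4.17",
    pytorch_version="1.10",
    py_version="py38",
)

async_config = AsyncInferenceConfig(
    output_path="s3://my-bucket/async-outputs/",  # results are written here
    max_concurrent_invocations_per_instance=4,
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
    async_inference_config=async_config,
)

# Requests are queued rather than answered synchronously.
response = predictor.predict_async(input_path="s3://my-bucket/async-inputs/article.json")
```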
++**[Choose the best data source for your Amazon SageMaker training job](https://aws.amazon.com/blogs/machine-learning/choose-the-best-data-source-for-your-amazon-sagemaker-training-job/)**++
Data ingestion is an integral part of any training pipeline, and SageMaker training jobs support a variety of data storage options and input modes to suit a wide range of training workloads.

This post helps you choose the best data source for your SageMaker ML training use case. We introduce the data source options that SageMaker training jobs support natively. For each data source and input mode, we outline its ease of use, performance characteristics, cost, and limitations. To help you get started quickly, we provide a diagram with a sample decision flow that you can follow based on your key workload characteristics. Lastly, we run several benchmarks for realistic training scenarios to demonstrate the practical implications for overall training cost and performance.

![k5d32ft58wogbigpypjn.png](https://awsdevweb.s3.cn-north-1.amazonaws.com.cn/d306c90c6d504669a5656976574bb744_k5d32ft58wogbigpypjn.png)
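The input mode is set per channel when you launch the job. As one hedged example, the sketch below streams a large S3 dataset with FastFile mode and shards the objects across training instances; the bucket path is a placeholder, and `estimator` is any estimator like the ones sketched earlier:

```python
# Small sketch: choosing an input mode per training channel.
from sagemaker.inputs import TrainingInput

# FastFile mode streams objects from S3 on demand, combining File mode's
# simplicity with Pipe-mode-like startup times for large datasets.
train_input = TrainingInput(
    s3_data="s3://my-bucket/training-data/",  # placeholder prefix
    input_mode="FastFile",
    distribution="ShardedByS3Key",  # shard objects across instances
)

estimator.fit({"train": train_input})  # estimator defined as in earlier sketches
```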
### **Community Content**

++**[Hugging Face Inference Sagemaker Terraform Module](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker-advanced)**++
Our partners at Hugging Face have released a Terraform module that makes it easy to deploy Hugging Face Transformer models like BERT, from either [Amazon S3](https://aws.amazon.com/cn/s3/?trk=cndc-detail) or the ++[Hugging Face Model Hub](https://huggingface.co/models)++, to Amazon SageMaker. They have jam-packed it full of great features, such as deploying private Transformer models from hf.co/models, directly adding an autoscaling configuration for the deployed Amazon SageMaker endpoints, and even deploying ++[asynchronous inference endpoints](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html)++!

Check out the Terraform module ++[here](https://registry.terraform.io/modules/philschmid/sagemaker-huggingface/aws/latest)++.

++**[NLP Data Augmentation on Amazon SageMaker](https://dev.tonlp%20data%20augmentation%20on%20amazon%20sagemaker/)**++
Machine learning models are very data-intensive, which is especially true for Natural Language Processing (NLP) models. At the same time, data scarcity is a common challenge in NLP, especially for low-resource languages. This is where data augmentation can greatly help: it is the process of enriching or synthetically enlarging the dataset that a machine learning model is trained on.

In this blog post, the authors explain how to efficiently perform data augmentation, namely back translation, by leveraging SageMaker Processing Jobs and pre-trained Hugging Face translation models.

![f8um90pm5mvz6a8dz1lf.png](https://awsdevweb.s3.cn-north-1.amazonaws.com.cn/78c7dbe140984def90c4b03bbd63f245_f8um90pm5mvz6a8dz1lf.png)
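Back translation paraphrases a sentence by translating it to a pivot language and back again. A minimal local sketch with Hugging Face pipelines follows; the Helsinki-NLP models are common public choices, not necessarily the ones from the post, which runs this logic inside a SageMaker Processing Job:

```python
# Minimal back-translation sketch with Hugging Face pipelines.
# Model choices are illustrative public translation models.
from transformers import pipeline

en_to_de = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")
de_to_en = pipeline("translation_de_to_en", model="Helsinki-NLP/opus-mt-de-en")

def back_translate(text: str) -> str:
    """Translate to a pivot language and back to produce a paraphrase."""
    german = en_to_de(text)[0]["translation_text"]
    return de_to_en(german)[0]["translation_text"]

original = "The customer asked for a refund because the package arrived damaged."
augmented = back_translate(original)
print(augmented)  # a paraphrased variant usable as additional training data
```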