{"value":"[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service you can use to automatically extract entities, key phrases, language, sentiments, and other insights from documents. For example, you can immediately start detecting entities such as people, places, commercial items, dates, and quantities via the [Amazon Comprehend console](https://docs.aws.amazon.com/comprehend/latest/dg/realtime-console-analysis.html#realtime-analysis-console-entities), [AWS Command Line Interface](https://docs.aws.amazon.com/comprehend/latest/dg/using-api-sync.html#get-started-api-entities-cli), or [Amazon Comprehend APIs](https://docs.aws.amazon.com/comprehend/latest/dg/using-api-sync.html#get-started-api-entities). In addition, if you need to extract entities that aren’t part of the [Amazon Comprehend built-in entity types](https://docs.aws.amazon.com/comprehend/latest/dg/how-entities.html), you can create a custom entity recognition model (also known as custom entity recognizer) to extract terms that are more relevant for your specific use case, like names of items from a catalog of products, domain-specific identifiers, and so on. Creating an accurate entity recognizer on your own using machine learning libraries and frameworks can be a complex and time-consuming process. [Amazon Comprehend](https://aws.amazon.com/cn/comprehend/?trk=cndc-detail) simplifies your model training work significantly. All you need to do is load your dataset of documents and annotations, and use the [Amazon Comprehend](https://aws.amazon.com/cn/comprehend/?trk=cndc-detail) console, AWS CLI, or APIs to create the model.\n\nTo train a custom entity recognizer, you can provide training data to [Amazon Comprehend](https://aws.amazon.com/cn/comprehend/?trk=cndc-detail) as [annotations or entity lists](https://docs.aws.amazon.com/comprehend/latest/dg/prep-training-data-cer.html). In the first case, you provide a collection of documents and a file with annotations that specify the location where entities occur within the set of documents. Alternatively, with entity lists, you provide a list of entities with their corresponding entity type label, and a set of unannotated documents in which you expect your entities to be present. Both approaches can be used to train a successful custom entity recognition model; however, there are situations in which one method may be a better choice. For example, when the meaning of specific entities could be ambiguous and context-dependent, providing annotations is recommended because this might help you create an [Amazon Comprehend](https://aws.amazon.com/cn/comprehend/?trk=cndc-detail) model that is capable of better using context when extracting entities.\n\nAnnotating documents can require quite a lot of effort and time, especially if you consider that both the quality and quantity of annotations have an impact on the resulting entity recognition model. Imprecise or too few annotations can lead to poor results. To help you set up a process for acquiring annotations, we provide tools such as [Amazon SageMaker Ground Truth](https://aws.amazon.com/sagemaker/data-labeling/), which you can use to annotate your documents more quickly and generate an [augmented manifest annotations file](https://docs.aws.amazon.com/comprehend/latest/dg/cer-annotation-manifest.html). 
To train a custom entity recognizer, you can provide training data to Amazon Comprehend as [annotations or entity lists](https://docs.aws.amazon.com/comprehend/latest/dg/prep-training-data-cer.html). In the first case, you provide a collection of documents and a file with annotations that specify the location where entities occur within the set of documents. Alternatively, with entity lists, you provide a list of entities with their corresponding entity type label, and a set of unannotated documents in which you expect your entities to be present. Both approaches can be used to train a successful custom entity recognition model; however, there are situations in which one method may be a better choice. For example, when the meaning of specific entities is ambiguous and context-dependent, providing annotations is recommended because it can help you create an Amazon Comprehend model that makes better use of context when extracting entities.

Annotating documents can require quite a lot of effort and time, especially when you consider that both the quality and quantity of annotations have an impact on the resulting entity recognition model. Imprecise or too few annotations can lead to poor results. To help you set up a process for acquiring annotations, we provide tools such as [Amazon SageMaker Ground Truth](https://aws.amazon.com/sagemaker/data-labeling/), which you can use to annotate your documents more quickly and generate an [augmented manifest annotations file](https://docs.aws.amazon.com/comprehend/latest/dg/cer-annotation-manifest.html). However, even if you use Ground Truth, you still need to make sure that your training dataset is large enough to successfully build your entity recognizer.

Until today, to start training an Amazon Comprehend custom entity recognizer, you had to provide a collection of at least 250 documents and a minimum of 100 annotations per entity type. Today, we’re announcing that, thanks to recent improvements in the models underlying Amazon Comprehend, we’ve reduced the minimum requirements for training a recognizer with plain text CSV annotation files. You can now build a custom entity recognition model with as few as three documents and 25 annotations per entity type. You can find further details about the new service limits in [Guidelines and quotas](https://docs.aws.amazon.com/comprehend/latest/dg/guidelines-and-limits.html#limits-custom-entity-recognition).

To showcase how this reduction can help you get started with the creation of a custom entity recognizer, we ran some tests on a few open-source datasets and collected performance metrics. In this post, we walk you through the benchmarking process and the results we obtained while working on subsampled datasets.

#### **Dataset preparation**

In this post, we explain how we trained an Amazon Comprehend custom entity recognizer using annotated documents. In general, annotations can be provided as a [CSV file](https://docs.aws.amazon.com/comprehend/latest/dg/cer-annotation-csv.html), an [augmented manifest file generated by Ground Truth](https://docs.aws.amazon.com/comprehend/latest/dg/cer-annotation-manifest.html), or a [PDF file](https://docs.aws.amazon.com/comprehend/latest/dg/cer-annotation-pdf.html). Our focus is on CSV plain text annotations, because this is the type of annotation affected by the new minimum requirements. CSV files should have the following structure:

```
File, Line, Begin Offset, End Offset, Type
documents.txt, 0, 0, 13, ENTITY_TYPE_1
documents.txt, 1, 0, 7, ENTITY_TYPE_2
```

The relevant fields are as follows:

- **File** – The name of the file containing the documents
- **Line** – The number of the line containing the entity, starting with line 0
- **Begin Offset** – The character offset in the input text (relative to the beginning of the line) that shows where the entity begins, considering that the first character is at position 0
- **End Offset** – The character offset in the input text that shows where the entity ends
- **Type** – The name of the entity type you want to define

Additionally, when using this approach, you have to provide a collection of training documents as .txt files with one document per line, or one document per file.
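To make the offset convention concrete, the following is a small, illustrative Python check (not part of Amazon Comprehend or of this post’s benchmarks) that reads both files and prints the text span each annotation points to. It assumes the file names used in this post and treats the end offset as exclusive, so the annotated span is ```line[begin:end]```:

```python
# Illustrative sanity check: print the text span each CSV annotation points to,
# so you can verify offsets before training. The end offset is treated as
# exclusive, i.e. the annotated span is line[begin:end].
import csv

with open("documents.txt", encoding="utf-8") as f:
    lines = f.read().splitlines()

with open("annotations.csv", encoding="utf-8") as f:
    for row in csv.DictReader(f, skipinitialspace=True):
        begin = int(row["Begin Offset"])
        end = int(row["End Offset"])
        span = lines[int(row["Line"])][begin:end]
        print(f"{row['Type']:>20}: {span!r}")
```

If a printed span doesn’t match the entity text you expect, the offsets (or the line number) in the annotations file are likely off.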
For our tests, we used the [SNIPS Natural Language Understanding benchmark](https://github.com/sonos/nlu-benchmark), a dataset of crowdsourced utterances distributed among seven user intents (```AddToPlaylist```, ```BookRestaurant```, ```GetWeather```, ```PlayMusic```, ```RateBook```, ```SearchCreativeWork```, ```SearchScreeningEvent```). The dataset was published in 2018 in the context of the paper [Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces](https://arxiv.org/abs/1805.10190) by Coucke et al.

The SNIPS dataset consists of a collection of JSON files that combine annotations and raw text. The following is a snippet from the dataset:

```
{
   "annotations":{
      "named_entity":[
         {
            "start":16,
            "end":36,
            "extent":"within the same area",
            "tag":"spatial_relation"
         },
         {
            "start":40,
            "end":51,
            "extent":"Lawrence St",
            "tag":"poi"
         },
         {
            "start":67,
            "end":70,
            "extent":"one",
            "tag":"party_size_number"
         }
      ],
      "intent":"BookRestaurant"
   },
   "raw_text":"I'd like to eat within the same area of Lawrence St for a party of one"
}
```

Before creating our entity recognizer, we transformed the SNIPS annotations and raw text files into a CSV annotations file and a .txt documents file; a sketch of this transformation is shown after the following excerpts.

The following is an excerpt from our ```annotations.csv``` file:

```
File, Line, Begin Offset, End Offset, Type
documents.txt, 0, 16, 36, spatial_relation
documents.txt, 0, 40, 51, poi
documents.txt, 0, 67, 70, party_size_number
```

The following is an excerpt from our ```documents.txt``` file:

```
I'd like to eat within the same area of Lawrence St for a party of one
Please book me a table for three at an american gastropub
I would like to book a restaurant in Niagara Falls for 8 on June nineteenth
Can you book a table for a party of 6 close to DeKalb Av
```
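The transformation itself is straightforward. The following is an illustrative sketch (not the exact script used for this post’s benchmarks) that assumes the SNIPS records have already been collected into a single JSON list shaped like the snippet above, in a hypothetical file named ```snips_records.json```. Each record contributes one line to ```documents.txt```, and the SNIPS ```start```/```end``` values can be copied directly into the Begin Offset and End Offset columns because both are character offsets within the utterance.

```python
# Illustrative sketch: convert SNIPS-style records (shaped like the snippet
# above) into the documents.txt and annotations.csv files expected by
# Amazon Comprehend. The input file name is hypothetical.
import csv
import json

with open("snips_records.json", encoding="utf-8") as f:
    records = json.load(f)

with open("documents.txt", "w", encoding="utf-8") as docs, \
     open("annotations.csv", "w", encoding="utf-8", newline="") as ann:
    writer = csv.writer(ann)
    writer.writerow(["File", "Line", "Begin Offset", "End Offset", "Type"])
    for line_no, record in enumerate(records):
        docs.write(record["raw_text"] + "\n")
        for entity in record["annotations"]["named_entity"]:
            writer.writerow(
                ["documents.txt", line_no, entity["start"], entity["end"], entity["tag"]]
            )
```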
#### **Sampling configuration and benchmarking process**

For our experiments, we focused on a subset of entity types from the SNIPS dataset:

- **BookRestaurant** – Entity types: ```spatial_relation```, ```poi```, ```party_size_number```, ```restaurant_name```, ```city```, ```timeRange```, ```restaurant_type```, ```served_dish```, ```party_size_description```, ```country```, ```facility```, ```state```, ```sort```, ```cuisine```
- **GetWeather** – Entity types: ```condition_temperature```, ```current_location```, ```geographic_poi```, ```timeRange```, ```state```, ```spatial_relation```, ```condition_description```, ```city```, ```country```
- **PlayMusic** – Entity types: ```track```, ```artist```, ```music_item```, ```service```, ```genre```, ```sort```, ```playlist```, ```album```, ```year```

Moreover, we subsampled each dataset to obtain different configurations in terms of the number of documents sampled for training and the number of annotations per entity type (also known as shots). This was done using a custom script designed to create subsampled datasets in which each entity type appears at least k times, within a minimum of n documents.
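The exact subsampling script isn’t reproduced here, but the following simplified sketch shows one possible way to implement the same idea on top of the ```annotations.csv``` format introduced earlier: greedily select documents until every entity type has at least k annotations, then pad the selection up to a minimum of n documents.

```python
# Simplified, illustrative subsampler (not the exact script used for this
# post's benchmarks): pick documents until every entity type has at least k
# annotations, then pad the selection to a minimum of n documents.
import csv
from collections import Counter, defaultdict

def subsample(annotations_path, k, n):
    # Map each document (line number) to the entity types annotated on it.
    doc_types = defaultdict(list)
    with open(annotations_path, encoding="utf-8") as f:
        for row in csv.DictReader(f, skipinitialspace=True):
            doc_types[int(row["Line"])].append(row["Type"])

    all_types = {t for types in doc_types.values() for t in types}
    counts, selected = Counter(), set()

    # Greedy pass: take documents that still add annotations for under-covered types.
    for line_no, types in sorted(doc_types.items(), key=lambda kv: -len(kv[1])):
        if all(counts[t] >= k for t in all_types):
            break
        if any(counts[t] < k for t in types):
            selected.add(line_no)
            counts.update(types)

    # Pad with further documents until at least n are selected.
    for line_no in doc_types:
        if len(selected) >= n:
            break
        selected.add(line_no)

    return sorted(selected)

# Example: at least 25 annotations per entity type across at least 3 documents.
selected_lines = subsample("annotations.csv", k=25, n=3)
print(len(selected_lines), "documents selected")
```

A script along these lines would then write the selected lines and their annotations out as a new documents and annotations pair for each configuration.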
Each model was trained using a specific subsample of the training datasets; the nine model configurations are illustrated in the following table.

![Table of the nine subsampled training configurations](https://dev-media.amazoncloud.cn/7e14a06abc8c4ffea2ca28add38447f8_image.png)

To measure the accuracy of our models, we collected the evaluation metrics that Amazon Comprehend automatically computes when training an entity recognizer:

- **Precision** – This indicates the fraction of entities detected by the recognizer that are correctly identified and labeled. From a different perspective, precision can be defined as tp / (tp + fp), where tp is the number of true positives (correct identifications) and fp is the number of false positives (incorrect identifications).
- **Recall** – This indicates the fraction of entities present in the documents that are correctly identified and labeled. It’s calculated as tp / (tp + fn), where tp is the number of true positives and fn is the number of false negatives (missed identifications).
- **F1 score** – This is a combination of the precision and recall metrics, which measures the overall accuracy of the model. The F1 score is the harmonic mean of precision and recall, and is calculated as 2 * Precision * Recall / (Precision + Recall).

For comparing the performance of our entity recognizers, we focus on F1 scores.

Considering that, given a dataset and a subsample size (in terms of number of documents and shots), you can generate many different subsamples, we generated 10 subsamples for each of the nine configurations, trained the entity recognition models, collected performance metrics, and averaged them using micro-averaging. This allowed us to get more stable results, especially for few-shot subsamples.

#### **Results**

The following table shows the micro-averaged F1 scores computed on the performance metrics returned by Amazon Comprehend after training each entity recognizer.

![Table of micro-averaged F1 scores for each configuration](https://dev-media.amazoncloud.cn/d9cd7f30e96e40849ad0b16663bcc0a6_image.png)

The following column chart shows the distribution of F1 scores for the nine configurations we trained as described in the previous section.

![Column chart of F1 scores for the nine configurations](https://dev-media.amazoncloud.cn/5e723f49fcce4025b50e124311cb5248_image.png)

We can observe that we were able to successfully train custom entity recognition models even with as few as 25 annotations per entity type. If we focus on the three smallest subsampled datasets (```snips-BookRestaurant-subsample-A```, ```snips-GetWeather-subsample-A```, and ```snips-PlayMusic-subsample-A```), we see that, on average, we were able to achieve an F1 score of 84%, which is a good result considering the limited number of documents and annotations we used. If we want to improve the performance of our model, we can collect additional documents and annotations and train a new model with more data. For example, with the medium-sized subsamples (```snips-BookRestaurant-subsample-B```, ```snips-GetWeather-subsample-B```, and ```snips-PlayMusic-subsample-B```), which contain twice as many documents and annotations, we obtained an average F1 score of 88% (a 5% improvement with respect to the ```subsample-A``` datasets). Finally, the larger subsampled datasets (```snips-BookRestaurant-subsample-C```, ```snips-GetWeather-subsample-C```, and ```snips-PlayMusic-subsample-C```), which contain even more annotated data (approximately four times the number of documents and annotations used for the ```subsample-A``` datasets), provided a further 2% improvement, raising the average F1 score to 90%.

#### **Conclusion**

In this post, we announced a reduction of the minimum requirements for training a custom entity recognizer with Amazon Comprehend, and ran some benchmarks on open-source datasets to show how this reduction can help you get started. Starting today, you can create an entity recognition model with as few as 25 annotations per entity type (instead of 100), and at least three documents (instead of 250). With this announcement, we’re lowering the barrier to entry for users interested in using Amazon Comprehend custom entity recognition technology. You can now start running your experiments with a very small collection of annotated documents, analyze preliminary results, and iterate by including additional annotations and documents if you need a more accurate entity recognition model for your use case.

To learn more and get started with a custom entity recognizer, refer to [Custom entity recognition](https://docs.aws.amazon.com/comprehend/latest/dg/custom-entity-recognition.html).

Special thanks to my colleagues Jyoti Bansal and Jie Ma for their invaluable help with data preparation and benchmarking.

#### **About the author**

![Luca Guida](https://dev-media.amazoncloud.cn/a0628c2f12374351a8913b05db96ac63_image.png)

**Luca Guida** is a Solutions Architect at AWS; he is based in Milan and supports Italian ISVs in their cloud journey. With an academic background in computer science and engineering, he started developing his passion for AI/ML at university. As a member of the natural language processing (NLP) community within AWS, Luca helps customers be successful while adopting AI/ML services.