The importance of forgetting in artificial and animal intelligence

{"value":"![image.png](https://dev-media.amazoncloud.cn/71910cd61b4a4321847dbe6d6b55ee64_image.png)\n\nCritical learning periods are vital for birds developing the ability to sing. Deep neural networks exhibit critical learning periods just like biological systems.\n\nJHUNTER/GETTY IMAGES/ISTOCKPHOTO\n\nDeep neural networks (DNNs) have taken the AI research community by storm, approaching human-like performance in niche learning tasks from recognizing speech to finding objects in images. The industry has taken notice, with adoption growing by 37% in the past four years, according to Gartner, a leading research and advisory firm.\n\nBut how does a DNN learn? What “information” does it contain? How is such information represented, and where is it stored? How does information content in the DNN change during learning?\n\nIn 2016, my collaborators and I (then at UCLA) set out to answer some of these questions. To frame the questions mathematically, we had to form a viable definition of “information” in deep networks.\n\nTraditional information theory is built around Claude Shannon’s idea to quantify how many bits are needed to send a message. But as Shannon himself noted, this is a measure of information for communication. When applied to measure how much information a DNN has in its weights about the task it is trying to solve, it has the unwelcome tendency to give degenerate nonsensical values.\n\nThis paradox led to the introduction a more general notion of the information Lagrangian — which defines information as the trade-off between how much noise could be added to the weights between layers and the resulting accuracy of its input-output behavior. Intuitively, even if a network is very large, this suggests that if we can replace most computations with random noise and still get the same output, then the DNN does not actually contain that much information. Pleasingly, for some particular noise models, we can conduct specializations to recover Shannon’s original definition.\n\nThe next step is related to the computing of information for DNNs with millions of parameters.\n\n![image.png](https://dev-media.amazoncloud.cn/ae7e9b7afdfd4988b5f3466571d36094_image.png)\n\nIMAGE FROM [CRITICAL LEARNING PERIODS IN DEEP NETWORKS](https://arxiv.org/pdf/1711.08856.pdf), ICLR 2019\n\nAs learning progresses, one would expect the amount of information stored in the weights of the network to increase monotonically: the more you train, the more you learn. However, the information in the weights (the blue line in the figure at right) follows a completely different path: First, the information contained in the weights increases sharply, as if the network was trying to acquire information about the data set. Following this, the information in the weights drops — almost as though the network was “forgetting”, or shedding information about the training data. Amazingly, such forgetting is occurring while performance in the learning task, shown in the green dashed curve, continues to increase!\n\nWhen we shared these findings with biologists, they were not surprised. In biological systems, forgetting is an important aspect of learning. Animal brains have a bounded capacity. There is an ongoing need to forget useless information and consolidate useful information. However, DNNs are not biological in nature. 
When we shared these findings with biologists, they were not surprised. In biological systems, forgetting is an important aspect of learning: animal brains have a bounded capacity, so there is an ongoing need to forget useless information and consolidate useful information. But DNNs are not biological, and there is no apparent reason why memorizing first and then forgetting should be beneficial.

Our research uncovered another, connected discovery, one that surprised our biologist collaborator as well.

Biological networks have a further fundamental property: they lose their plasticity over time. If people do not learn a skill (say, seeing or speaking) during a critical period of development, their ability to learn that skill later is permanently impaired. For example, failure to correct visual defects early enough in childhood can result in amblyopia, a lifelong impairment of vision in one eye, even if the defect is later corrected. The importance of the critical learning period is especially pronounced elsewhere in the animal kingdom; for instance, it is vital for birds developing the ability to sing.

The inability to learn a new skill later in life is considered a side effect of the loss of neuronal plasticity, which is driven by several biochemical factors. Artificial neural networks, on the other hand, suffer no such loss of plasticity: they do not age. Why, then, would they have a critical learning period?

We set out to repeat a classical experiment of the neuroscience pioneers Hubel and Wiesel, who in the 1950s and ’60s studied the effects of temporary visual deficits in newborn cats and correlated them with permanent visual impairment later in life.

We “blindfolded” the DNNs by blurring the training images at the beginning of training. Then we let the networks train on clear images. We found that the deficit introduced in the initial period resulted in a permanent loss of classification accuracy, no matter how much additional training the network performed.

![image.png](https://dev-media.amazoncloud.cn/1cbcbe3ab869447aab816baf04fcc91f_image.png)

The final accuracy of a DNN, plotted as a function of the epoch at which the “visual deficit” (blur) was removed, is shown in blue (left), against the normal training accuracy in dashed lines (the same as in the previous plot). It bears a puzzling similarity to the visual acuity measured by biologists in cats, plotted as a function of when the visual defect was removed (also in blue); the green curve shows the progression of visual acuity in normal cats. On the right, the same phenomenon is sliced another way: rather than being removed at a certain time, the defect is applied for a window starting at a particular instant (horizontal axis), measured in days for cats and in training epochs for DNNs. The sensitivity of the system (cat or DNN), measured as the percentage decrease in performance relative to normal training, shows a remarkable similarity to the information curve in the previous image: there is strong sensitivity during the initial critical period (the “information acquisition” phase), past which visual deficits have no long-term effect.

IMAGE FROM [CRITICAL LEARNING PERIODS IN DEEP NETWORKS](https://arxiv.org/pdf/1711.08856.pdf), ICLR 2019

In other words, DNNs exhibit critical learning periods just like biological systems. If we interfered with the data during the “information acquisition” phase, the network entered a state from which it could not recover. Altering the data after this critical period had no effect.
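As a concrete picture of the “blindfolding” schedule described above, the sketch below shows one way such a deficit could be implemented with torchvision transforms. The dataset (CIFAR-10), blur strength, and deficit length are illustrative placeholders, not the settings used in the paper, and `model`, `optimizer`, and `loss_fn` are assumed to exist.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


def make_loader(blurred: bool, batch_size: int = 128) -> DataLoader:
    """Return a CIFAR-10 training loader, with or without the blur 'deficit'."""
    tfs = []
    if blurred:
        # The simulated visual deficit: heavy Gaussian blur on every image.
        tfs.append(transforms.GaussianBlur(kernel_size=9, sigma=4.0))
    tfs.append(transforms.ToTensor())
    dataset = datasets.CIFAR10(
        "data", train=True, download=True, transform=transforms.Compose(tfs)
    )
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)


def train_with_deficit(model, optimizer, loss_fn, deficit_epochs=40, total_epochs=160):
    """Train on blurred images for the first `deficit_epochs`, then switch to
    clear images for the rest of training (an illustrative schedule)."""
    for epoch in range(total_epochs):
        loader = make_loader(blurred=epoch < deficit_epochs)
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
```

Varying `deficit_epochs` and plotting the final accuracy against the epoch at which the blur is removed yields curves of the kind shown in the figure above.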
We then performed a process akin to “artificial neural recording” and measured the flow of information among different neurons. We found that during the critical period, the way information flows between layers is fluid; after the critical period, those pathways become fixed. Much as biological networks lose neural plasticity, then, a DNN exhibits a form of “information plasticity” that is lost during learning. But rather than being a consequence of aging or some complex biochemical phenomenon, this “forgetting” appears to be an essential part of learning itself. This is true for both artificial and biological systems.

Over the subsequent years, we have tried to understand and analyze these learning dynamics that are common to artificial and biological systems.

![image.png](https://dev-media.amazoncloud.cn/1bad0e09f8214a649a6f017bee33cb3b_image.png)

Task2Vec is a method for transforming learning tasks into vectors so that they can be compared, clustered, and selected based on neighborhood criteria. This plot is a 2-D reduction of the space of learning tasks. It shows, for instance, that the tasks of learning different colors cluster together, as do the tasks of learning plants and animals. Some concepts that are visually dissimilar (such as denim and yoga pants) are close to each other, but so are “ripped” and “denim”.

IMAGE FROM [TASK2VEC: TASK EMBEDDING FOR META-LEARNING](https://www.amazon.science/publications/task2vec-task-embedding-for-meta-learning), ICCV 2019

We uncovered a rich universe of findings, some of which are already making their way into our products. For instance, it is common in AI to train a DNN model to solve one task, say, finding cats and dogs in images, and then fine-tune it for a different task, say, recognizing objects for autonomous-driving applications. But how do we know which model to start from to solve a customer’s problem? When are two learning tasks “close”? How do we represent learning tasks mathematically, and how do we compute their distance?

To give just one practical application of our research, Task2Vec is a method for representing a learning task with a single vector. This vector is a function of the information in the weights discussed earlier, and the amount of information needed to fine-tune one model into another defines an (asymmetric) distance between the tasks the two models represent. We can now measure how difficult it would be to fine-tune a given model for a given task. This capability is part of our [Amazon Rekognition Custom Labels](https://aws.amazon.com/rekognition/custom-labels-features/) service, in which customers provide a few sample images of objects, and the system learns a model to detect and classify those objects in never-before-seen images.
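To give a rough sense of how such a task vector can be computed, the sketch below builds a simplified Task2Vec-style embedding from the diagonal Fisher information of a probe network and compares two tasks with a cosine-style distance. It is a simplified illustration of the idea, not the implementation behind the paper or the Rekognition service; `probe_model` and the data loaders are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F


def fisher_embedding(probe_model, data_loader, device="cpu"):
    """A simplified Task2Vec-style embedding: the diagonal of the (empirical)
    Fisher information of a probe network's parameters, estimated on one
    task's data. The published method adds a robust variational estimate and
    normalization; this sketch keeps only the core idea.
    """
    probe_model = probe_model.to(device).eval()
    fisher = [torch.zeros_like(p) for p in probe_model.parameters()]
    batches = 0
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        probe_model.zero_grad()
        F.cross_entropy(probe_model(x), y).backward()
        for f, p in zip(fisher, probe_model.parameters()):
            if p.grad is not None:
                f += p.grad.detach() ** 2  # squared gradients approximate the diagonal Fisher
        batches += 1
    return torch.cat([f.flatten() for f in fisher]) / max(batches, 1)


def task_distance(emb_a, emb_b, eps=1e-8):
    """Cosine-style distance between two task embeddings (symmetric here; the
    asymmetric distance mentioned in the text requires additional terms)."""
    a, b = emb_a / (emb_a.sum() + eps), emb_b / (emb_b.sum() + eps)
    return 1.0 - F.cosine_similarity(a, b, dim=0).item()


# Hypothetical usage: embed two tasks with the same probe network, then
# compare them to decide which pretrained model is closer to the new task.
# emb_birds = fisher_embedding(probe_model, loader_birds)
# emb_plants = fisher_embedding(probe_model, loader_plants)
# print(task_distance(emb_birds, emb_plants))
```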
AI is truly in its infancy, and the depth of the intellectual questions the field raises is invigorating. For now, there is consolation for those of us who are aging and beginning to forget things: we can take comfort in the knowledge that we are still learning.

ABOUT THE AUTHOR

#### **[Alessandro Achille](https://www.amazon.science/author/alessandro-achille)**

Alessandro Achille is an applied scientist with Amazon Web Services.