3 questions with Seyed Sajjadi: Using a video analytics platform to automate learning

{"value":"![image.png](https://dev-media.amazoncloud.cn/a4ce8352a5024075a2d1c1969cabc897_image.png)\n\nnFlux team members (left to right) Aditya Mukewar, Chenlei Zhang, Karthik Ramkumar, Adam Phillips, Falisha Kanji, Seyed Sajjadi*, Anton Safarevich*, Natan Vargas, Pulin Agrawal, Collin Miller*, Danny Pena* (* indicates co-founder).\n\nCREDIT: NFLUX\n\nEditor’s Note: This interview is the latest installment ++[within a series Amazon Science is publishing](https://www.amazon.science/tag/alexa-fund)++ related to the science behind products and services from companies in which Amazon has invested. In 2019, the Alexa Fund first invested in nflux.ai, and then in 2020 participated in the company’s seed round.\n\nIn 2018, ++[Seyed Sajjadi](https://www.linkedin.com/in/seyedsajjadi/)++ was pursuing a master’s degree in computer science at the University of Southern California (USC) when he decided to drop out and found nFlux.ai. While pursuing his master’s degree, he also was working as a project manager at the Systems Engineering Research Laboratory (SERL) research laboratory at California State University in Northridge, Calif.\n\n![image.png](https://dev-media.amazoncloud.cn/7a97f3207e06497d82db620caecaf253_image.png)\n\nAt the University of Southern California, Seyed Sajjadi focused on the development of Sigma, a cognitive architecture and system. One outcome of the research was this paper, which Sajjadi coauthored with computer science professor Paul Rosenbloom and other USC collaborators.\n\nAt USC, Sajjadi was working as a member of the ++[Cognitive/Virtual Human Architecture lab](https://ict.usc.edu/wp-content/uploads/overviews/CognitiveVirtual%20Human%20Architecture_Overview.pdf)++ under computer science professor ++[Paul Rosenbloom](https://viterbi.usc.edu/directory/faculty/Rosenbloom/Paul)++. There, he focused on the development of Sigma, a cognitive architecture and system that strives to combine what has been learned from four decades of independent work on symbolic cognitive architectures, probabilistic graphic models, and more recently neural models. One outcome of his research there was a paper, “++[Controlling Synthetic Characters in Simulation: A Case for Cognitive Architectures and Sigma](https://arxiv.org/ftp/arxiv/papers/2101/2101.02231.pdf)++”, which Sajjadi coauthored with Rosenbloom and other USC collaborators. The paper was accepted to the 2018 Interservice/Industry Training Simulation and Education Conference (I/ITSEC).\n\nAt SERL, Sajjadi led an interdisciplinary team of more than 90 engineers and human factors researchers focused on building the next generation of robotic search-and-rescue systems with artificial intelligence. It was here that Sajjadi and colleagues began thinking about forming nflux.ai, inspired, he says, by the fictional character J.A.R.V.I.S. (Just A Rather Very Intelligent System) from the Marvel Cinematic Universe film franchise, and a vision for how artificial intelligence systems can augment humans in positive ways.\n\nAmazon Science asked Sajjadi three questions about the challenges of developing cognitive architectures, nFlux’s focus on imitation learning within the manufacturing sector, and how the company’s technology could eventually be relevant to Alexa customers at home.\n\n#### **Q. 
#### **Q. What is a video analytics platform, and how does it enable what you call procedure monitoring?**

nFlux is the first intelligent video analytics platform that automates the process of learning and generating contextual insights from the unstructured data inside video footage. One of our goals is to pass a Turing test for video comprehension. Imagine there is a woman sitting at a desk looking at a video on her computer. We want to develop a video comprehension system that can answer any question about that video with the same level of comprehension as the woman.

Our first customer was NASA, and right now we’re working to build a system similar to HAL 9000, the fictional AI character in the Space Odyssey series. HAL 9000 is a general AI system that can mimic the way humans think, behave, and take actions. Ironically, Space Odyssey is centered around a deep-space mission. Today, if astronauts have a question, they call Houston, and someone at Johnson Space Center answers their questions. But as we embark on deep-space missions, such as to Mars, where there is a 40-minute delay in communication, that method of communication isn’t practical. So we want to provide an intelligent system on the spacecraft that can understand what the astronauts are doing and assist them by augmenting what they’re capable of doing on their own.

![image.png](https://dev-media.amazoncloud.cn/ff413bef472d48a49f1ca55a1f128b84_image.png)

Seyed Sajjadi

That’s what we refer to as procedure monitoring, which is the core of the innovation we’re developing. Our approach is imitation learning, or learning by demonstration. If an astronaut is performing a procedure, our objective is to capture that procedure via video with a minimum number of examples, say 10 or 15, which in machine learning is a tiny sample size. But from that small sample size we develop a computational model so that if another astronaut has to perform that same procedure in the future, we can track it. If the astronaut deviates from the procedure while performing it, perhaps by missing a screw, our system can recognize that in real time and alert the astronaut.

That’s really the core of what we consider procedure monitoring, or the astronaut-assistant technology we’ve been developing. One of the keys to our video analytics platform is its ability to learn from a minimum number of videos. That’s significant.

For those algorithms to infer from such a small set of data, they extract basic signals from our base models. This is possible because the agent can be augmented with prior semantic knowledge of key activities, such as tethering or drilling components, and can recognize the key elements of each step, such as objects and tools, from synthetically generated data. This technique is inspired by the way humans ingest information as they watch a new procedure they have never seen before. We are capable of recognizing the key activity being performed even if we have not previously seen the objects or tools being used, and we can deduce the steps required to successfully complete a procedure.
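To make the procedure-monitoring idea concrete, here is a minimal sketch of the real-time deviation check described above. It is not nFlux’s implementation: the class, the step labels, and the toy procedure are hypothetical, and in a real system the step labels would come from activity-recognition models learned from the demonstration videos rather than being passed in as strings.

```python
from dataclasses import dataclass, field


@dataclass
class ProcedureMonitor:
    """Illustrative sketch only: compares steps recognized from video
    against a reference procedure and flags deviations in real time."""
    reference_steps: list               # ordered step labels learned from demonstration videos
    completed: list = field(default_factory=list)

    def observe(self, recognized_step):
        """Feed one recognized step label; return an alert string on deviation, else None."""
        expected_index = len(self.completed)
        if expected_index >= len(self.reference_steps):
            return f"Procedure already complete; unexpected step '{recognized_step}'"
        expected = self.reference_steps[expected_index]
        if recognized_step == expected:
            self.completed.append(recognized_step)
            return None  # on track
        if recognized_step in self.reference_steps[expected_index + 1:]:
            # A later step arrived first, so something earlier was skipped.
            skipped = self.reference_steps[expected_index:self.reference_steps.index(recognized_step)]
            return f"Deviation: skipped {skipped} before '{recognized_step}'"
        return f"Deviation: unexpected step '{recognized_step}' (expected '{expected}')"


# Toy assembly procedure; the labels below stand in for activity-recognition output.
monitor = ProcedureMonitor(["pick up panel", "insert screw A", "insert screw B", "attach cover"])
for step in ["pick up panel", "insert screw B"]:   # "insert screw A" was missed
    alert = monitor.observe(step)
    if alert:
        print(alert)   # -> Deviation: skipped ['insert screw A'] before 'insert screw B'
```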
#### **Q. How is nFlux technology being applied within the manufacturing sector?**

Despite the perception that robots have taken over the manufacturing floor, seventy-two percent of manufacturing work is still done by humans. Six million people here in the United States go to work every day to perform a certain set of procedures. As that person on the manufacturing floor is doing her job, we can capture any deviations in real time.

Our system can be a virtual teacher or instructor helping train a new employee, or an existing employee who’s learning a new procedure. This is extremely valuable to manufacturers because it reduces production cycles. If they can train employees faster at their manufacturing facilities, that translates into millions of dollars in manufacturing time. It also impacts the quality of their products. The better a manufacturer’s employees are trained, and the more standardized their procedures, the lower their defect rates. Those are two critical elements for any manufacturer.

Our technology also helps in capturing what we refer to as tribal knowledge. In many complicated manufacturing environments, training can’t be provided on a piece of paper; instead, you need a computational model, derived from video, of how the procedure is conducted properly. That computational model can help train new employees as they come on board, monitor their work to ensure they’re following procedures properly, and act as an intelligent assistant for your manufacturing workforce. nFlux isn’t designed to replace the workforce; it’s there to augment the work they’re doing. Ultimately, this reduces the amount of rework required to output high-quality products from that manufacturing plant.

#### **Q. The Alexa Fund is an investor. So how could your computational model be relevant to Alexa customers?**

![image.png](https://dev-media.amazoncloud.cn/edd807da2e8d441d925e7f5ac119d574_image.png)

Imagine, says Sajjadi, that as you were cooking, the Echo Show 10 was watching you and could alert you if you missed an ingredient. That, he says, would be an example of taking procedure monitoring from the shop floor to the kitchen.

The first Echo Show, an Echo device with a screen, was introduced in 2017, and there have been subsequent generations since, including the new Echo Show 10, which became available earlier this year. These devices support multimodal experiences, giving Alexa greater context and visual understanding. These multimodal Echo devices tend to be in the kitchen, and one of their most popular uses is cooking and following cooking instructions in real time. Imagine if, as you were cooking, the Echo Show 10 was watching you and alerted you if you missed adding an ingredient. That would be an example of taking procedure monitoring from the shop floor to the kitchen.

Earlier this year, we were awarded another NASA contract to support the health of astronauts. This work is relevant to other Alexa healthcare-related scenarios. If you’re an elderly person living at home or in an assisted living facility, what if an nFlux application noticed that you didn’t take your pills at 9 a.m. as you are supposed to, and alerted you? Or what if you’re under your doctor’s orders to walk for five minutes every two hours? We could recognize that you haven’t been mobile in the past couple of hours and remind you to walk. These are the kinds of consumer-facing scenarios that complement our commercial approach to procedure monitoring, and they could be applied in the home.
ABOUT THE AUTHOR

#### **Staff writer**