Amazon at WACV: Computer vision is more than labeling pixels

{"value":"Gérard Medioni, an Amazon vice president and distinguished scientist, is the general chair at this year’s IEEE Winter Conference on Applications of Computer Vision (++[WACV](https://www.amazon.science/conferences-and-events/amazon-wacv-2021)++), and in that capacity, he led the recruitment of the conference’s three keynote speakers.\n\nOn Wednesday, Lihi Zelnik-Manor, an associate professor of electrical engineering at Israel’s Technion, described her experiences working on computer vision and artificial-intelligence projects for Alibaba, the leading Chinese e-commerce company. Yesterday, Hao Li, a cofounder and CEO at Pinscreen, addressed the challenges of creating virtual online avatars that move and sound like real people. And today, Raquel Urtasun, chief scientist at Uber’s Advanced Technologies Group and a professor of computer science at the University of Toronto, will discuss the science of self-driving cars.\n\n![image.png](https://dev-media.amazoncloud.cn/748c7c0eef5a454ab5d482b6cdb5ae13_image.png)\n\nThe keynote speakers at this year's WACV. From left to right: Lihi Zelnik-Manor, an associate professor of electrical engineering at Israel’s Technion and head of Alibaba's research lab in Israel; Hao Li, a cofounder and CEO at Pinscreen; and Raquel Urtasun, chief scientist at Uber's Advanced Technologies Group and a professor of computer science at the University of Toronto.\n\nCREDIT: PHOTOS COURTESY OF THE SPEAKERS\n\n“It's really an international mix of people,” Medioni says. “Lihi is representing both Israel and China for Alibaba. Raquel is originally from Spain, educated in Switzerland, and leading the effort for Uber in Canada. Hao Li is from Germany and is working here in the U.S.”\n\nThe speakers’ topics demonstrate how expansive the applications of computer vision have become; they’re no longer just a matter of labeling pixels in an image.\n\n“You have to take computer vision as the interpretation of the scene, not necessarily static, but also dynamic,” Medioni explains. “It’s understanding your environment through visual input. That involves anticipating and understanding actions as well. Activity understanding is a subfield of computer vision: ‘What is this person doing?’”\n\n\n#### **The ideal sandbox**\n\n\nIn that context, “addressing self-driving cars and [Just Walk Out](https://aws.amazon.com/cn/just-walk-out/?trk=cndc-detail) shopping are ideal sandboxes for computer vision,” Medioni says. “You need to solve every sub-problem that you can think of in computer vision. For autonomous driving, you need to understand the scene, which means you need to detect signs, you need to detect people, you need to detect cars, and you need to make inferences about behavior. And in addition to that, you have to provide the motor signals to actuate the car.\n\n“Another interesting part — which is true for both Amazon Go and for self-driving cars — is that the basic case is fairly straightforward. But there is a very, very long tail of complicated cases. And because it's such a long tail, you cannot think in advance of all the cases and solve them in the lab. You have to actually gather tens of thousands of hours of driving experience to address these cases.\n\n“Another part of the complexity is the combination of human drivers and self-driving cars. When you and I get to a stop sign at the same time, I look at you; you look at me. We have established contact. And now I can start going, and I know what's going to happen. 
This whole interplay that occurs nonverbally doesn’t exist if you have a self-driving car and a human driver. There is no eye contact. So this is a very interesting aspect of it, too.”

#### **Realistic avatars**

In his keynote, Hao Li discussed the challenge that his company, Pinscreen, is addressing: the synthesis of realistic online avatars. Like the problem of self-driving cars, it’s a computer vision problem whose solution depends on accurately modeling and reproducing human behavior.

“When you and I talk, you’re not just the head,” Medioni explains. “Your hands are moving; your arms are moving; your shoulders are moving. If you have ever seen an avatar that just speaks with the face, and the arms are not moving, it is very disturbing. It looks fake.

“The complexity comes from the fact that we humans are very good at detecting any type of defect. Anything that looks slightly off is going to create this uncanny-valley effect. When a designer is looking to generate an expression, for example, that is different from just expression classification. You can say this person is smiling, or this person is frowning; well, that’s just a label that you put on it. What Li’s doing is more complicated. Creating an expression involves tens of muscles in the face, and some of those muscle activations can be very, very subtle. Then you have parts that you do not necessarily see. Like when you open your mouth, well, you see part of the tongue and teeth. How do you do that? Li is one of the leaders in producing this type of richness of expression in the face.”
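The “tens of muscles” Medioni describes are commonly modeled in graphics with blendshapes: the face is a neutral mesh plus a weighted sum of expression offsets, and a subtle expression is just a set of small weights. The sketch below illustrates that standard technique generically (it is not Pinscreen’s actual pipeline), with hypothetical mesh and basis sizes.

```python
# Toy blendshape model: a standard graphics technique for the kind of
# subtle, many-muscle facial activation Medioni describes. A generic
# illustration, not Pinscreen's pipeline.
import numpy as np

def blend(neutral: np.ndarray, shapes: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """
    neutral: (V, 3) vertex positions of the resting face.
    shapes:  (K, V, 3) one target mesh per expression basis
             (e.g., brow raise, lip-corner pull).
    weights: (K,) activation strengths, typically in [0, 1].
    Returns the deformed (V, 3) mesh: neutral + weighted offsets.
    """
    offsets = shapes - neutral                      # (K, V, 3) deltas from rest
    return neutral + np.tensordot(weights, offsets, axes=1)

# A faint smile mixes several bases at low, "subtle" activations.
V, K = 5000, 4                                      # hypothetical sizes
rng = np.random.default_rng(0)
neutral = rng.standard_normal((V, 3))
shapes = neutral + 0.01 * rng.standard_normal((K, V, 3))
mesh = blend(neutral, shapes, np.array([0.15, 0.05, 0.0, 0.3]))
```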
“It still continues to amaze me what we are able to accomplish with computer vision today,” Medioni adds. “It’s truly great to be in this field and to see the progress on a weekly basis.”

ABOUT THE AUTHOR

#### **[Larry Hardesty](https://www.amazon.science/author/larry-hardesty)**

Larry Hardesty is the editor of the Amazon Science blog. Previously, he was a senior editor at MIT Technology Review and the computer science writer at the MIT News Office.