Speaker: Angela Yao
From: National University of Singapore
Videos of procedural activities are goal-oriented, comprising multiple steps or actions performed in sequence over time. In this talk, I will outline our group's efforts in developing methods for segmenting and anticipating actions in procedural videos. We examine two extreme approaches: one based on unsupervised discovery, the other on fully-supervised learning from densely labelled videos. We then explore the variants in between, including semi- and weakly-supervised settings. I will conclude by introducing our newly collected dataset, Assembly101 - a large-scale, multi-view dataset of static and egocentric recordings of people assembling and disassembling toys.
For more info, please follow this link.