
AI systems have become quite competent at recognizing objects (and actions) in videos from diverse sources. But they aren't perfect, in part because they're mostly trained on corpora of clips carrying a single label each. Frame-by-frame labeling isn't a particularly efficient fix, because it would require annotators to apply labels to every frame of every video, and because "teaching" a model to recognize an action it hadn't seen before would mean labeling new clips from scratch.

That’s why scientists at Google propose Temporal Cycle-Consistency Learning (TCC), a self-supervised AI training technique that taps “correspondences” between examples of similar sequential processes (like weight-lifting repetitions or baseball pitches) to learn representations well-suited for temporal video understanding. The codebase is available in open source on GitHub.
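To make the idea of cycle-consistency concrete, here is a minimal NumPy sketch of one such loss: each frame embedding from one video finds a soft nearest neighbor in a second video, that neighbor is mapped back, and the loss penalizes how far the cycle lands from the starting frame. This is an illustrative assumption-laden sketch, not Google's released code (the official TCC codebase on GitHub is a TensorFlow implementation); the function and variable names here are made up for the example.

```python
# Illustrative sketch of a temporal cycle-consistency (cycle-back regression)
# loss. Not the official TCC implementation; names and framing are assumptions.
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def cycle_consistency_loss(emb_u, emb_v):
    """Cycle-back regression loss between two embedded videos.

    emb_u: (N, D) per-frame embeddings of video U
    emb_v: (M, D) per-frame embeddings of video V
    For each frame i of U, take a soft nearest neighbor in V, map it back
    to U, and penalize how far the cycle lands from frame i.
    """
    n = emb_u.shape[0]
    losses = []
    for i in range(n):
        # Soft nearest neighbor of u_i among the frames of V.
        dist_uv = np.sum((emb_v - emb_u[i]) ** 2, axis=-1)   # (M,)
        alpha = softmax(-dist_uv)
        v_tilde = alpha @ emb_v                               # (D,)

        # Cycle back: distribution over frames of U for v_tilde.
        dist_vu = np.sum((emb_u - v_tilde) ** 2, axis=-1)     # (N,)
        beta = softmax(-dist_vu)
        mu = beta @ np.arange(n)                              # expected landing index

        # Penalize cycles that do not return to the starting frame.
        losses.append((mu - i) ** 2)
    return float(np.mean(losses))
```

Minimizing a loss like this over learned per-frame embeddings encourages frames at the same stage of an action (say, the release point of two different baseball pitches) to map to each other, which is how correspondences emerge without per-frame labels.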

Read more here:

https://venturebeat.com/2019/08/08/googles-ai-learns-how-actions-in-videos-are-connected/
