AI systems have become quite competent at recognizing objects (and actions) in videos from diverse sources. But they're far from perfect, in part because they're mostly trained on corpora in which each clip carries a single label. Frame-by-frame supervision isn't a particularly efficient alternative, because it would require annotators to label every frame of every video, and because "teaching" a model to recognize an action it hadn't seen before would mean labeling new clips from scratch.
That's why scientists at Google propose Temporal Cycle-Consistency Learning (TCC), a self-supervised AI training technique that taps "correspondences" between examples of similar sequential processes (like weight-lifting repetitions or baseball pitches) to learn representations well-suited to temporal video understanding. The code is available as open source on GitHub.
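To make the cycle-consistency idea concrete, here is a minimal NumPy sketch of the hard nearest-neighbor round trip between two frame-embedding sequences: a frame in video U is mapped to its nearest frame in video V, then back to U, and the correspondence is cycle-consistent when the round trip returns to the starting frame. This is an illustrative simplification (the function name and toy data are made up here; TCC's actual training loss uses a soft, differentiable form of this matching), not the released implementation.

```python
import numpy as np

def cycle_consistent_frames(emb_u, emb_v):
    """Check which frames of sequence U survive a U -> V -> U round trip.

    emb_u: (n, d) frame embeddings of video U
    emb_v: (m, d) frame embeddings of video V
    Returns a boolean mask over U's frames.
    """
    # Pairwise squared Euclidean distances between all frame embeddings.
    dists = ((emb_u[:, None, :] - emb_v[None, :, :]) ** 2).sum(axis=-1)
    to_v = dists.argmin(axis=1)       # nearest V frame for each U frame
    back_to_u = dists.argmin(axis=0)  # nearest U frame for each V frame
    round_trip = back_to_u[to_v]      # compose the two hops: U -> V -> U
    return round_trip == np.arange(len(emb_u))

# Two toy "videos" tracing the same 1-D action at slightly different speeds.
u = np.array([[0.0], [1.0], [2.0], [3.0]])
v = np.array([[0.1], [0.9], [2.1], [2.9], [3.1]])
print(cycle_consistent_frames(u, v))  # every frame survives the round trip
```

Because both clips follow the same underlying process, every frame of U finds a matching phase in V and maps back to itself; a self-supervised objective can exploit this by rewarding embeddings that maximize such consistent cycles, with no per-frame labels required.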
Read more here: