NocabSoftware.com

Proposal: Semantic Aware Video Filter



1. Main Project Idea or Theme

Semantic Aware Video Filter / SEE – Significant Embedding Explanation

CLIP allows us to encode a picture into a dense, information-rich vector. With a reasonably stable video feed, such as a live feed from a stationary camera, we can compute an average CLIP-space embedding of the scene. Then, over time, we can track how this embedding changes. A change in CLIP space represents a change in semantic meaning: perhaps a bird has flown into the scene, or perhaps the weather suddenly changes.
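As a rough illustration of this core loop, the sketch below keeps an exponential moving average of the scene's CLIP embedding and flags any frame whose cosine similarity to that average drops below a threshold. The model name is the checkpoint I am already prototyping with; the smoothing factor and the 0.85 threshold are placeholder values that would need tuning.

    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def embed(frame):
        """Encode one frame (a PIL image) into a unit-length CLIP image embedding."""
        inputs = processor(images=frame, return_tensors="pt")
        with torch.no_grad():
            emb = model.get_image_features(**inputs)
        return emb / emb.norm(dim=-1, keepdim=True)

    class SceneDriftDetector:
        """Tracks a running average of the scene's embedding and flags semantic drift."""

        def __init__(self, alpha=0.05, threshold=0.85):
            self.alpha = alpha            # smoothing factor (placeholder, needs tuning)
            self.threshold = threshold    # similarity below this counts as a semantic shift
            self.scene_avg = None

        def update(self, frame):
            """Return True when the frame has drifted away from the long-term average."""
            emb = embed(frame)
            if self.scene_avg is None:
                self.scene_avg = emb
                return False
            similarity = torch.cosine_similarity(emb, self.scene_avg).item()
            # Exponential moving average of the scene embedding.
            self.scene_avg = (1 - self.alpha) * self.scene_avg + self.alpha * emb
            return similarity < self.threshold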


Further, if we have a few examples of semantically interesting video frames, we can compare the current frame's CLIP embedding to those important prior clusters. This lets us detect what exactly is happening in the video and differentiate whether a bird, a squirrel, or a weather change is causing the semantic shift.
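A minimal sketch of that comparison, reusing the embed() helper above: a handful of human-chosen example frames per category are averaged into CLIP-space centroids, and each new frame is labeled with its nearest centroid. The category names and example frames are placeholders.

    import torch

    def build_clusters(examples):
        """examples: dict mapping a label (e.g. "bird", "squirrel", "weather change")
        to a list of example PIL frames. Returns one centroid embedding per label."""
        return {label: torch.cat([embed(f) for f in frames]).mean(dim=0, keepdim=True)
                for label, frames in examples.items()}

    def classify(frame, clusters):
        """Label the frame with the closest cluster centroid in CLIP space."""
        emb = embed(frame)
        scores = {label: torch.cosine_similarity(emb, centroid).item()
                  for label, centroid in clusters.items()}
        return max(scores, key=scores.get)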


Finally, as a reach goal, if the user provides a text description of what they are interested in, that text can also be encoded into CLIP space, and the Semantic Aware Video Filter can detect whether the current image frame is similar to the provided description.
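One possible shape for that reach goal, again reusing the model, processor, and embed() helper above. The 0.25 threshold is only illustrative; raw CLIP image-text similarities sit in a narrow range and would need calibration per use case.

    import torch

    def matches_query(frame, query, threshold=0.25):
        """Return (similarity, matched) for a frame against a free-text description."""
        text_inputs = processor(text=[query], return_tensors="pt", padding=True)
        with torch.no_grad():
            text_emb = model.get_text_features(**text_inputs)
        text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
        similarity = torch.cosine_similarity(embed(frame), text_emb).item()
        return similarity, similarity > threshold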


In addition to calculating the nearest CLIP-space neighbors of each frame in a video stream, we can attempt to build a GRAD-CAM-style pixel highlight. GRAD-CAM computes a heatmap of each pixel's relevance for a given class in a class-prediction model. However, I believe a similar technique may be used to compute the relevant pixels for a CLIP-space embedding.
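As a first step in that direction, the sketch below produces a plain input-gradient saliency map rather than true GRAD-CAM: it backpropagates the similarity between the frame's embedding and a reference embedding (for example, a text-query embedding or the long-term scene average) into the input pixels. A real GRAD-CAM variant would instead hook an internal activation of the vision transformer; this simpler version reuses the model and processor loaded above.

    import torch

    def saliency_map(frame, reference_embedding):
        """Gradient of the CLIP-space similarity with respect to the input pixels.
        reference_embedding: a unit-length (1, D) tensor in the same CLIP space."""
        inputs = processor(images=frame, return_tensors="pt")
        pixels = inputs["pixel_values"].clone().requires_grad_(True)
        emb = model.get_image_features(pixel_values=pixels)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        score = (emb * reference_embedding).sum()    # cosine similarity to the reference
        score.backward()
        # Collapse the channel dimension; larger values mark pixels that matter more.
        return pixels.grad.abs().max(dim=1).values.squeeze(0)    # (H, W) heatmap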



2. Proposed approach.

Development will require the following steps:

  1. Developing code that can pull the current frame from a live stream (a minimal frame-grabbing sketch follows this list).

  2. Passing the frame through a CLIP vision model, to generate the image embedding.

  3. Comparing the image embedding to:

    • The prior long-term average embedding, to detect sudden semantic shifts (i.e., something interesting has entered or left the frame)
    • Prior well-known labeled frames, to attempt to classify exactly what is in the image
    • The CLIP-space embedding of a text description, to determine whether the current frame matches a user query
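A minimal sketch of step 1, assuming OpenCV can open the stream source directly; resolving a YouTube live stream to a direct URL (for example with streamlink or yt-dlp) would happen outside this snippet.

    import cv2
    from PIL import Image

    def frames(source, every_n=30):
        """Yield every Nth frame of a stream (camera index, file, or URL) as a PIL image."""
        capture = cv2.VideoCapture(source)
        index = 0
        while True:
            ok, frame_bgr = capture.read()
            if not ok:
                break
            if index % every_n == 0:
                # OpenCV returns BGR; CLIP preprocessing expects RGB.
                yield Image.fromarray(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
            index += 1
        capture.release()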

Additionally, we can take the gradients computed by the model and attempt to produce a GRAD-CAM-style heatmap of the frame (see the saliency sketch in Section 1).



3. Input/Output Data

I have found a few live streams on YouTube that seem relevant:



4. Training data --- where are you going to get it

I will be using pretrained CLIP models from Hugging Face (and the original OpenAI clip package). Initial prototypes currently use CLIPModel.from_pretrained("openai/clip-vit-base-patch32") and clip.load("ViT-B/32", device=device).
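For reference, both loading paths side by side; the second comes from the original OpenAI clip package rather than Hugging Face transformers, and either one is sufficient for the prototype.

    import torch

    # Hugging Face route:
    from transformers import CLIPModel, CLIPProcessor
    hf_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    hf_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Original OpenAI package route:
    import clip
    device = "cuda" if torch.cuda.is_available() else "cpu"
    openai_model, preprocess = clip.load("ViT-B/32", device=device)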



5. Evaluation plan

I will determine whether the semantic filters can successfully identify or label the content in a video. By providing human-labeled clips containing known entities, I can evaluate the accuracy of the system.
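One possible form of that evaluation, assuming a small hand-labeled set of clips and the build_clusters()/classify() helpers sketched in Section 1; a clip's predicted label is the majority vote over its frames.

    from collections import Counter

    def accuracy(labeled_clips, clusters):
        """labeled_clips: list of (frames, true_label) pairs, where frames is a
        list of PIL images. Returns the fraction of clips labeled correctly."""
        correct = 0
        for clip_frames, true_label in labeled_clips:
            votes = Counter(classify(f, clusters) for f in clip_frames)
            predicted = votes.most_common(1)[0][0]
            correct += int(predicted == true_label)
        return correct / len(labeled_clips)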



6. Impact -- if this works completely, who would care and why?

This could be relevant for researchers who collect long streams of video data, helping them filter for and identify clips that are semantically relevant.

Further, the GRAD-CAM-style heatmap could help AI researchers understand how a computer vision model interprets images as a scene evolves over time.