Midterm Prototyping

Part One

Date: October 29th, 2023

Click here for Max patch files 

Continuing with the idea from my last post, I had the following priority tasks ahead of me:

  1. Remove the background from the jit.grab capture
  2. Detect the largest blob and track its size to trigger changes or responses to the viewer’s movement
  3. Increase the pixelation effect as the viewer gets closer

A key observation up until now was that if I change my space and camera, the view is different, so I have to change the presets every time I set up my installation. I anticipate this will be crucial when I set up the project and camera to operate at a larger scale in the media commons space.
At this stage, I am focusing only on the first and most important part of the project: the front screen where the output will be displayed. Part two of the project will document my attempts at the side view, a live capture manipulated with chromakey, scissors and glue, and noise. This is an additional component that I hope to complete by tomorrow.

 

My battle with removing the background

While I thought this would be the relatively easier part, it proved anything but. I first used jit.op with absolute difference, but it did not yield the effect I wanted. One of my classmates then suggested turning off the lights in the room, and that worked very well. I still tried a few more operations until I finally experimented with mixing in another video or image. The result was interesting: while I could not completely remove all the background elements, I did get some cool effects that I think I can play into the theme of the project.
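To spell the idea out away from Max, here is a rough Python/OpenCV sketch of the same absolute-difference step (not the actual patch, just the logic that the absolute-difference jit.op is performing): grab a reference frame of the empty scene, subtract it from every live frame, and keep only the pixels that changed.

```python
import cv2

# Rough sketch of absolute-difference background removal.
cap = cv2.VideoCapture(0)

# One frame of the empty scene becomes the background reference.
ok, background = cap.read()
if not ok:
    raise RuntimeError("Could not read from the camera")
background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pixels that differ from the reference survive the threshold;
    # everything that matches the empty scene goes to black.
    diff = cv2.absdiff(gray, background)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    foreground = cv2.bitwise_and(frame, frame, mask=mask)

    cv2.imshow("foreground", foreground)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The turn-off-the-lights trick makes sense in these terms: with a darker, more stable background, the difference between the reference frame and the live frame is dominated by the person.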

Mixing with another video

Mixing with a static image (but I have to keep pressing the space bar constantly)

Using blob tracking to trigger changes or responses

I did manage to use jit.op to tweak the output of the edge detection and get a much better spread-out-lines effect. I refined the blob detection example a bit to account for a moving human, and I am now working on using the increasing size of the bounding box to trigger some transitions. I have yet to achieve good results, but the trial and error is still ongoing.
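The blob detection itself happens in the patch, but the core step is simple to state: find the connected regions in the foreground, keep the biggest one, and read off its bounding box. A minimal Python/OpenCV sketch of that step, for illustration only (the function name and demo mask are made up):

```python
import cv2
import numpy as np

def largest_blob_bbox(mask):
    # Return (x, y, w, h) of the biggest bright region in a binary mask,
    # or None if nothing is found.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(biggest)

# Tiny demo on a synthetic mask: two blobs, the larger one wins.
mask = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(mask, (20, 20), (60, 60), 255, -1)     # small blob
cv2.rectangle(mask, (100, 80), (220, 220), 255, -1)  # large blob (the "viewer")
print(largest_blob_bbox(mask))  # bounding box of the larger rectangle
```

As the viewer walks toward the camera, that bounding box grows, which is the signal I am trying to turn into transitions.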

Updates:

I was able to use the largest blob detected in the window and used the width and height of its bounding box to track the movement of the viewer. I scaled the height down to between 0 and 1.0 and used if-else statements to trigger different video effects depending on which of the following three ranges the scaled box height fell into: 0-0.5, 0.5-0.75, and 0.75-1.0. The edge detection effect above will appear for the first range (0-0.5), when the person is furthest from the camera.
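Written out as plain code, the if-else logic is just a three-way threshold on the normalized height; the sketch below only restates the same ranges (the effect names are labels for illustration, not objects from the patch):

```python
def effect_for_height(scaled_height):
    # Map the normalized bounding-box height (0.0-1.0) to one of the
    # three stages described above.
    if scaled_height < 0.5:
        return "edge_detection"   # viewer far away
    elif scaled_height < 0.75:
        return "video_mix"        # viewer approaching
    else:
        return "pixelation"       # viewer right at the screen

print(effect_for_height(0.3))   # edge_detection
print(effect_for_height(0.6))   # video_mix
print(effect_for_height(0.9))   # pixelation
```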

When the viewer steps into the middle range (0.5-0.75), the following transition happens. The video mixing here signals to the viewer that, as they move towards the screen, there is some response to their movement.
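The mix is essentially a crossfade between the live capture and a second source. A hedged sketch of that idea, where I assume the blend amount simply follows the viewer's position through the middle range (that mapping is my guess, not necessarily how the patch weights the two sources):

```python
import cv2
import numpy as np

def mix_frames(camera_frame, other_frame, scaled_height):
    # Crossfade the live frame with a second source; the blend amount
    # grows as the viewer moves through the 0.5-0.75 range.
    amount = min(max((scaled_height - 0.5) / 0.25, 0.0), 1.0)
    return cv2.addWeighted(camera_frame, 1.0 - amount, other_frame, amount, 0)

# Dummy frames standing in for the camera feed and the second video.
a = np.zeros((120, 160, 3), dtype=np.uint8)
b = np.full((120, 160, 3), 255, dtype=np.uint8)
print(mix_frames(a, b, 0.6)[0, 0])  # a partially blended pixel
```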

When the viewer steps into the last range (anything above 0.75), the final transition happens. This is when the viewer is closest to the screen. They see themselves in pixelated form, completely absorbed into the digital black hole of data.
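The pixelation itself can be described as shrinking the image down to a coarse grid and blowing it back up without smoothing. A small sketch of that, with a made-up mapping from proximity to block count (the patch may use different values):

```python
import cv2
import numpy as np

def pixelate(frame, scaled_height):
    # Pixelate more aggressively as the viewer gets closer:
    # higher scaled_height -> fewer blocks -> chunkier pixels.
    h, w = frame.shape[:2]
    blocks = max(4, int(64 * (1.0 - scaled_height)))
    small = cv2.resize(frame, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

# Quick check on a dummy frame: output keeps the input size but looks blocky.
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
print(pixelate(frame, 0.9).shape)
```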

The final challenge I am still working on is placing all of these transitions into the same window. Currently, they are all in different jit.pwindows.