Midterm Ideation part 2 and Prototyping

Iteration based on what I can achieve within the timeline

Date: Oct 22, 2023

Previous post: Midterm Ideation Part 1

Next Post: Midterm Prototype Part One

Solidified Idea:

I would describe my piece as:

“Space-time manipulation of our digital memories by a digital black hole. Who are we really, in the densely packed world of binary data?”

Based on the feedback from my first idea (part 1), I realized I needed to prioritize the critical elements of the project I wanted to highlight. For starters, I want there to be a clear indication of where the interaction starts and where it ends. I don’t want my audience to have to turn their heads to see the whole interaction, but rather to walk toward something they can see. The frames to the side should serve as snapshots or trails of the viewer’s journey, only as a distorted memory.

First part to accomplish: Squares on the ground to indicate the viewer is stepping into the event horizon. The screen at the front is the black hole. As the viewer moves forward, the edges of the approaching human figure spread out. At the last step, the figure no longer resembles a human but is represented as white pixels, having been completely disintegrated into the black hole. The openCV example p1 from last class will help with this.
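The disintegration stage above could be built in Jitter, but as a minimal sketch of the idea, here is a Python/NumPy version (my own assumption of the approach, not the actual patch): downsample the viewer's silhouette mask into coarse blocks and light each block white if enough of the figure falls inside it. Increasing the block size as the viewer steps closer produces the "dissolving into pixels" effect.

```python
import numpy as np

def pixelate_mask(mask: np.ndarray, block: int) -> np.ndarray:
    """Reduce a binary silhouette mask to coarse white blocks.

    mask  -- 2D uint8 array, 255 where the figure is, 0 elsewhere
    block -- block size in pixels; larger = more 'disintegrated'
    """
    h, w = mask.shape
    out = np.zeros_like(mask)
    for y in range(0, h, block):
        for x in range(0, w, block):
            cell = mask[y:y + block, x:x + block]
            # light a whole block if enough of the figure falls inside it
            if cell.mean() > 64:
                out[y:y + block, x:x + block] = 255
    return out

# toy silhouette: a filled rectangle standing in for the viewer
frame = np.zeros((64, 64), dtype=np.uint8)
frame[16:48, 24:40] = 255
coarse = pixelate_mask(frame, 8)  # at the last square, use a large block size
```

In the installation, the block size would be driven by the viewer's distance (e.g. which floor square they are standing on) rather than a fixed value.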

Before the interaction starts, the side frames could be in a state of static noise or flashing. 

Second part to accomplish: The frames to the side serve as a trail across time of the viewer’s journey. As the viewer reaches the “black hole” screen, the frames divide to show distorted camera captures of the viewer moving. The example from last class on detecting zones can help with tracking, taking screengrabs, and triggering the necessary sound effects as the viewer moves across the three squares.
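I don't have the class zone-detection example in front of me, but the core technique can be sketched like this (a hedged stand-in, with the zone rectangles and threshold values made up for illustration): define one rectangle per floor square and flag a zone as active when the frame difference inside it exceeds a threshold.

```python
import numpy as np

def active_zones(prev, curr, zones, thresh=10.0):
    """Return indices of zones where the frame changed enough.

    prev, curr -- grayscale frames as 2D uint8 arrays
    zones      -- list of (y0, y1, x0, x1) rectangles (the floor squares)
    thresh     -- mean absolute difference that counts as motion
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return [i for i, (y0, y1, x0, x1) in enumerate(zones)
            if diff[y0:y1, x0:x1].mean() > thresh]

# three side-by-side zones standing in for the three squares
zones = [(0, 60, 0, 40), (0, 60, 40, 80), (0, 60, 80, 120)]
prev = np.zeros((60, 120), dtype=np.uint8)
curr = prev.copy()
curr[10:50, 45:75] = 200  # the "viewer" appears in the middle zone
hits = active_zones(prev, curr, zones)
```

Each activated zone index could then trigger a screengrab and its sound effect.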

I am anticipating this part to be the most difficult, since slicing the frame into three distinct ones will be a challenge. This might be subject to modifications based on the timeline.

 

Layout of devices

I would need two projectors and two cameras in the following layout. The Media Commons would be the best place to set this up.

Things to do:

Get insurance and check out webcams

Get dimensions for whole set up

Book media commons space

First part: Black hole interaction

After some trials (and frustrating failures) I have found the edge detection objects to be better suited for tracing the outline of the viewer. The pixelated effect can be referenced from the openCV tutorial. As for the lines spreading out, I am trying to use the edge-detection output as the shape to be manipulated, following OpenGL references.

Experimenting with edge detection filters.

The final effect the viewer will see as they stop in front of the black hole screen

The effects I want after the edge detection, between the start and end of the interaction. Reference link

Working towards a prototype of the first part

This was a decisive point in my project where I realized how much I could realistically accomplish within the timeline. The main idea of the whole project revolves around the concept of black holes and their bending of space-time, something our minds cannot comprehend. Just as time and our existence in 3D space are theorized to be limits on how our minds perceive reality, I want my black hole here to stand as a representation of our records in the digital space. That is why at the end of the walkway, on the third square, the viewer will see themselves as bits stored in the densely packed world of data.

The second part serves as a way to show the distortion of reality in the digital space and near a black hole. It serves as a way to show the viewer their past in the present but with effects that make them question their memory. Were there other people around me? Did my environment look like what I remember? It supplements the idea of past records or digital memories being subject to manipulation very much like our minds.

To achieve this effect I started working with the edge detection filters in OpenCV and Jitter. I really liked the bright neon effect of the Sobel filter and decided to use that as the center/source. Next, I tried to achieve the effect of the outlines spreading out from the source. This took hours of experimenting with different objects until finally, I came up with the solution of zooming into an edge filter result and mixing it with the source.

For this I used the jit.rota object and jit.op addition. I definitely realize there is a more optimized way to achieve this effect, but for now I am prioritizing the next few operations:
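Outside of Jitter, the same zoom-and-add recipe can be sketched in Python/OpenCV (my approximation of what jit.rota's zoom plus jit.op's saturating add do, not the actual patch): compute the Sobel magnitude as the neon source, scale up a center crop of it as the "zoomed" layer, and add the two.

```python
import cv2
import numpy as np

def sobel_edges(gray):
    """Sobel gradient magnitude, converted to uint8 (the 'neon' source)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return cv2.convertScaleAbs(cv2.magnitude(gx, gy))

def zoom(img, factor):
    """Crude stand-in for jit.rota's zoom: crop the centre, scale back up."""
    h, w = img.shape
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_NEAREST)

gray = np.zeros((80, 80), dtype=np.uint8)
cv2.rectangle(gray, (20, 20), (60, 60), 255, -1)

edges = sobel_edges(gray)
# saturating add of source + zoomed copy, like jit.op @op + on two matrices
spread = cv2.add(edges, zoom(edges, 1.5))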

1) Transitioning from the current output to the pixelated form. I can either do this with the zone/hot-corners method or use blob tracking. The advantage of blob tracking is that it would also let me start the interaction by detecting a moving blob. As the viewer approaches the camera, the blob expands, and at a certain size, the pixelated transformation occurs.

2) Remove background

3) Explore animation and transition effects with jit.rota

4) Work on the second part: using noise and flashing before the interaction starts. Detecting motion when it starts, capturing three two-second video clips consecutively, and playing them side by side in loops. Each will have effects with chromakey, scissors and glue, and jit.rota.

Progress documentation continued in Midterm prototype part 1 post