Cyberarts motion-triggered video patch

On Saturday, May 2, 2009, Apocalypso Trio and Joe’s Bodydrama troupe performed at Dead Video/Live Video at MassArt, part of the Boston Cyberarts Festival. We were part of an evening of performative video: presentations combining sound, video and movement.

The event took place at MassArt in the Pozen Center. It’s a big room with high ceilings and a shallow stage. After exploring a couple of alternatives, we went with the stage and set up as indicated below –

[Diagram: stage setup in the Pozen Center]

The stage was at least 30′ wide and, with the screen, barely 10′ deep. The screen was huge. The projector was mounted overhead, on rigging suspended approximately 30′ in front of the stage, and I set my infrared camera and laptop directly under it. I used a patch written in Max/MSP and softVNS to control the video projected onto Betty, Joe and Rachel –

[Screenshot: the Apocalypso video patch in Max/MSP]

I start by importing twenty images, sww200–sww219, into a v.buffers object. The buffer, labelled b, is preset to hold 20 RGB 640×480 images. I still have to figure out how to automate this process, but for now I just click on each of the message boxes.
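
For comparison, the batch import is easy to automate in textual code. Here is a minimal Python/OpenCV sketch of the same load, offered as an analogy to the softVNS buffer rather than a drop-in replacement; the .jpg extension and the name buffer_b are my assumptions:

```python
import cv2

# Load the twenty stills sww200 ... sww219 into an in-memory "buffer"
# (a list), resizing each to the 640x480 format the patch expects.
# The .jpg extension is an assumption; the originals may differ.
buffer_b = []
for i in range(200, 220):
    img = cv2.imread(f"sww{i}.jpg")          # BGR, as OpenCV loads it
    if img is None:
        raise FileNotFoundError(f"sww{i}.jpg not found")
    buffer_b.append(cv2.resize(img, (640, 480)))
```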

Next I ‘toggle on’ the v.buffertap object, sending it a loop message, setting the loop start to frame 0 and the end to frame 19, and setting the playback speed to 0.01, slow enough that the buffer outputs a new image about every ten seconds. The v.stillstream object converts the sequence of stills into a continuous video stream.
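
Functionally, v.buffertap here is just a very slow looping index into the buffer, and v.stillstream re-emits whichever still is current at video rate. A hedged Python sketch of that behaviour (this generator is my illustration, not the softVNS implementation):

```python
import time

def stillstream(buffer_b, hold_seconds=10.0, stream_fps=15):
    """Loop over frames 0..19, holding each still for ~10 seconds,
    while re-emitting the current still at video rate (the
    v.buffertap + v.stillstream behaviour, roughly)."""
    start = time.time()
    while True:
        index = int((time.time() - start) / hold_seconds) % len(buffer_b)
        yield buffer_b[index]        # same still repeats ~150 times
        time.sleep(1.0 / stream_fps)
```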

Finally I ‘toggle on’ the v.dig object, setting it to capture 15 frames per second. The captured video from the infrared camera is passed through the v.resize object, which downsizes it to 640×480, and on to the v.motion object.
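
The motion stage boils down to frame differencing on the camera feed. A sketch of the same idea in Python/OpenCV, assuming a camera at index 0 standing in for v.dig and an arbitrary threshold of 25:

```python
import cv2

cap = cv2.VideoCapture(0)                  # infrared camera, ~15 fps
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))  # the v.resize step
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        # Per-pixel difference between successive frames: bright where
        # the dancers moved, black where the scene is still (v.motion).
        motion = cv2.absdiff(gray, prev)
        _, mask = cv2.threshold(motion, 25, 255, cv2.THRESH_BINARY)
    prev = gray
```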

The output of the video buffer and the motion-detection information are combined in the v.attachalpha object. The resulting 32-bit stream is input to the v.composite object, which is set to copy each frame to the output using the alpha information as a mask. The output is set to fade slowly over time using the refresh and diminish settings.
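
In other words, v.attachalpha plus v.composite with refresh and diminish reduce to: accumulate the motion mask into a persistent alpha channel, decay it a little each frame, and use it to reveal the current still. A numpy sketch, with an illustrative decay constant standing in for the patch’s actual refresh/diminish values:

```python
import numpy as np

# Persistent alpha accumulator, same size as the frames, initially black.
alpha = np.zeros((480, 640), dtype=np.float32)

def composite(still_bgr, motion_mask, diminish=0.98):
    """Reveal the still where motion occurred; fade it back toward black.

    still_bgr   : 640x480 image from the buffer (uint8)
    motion_mask : 640x480 binary motion mask, 0 or 255 (uint8)
    diminish    : per-frame decay, my stand-in for the patch's
                  refresh/diminish settings (the value is a guess)
    """
    global alpha
    alpha *= diminish                         # slow fade to black
    alpha = np.maximum(alpha, (motion_mask / 255.0).astype(np.float32))
    out = still_bgr.astype(np.float32) * alpha[..., None]
    return out.astype(np.uint8)
```

Each projected frame would then be composite(current_still, mask), computed at the stream rate.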

The result of all this is that when the dancers move they activate portions of the image, and when they are still the image slowly fades to black. Since the motion detection only reveals the parts of the frame where movement occurs, the viewer sees the image build up, piece by piece, from the dancers’ movements.

Thanks to Greg Kowalski for helping me ‘debug’.
