Archive for July, 2009

Painting #271

July 31, 2009

sww271
painting #271


Painting #270

July 30, 2009

sww270
painting #270

Joe Burgio @ XFest 2009

July 29, 2009

There are only a few videos of Joe. Steve Albert recorded this last February at XFest 2009.

Joe is accompanied by Dave Ross, guitar, and Brandon Downs, bass.

The second part of the video is a set by Stephanie Lak, voice & objects, Matt Plummer, trombone, and Id m Theft Able, voice & objects. The video projected in the background is by Greg Kowalski.

Painting #269

July 29, 2009

sww269
painting #269

Painting #268

July 28, 2009

sww268
painting #268

Painting #267

July 27, 2009

sww267
painting #267

Painting #266

July 26, 2009

sww266
painting #266

Cyberarts motion-triggered video patch

July 25, 2009

On Saturday, May 2, 2009, Apocalypso Trio and Joe’s Bodydrama troupe played Dead Video/Live Video at MassArt, part of the Boston Cyberarts Festival. We were part of an evening of performative video: presentations that combined sound, video and movement.

The event took place in MassArt’s Pozen Center, a big room with high ceilings and a shallow stage. After exploring a couple of alternatives, we went with the stage and set up as indicated below –

pozen

The stage was at least 30′ wide and, with the screen, barely 10′ deep. The screen was huge. An overhead projector was mounted on rigging suspended approximately 30′ in front of the stage. I set up my infrared camera and laptop directly under the projector and used a patch written in Max/MSP and softVNS to control the video projected onto Betty, Joe and Rachel –

apoclypso_patch

I start by importing twenty images, sww200 – sww219, into a v.buffers object. The buffer, labelled b, is preset to hold 20 RGB 640×480 images. I still have to figure out how to automate this process but, for now, I just ‘click’ on each of the message boxes.

Next I ‘toggle on’ the v.buffertap object, sending it a loop message that sets the start to frame 0, the end to frame 19, and the playback speed to 0.01 frames per second. The buffer will output a new image every 10 seconds. The v.stillstream object converts the still images into a continuous video stream.
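In ordinary code terms, the loop playback amounts to mapping elapsed time to a looping buffer index. A minimal Python sketch of that idea (v.buffertap’s actual units and internals are softVNS-specific, so this is only an analogy, and `frame_index` is an invented name):

```python
def frame_index(elapsed_seconds, fps, start=0, end=19):
    """Return which buffered still should be showing after
    elapsed_seconds of looped playback at fps frames per second.

    A rough stand-in for v.buffertap looping frames start..end.
    """
    length = end - start + 1
    return start + int(elapsed_seconds * fps) % length
```

For example, at 0.1 frames per second a new image appears every 10 seconds, and after all 20 stills the loop wraps back to frame 0.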

Finally I ‘toggle on’ the v.dig object, setting it to capture 15 frames per second. The captured video from the infrared camera is passed through the v.resize object, which downsizes it to 640×480, and on to the v.motion object.
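The motion detection itself can be approximated in a few lines of NumPy: difference consecutive frames and threshold the result to get a binary mask. v.motion’s real algorithm and parameters are softVNS internals, so treat this as a sketch under assumed names:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Binary mask of pixels that changed between two grayscale frames.

    A rough stand-in for what v.motion derives from the infrared feed:
    1 where there was movement, 0 where the scene was still.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Example: one pixel "moves" between two otherwise identical frames.
a = np.zeros((480, 640), dtype=np.uint8)
b = a.copy()
b[100, 200] = 255
mask = motion_mask(a, b)
```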

The output of the video buffer and the motion-detection information are combined in the v.attachalpha object. The resulting 32-bit stream is input to the v.composite object, which is set to copy each frame to the output using the alpha information as a mask. The output is set to fade slowly over time using the refresh and diminish settings.
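This compositing stage behaves like a decaying accumulator: wherever the mask is on, pixels from the current still are copied in; everywhere else, the accumulated image fades toward black. A hedged NumPy sketch (the `diminish` factor here is an invented stand-in for v.composite’s refresh/diminish settings, not its actual parameterization):

```python
import numpy as np

def composite_step(accum, still, mask, diminish=0.95):
    """One output frame of the patch's compositing stage.

    accum: the on-screen image so far (float array, H x W x 3)
    still: current image from the buffer (float array, H x W x 3)
    mask:  motion mask (H x W, 1 where the dancers moved)
    """
    accum = accum * diminish                 # everything fades slowly toward black
    m = mask[..., None].astype(bool)         # broadcast the mask over RGB channels
    return np.where(m, still, accum)         # motion reveals the current still

# Example: one moving pixel reveals the still; the rest keeps fading.
accum = np.full((2, 2, 3), 100.0)
still = np.full((2, 2, 3), 200.0)
mask = np.zeros((2, 2), dtype=np.uint8)
mask[0, 0] = 1
out = composite_step(accum, still, mask)
```

Run once per captured frame, this gives exactly the build-up-and-fade behavior described below.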

The result of all this is that when the dancers move they activate portions of the image, and when they are still the image slowly fades to black. Since the motion detection only reveals the portions of the image where there is movement, the viewer sees each image build up frame by frame, following the dancers’ movements.

Thanks to Greg Kowalski for helping me ‘debug’.

Painting #265

July 25, 2009

sww265
painting #265

Painting #264

July 24, 2009

sww264
painting #264