Archive for the ‘softVNS’ Category

New Video Patch

October 14, 2009

Latest images from the mixer –

image13  image14

image15  image16

image17  image18

Check out the image above, painting #347. It's all about layers: mixing loops frame by frame over time. Individual frames don't reflect the real-time output of the mixer. Next time, a short video!

Looks like the Paik/Abe

September 16, 2009

I made a dozen different kinds of video loops to use with the new mixer patch. I have short loops, long loops, slo-mo and sped-up loops. I have one-shot loops, forward and back loops and 'bowties.' In the process I turned some of them b&w, and posterized and solarized others.

Here are the results –

image07  image08

image09  image10

image11  image12

I thought, “These images look like they were made on the Paik/Abe!” What’s a Paik/Abe?

The Paik/Abe was the first video synthesizer, built at WGBH in Boston in 1969 by Nam June Paik and Shuya Abe. A year or so later Paik and Abe built another synthesizer at the Experimental Television Center in Binghamton, NY. That was the machine I used at the ETC in 1971. It had 2 major components – a magnetic scan modulator and a colorizer. The colorizer had 7 b&w video inputs. Each input could be toggled positive/negative and passed through a non-linear amplifier that, at high gain, 'solarized' the image. The signals were then mixed together and the result fed to a standard color encoder. The result was a layered image in which the original b&w signals were subtly washed with color.
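
To make that signal path concrete, here is a rough sketch of the colorizer in Python/NumPy. The folded transfer curve, the tint weights and the equal-weight mix are my own guesses at the behaviour, not the actual circuit.

```python
import numpy as np

def paik_abe_colorizer(channels, polarity, gains, tints):
    """channels: 2-D float arrays in 0..1, one per b&w input.
    polarity: +1 or -1 per channel (the positive/negative toggle).
    gains: amplifier gain per channel; high gain folds the signal ('solarizes').
    tints: (r, g, b) weights per channel for the color wash."""
    height, width = channels[0].shape
    rgb = np.zeros((height, width, 3))
    for chan, pol, gain, tint in zip(channels, polarity, gains, tints):
        x = chan if pol > 0 else 1.0 - chan           # positive/negative toggle
        y = 0.5 - 0.5 * np.cos(np.pi * gain * x)      # non-linear amp; folds at high gain
        rgb += y[..., None] * np.asarray(tint)        # wash this channel with its tint
    return np.clip(rgb / len(channels), 0.0, 1.0)     # mix and 'encode' as one color frame

# Three stand-in b&w frames: one inverted, one driven hard enough to solarize.
frames = [np.random.rand(480, 640) for _ in range(3)]
color = paik_abe_colorizer(frames,
                           polarity=[+1, -1, +1],
                           gains=[1.0, 1.0, 3.0],
                           tints=[(1.0, 0.6, 0.5), (0.5, 1.0, 0.6), (0.6, 0.5, 1.0)])
```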

Could these images be created on the Paik/Abe? I think not. However, after looking at them for a couple of days I know why I thought of the Paik/Abe. First, the video loops that were input to the mixer were solarized. I can't remember another video processing system that does this; it's unique to the Paik/Abe. Second, the video inputs were mixed or blended without keying. Other video systems had keyers built in. The Jones Colorizer had clipping controls that cut out parts of the image. The video output of the Paik/Abe was like a watercolor painting, and the output of the Jones Colorizer was more like a collage.

New video patch

August 29, 2009

The Video Shredder went belly-up earlier this year. The hard drive died. Yes, I have a backup but … time to move on. I’m working on a new patch using Max/MSP and softVNS –

mixer patch

I’m mixing QuickTime movies with the v.composite object, the long box near the bottom of the window. Three v.movie objects feed the mixer. Each movie is controlled by an array of ‘transport’ messages and a speed control. Each mixer channel has a simple level control. The output of the mixer goes to v.window.
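
For readers who don't have softVNS, the core of the mixer boils down to a per-channel level and a weighted sum of the current frames, something like this NumPy sketch. The object names above are real; the Python is just an illustration, and the speed control would simply change how fast each channel's frame index advances.

```python
import numpy as np

def mix_frame(frames, levels):
    """frames: HxWx3 float arrays in 0..1, one per movie channel.
    levels: the level control on each mixer channel."""
    out = np.zeros_like(frames[0])
    for frame, level in zip(frames, levels):
        out += level * frame               # layer each channel at its own level
    return np.clip(out, 0.0, 1.0)          # the mixed frame sent to the output window

# Three stand-in movie frames, middle channel dominant.
movies = [np.random.rand(480, 640, 3) for _ in range(3)]
mixed = mix_frame(movies, levels=[0.3, 0.8, 0.4])
```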

Eventually I will add the Korg nanoKONTROL, but at the moment I'm working on editing video loops.

The Video Shredder used a 48-frame internal loop. Video input could be added to the loop in real time using a variety of ‘effects’. This time, instead of a single video input, I’m layering prerecorded loops. It’s a different process. It’s more like ‘mixing’ audio samples in real time.
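
A toy version of that older idea, just to show the structure: a 48-frame ring buffer that each incoming live frame gets folded into. The crossfade below stands in for the Shredder's 'effects', which I'm not trying to reproduce here.

```python
import numpy as np

LOOP_LEN = 48                               # the Shredder's internal loop length
loop = np.zeros((LOOP_LEN, 480, 640, 3))    # the loop itself
position = 0                                # current frame in the loop

def shred(live_frame, amount=0.25):
    """Fold the live frame into the loop at the current position,
    return the frame that plays back, and advance the loop."""
    global position
    loop[position] = (1.0 - amount) * loop[position] + amount * live_frame
    out = loop[position]
    position = (position + 1) % LOOP_LEN
    return out

# Feed a few stand-in camera frames through the loop.
for _ in range(96):
    frame = shred(np.random.rand(480, 640, 3))
```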

Here are some preliminary results –

image01  image02

image03  image04

image05  image06

Cyberarts motion-triggered video patch

July 25, 2009

On Saturday, May 2, 2009, Apocalypso Trio and Joe’s Bodydrama troupe played at Dead Video/Live Video at MassArt, part of the Boston Cyberarts Festival. We were part of an evening of performative video: presentations that included sound, video and movement.

The event took place at MassArt in the Pozen Center. It’s a big room with high ceilings and a shallow stage. After exploring a couple of alternatives we went with the stage and set up as indicated below –

pozen

The stage was at least 30′ wide and, with the screen, barely 10′ deep. The screen was huge. A projector was mounted overhead on rigging suspended approximately 30′ in front of the stage. I set up my infrared camera and my laptop directly under the projector. I used a patch written in Max/MSP and softVNS to control the video projected onto Betty, Joe and Rachel –

apoclypso_patch

I start by importing twenty images, sww200 – sww219, into a v.buffers object. The buffer, labelled b, is preset to hold 20 RGB 640×480 images. I still have to figure out how to automate this process, but for now I just ‘click’ on each of the message boxes.

Next I ‘toggle on’ the v.buffertap object, sending it a loop message, setting the start to frame 0 and the end to frame 19, and setting the playback speed to 0.01 frames per second. The buffer will output a new image every 10 seconds. The v.stillstream object converts the still images into a continuous video stream.
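
Outside of softVNS, this buffer-and-tap stage amounts to holding the twenty stills in memory and crawling through them slowly, repeating the current one as a steady stream of frames. A sketch, with the hold time and frame rate as placeholder values rather than the patch's exact settings:

```python
import itertools
import numpy as np

# Stand-ins for the twenty 640x480 stills (sww200 - sww219).
stills = [np.random.rand(480, 640, 3) for _ in range(20)]

def still_stream(stills, seconds_per_still=10.0, fps=30):
    """Yield video frames: hold each still for `seconds_per_still`,
    then advance to the next one, looping from frame 19 back to frame 0."""
    hold = int(seconds_per_still * fps)
    for index in itertools.cycle(range(len(stills))):
        for _ in range(hold):
            yield stills[index]

stream = still_stream(stills)
current_frame = next(stream)
```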

Finally I ‘toggle on’ the v.dig object, setting it to capture 15 frames per second. The captured video from the infrared camera passes through the v.resize object, which downsizes it to 640×480, and on to the v.motion object.
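
I won't claim this is how v.motion computes its output, but simple frame differencing gives a comparable motion mask and is enough to follow the rest of the patch:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=0.08):
    """prev_frame, curr_frame: HxWx3 floats in 0..1 from the camera.
    Returns an HxW mask that is 1 where the picture changed, 0 where it is still."""
    change = np.abs(curr_frame - prev_frame).mean(axis=2)   # per-pixel change
    return (change > threshold).astype(float)

previous = np.random.rand(480, 640, 3)
current = np.random.rand(480, 640, 3)
alpha = motion_mask(previous, current)
```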

The output of the video buffer and the motion-detection information are combined in the v.attachalpha object. The resulting 32-bit stream is fed to the v.composite object, which is set to copy the frame to the output using the alpha information as a mask. The output is set to fade slowly over time using the refresh and diminish settings.
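
Put together, the compositing step works roughly like this: wherever the motion mask is on, the buffered still is copied into the output; everywhere else the output decays toward black. The decay factor below is my stand-in for the refresh and diminish settings, not their actual units.

```python
import numpy as np

output = np.zeros((480, 640, 3))    # the frame that gets projected

def composite(still, alpha, decay=0.98):
    """still: HxWx3 image from the buffer. alpha: HxW motion mask in 0..1."""
    global output
    output *= decay                                   # slow fade toward black
    mask = alpha[..., None]                           # broadcast the mask over RGB
    output = mask * still + (1.0 - mask) * output     # reveal the still where there is motion
    return output

# One tick with a stand-in still and a sparse motion mask.
projected = composite(np.random.rand(480, 640, 3),
                      (np.random.rand(480, 640) > 0.95).astype(float))
```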

The result of all this is that when the dancers move they activate portions of the image, and when they are still the image slowly fades to black. Since the motion detection only reveals the parts of the image where there is movement, the viewer sees the image build up frame by frame, following the dancers’ movements.

Thanks to Greg Kowalski for helping me ‘debug’.