Modular Video Synthesizer


About

The modular video synthesizer is a project being developed by Steve Wilson with the support of CRATEL. The prototype won third place in CRATEL's 2006 BETA competition and was the only project chosen for further development. The synthesizer is currently under development by a group of students from the College of Engineering, led by Steve Wilson. The goal is to expand the software used to manipulate live video, build a stand-alone unit, and develop an expressive interface that will allow an artist to perform on the video synthesizer as they would a traditional acoustic instrument.

What is it?

The modular video synthesizer is an attempt to explore ways of using technology to enhance artistic expressivity. While there is much fine research in the arts and in technology independently, there is a lack of truly interdisciplinary work examining how these two activities can enhance each other.

The video synthesizer functions much like a traditional audio synthesizer, except the sound-generating electronic oscillators have been replaced with live video feeds. An audio synthesizer begins with some sort of input, usually from a standard wave oscillator, and routes it through various signal processors that sequentially modify the sound before sending the result to an output such as a pair of stereo speakers. This type of device is familiar to anyone who has listened to popular music since the mid-seventies. My video synthesizer follows similar principles. A live video feed is sent through a series of digital signal processors set to modulate the video signal in a number of interesting, predetermined ways, with the resulting video output sent to a monitor for viewing. The signal processors encode various forms of video modulation ranging from color filters to sophisticated motion-tracking effects.
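
To make the chain idea concrete, here is a minimal sketch in Python with NumPy of how frames might flow through such a chain. The processor names (red_boost, invert) and the list-of-functions design are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def red_boost(frame, amount=1.4):
    """Color-filter stage: scale the red channel, clipping to the valid range."""
    out = frame.astype(np.float32)
    out[..., 0] *= amount
    return np.clip(out, 0, 255).astype(np.uint8)

def invert(frame):
    """Invert all channels, a simple video 'waveshaper'."""
    return 255 - frame

def process(frame, chain):
    """Route one frame through the signal chain, one processor at a time."""
    for stage in chain:
        frame = stage(frame)
    return frame

# A signal chain is just an ordered list of processors.
chain = [red_boost, invert]
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a live camera frame
result = process(frame, chain)
```

Because each stage has the same frame-in, frame-out signature, reordering the chain is just reordering the list, which is the property the modular design depends on.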

Currently, the video synthesizer functions as described above, but it runs on a PC. Future developments will include designing a hardware package that allows the synthesizer to function on its own, the addition of more original video effects, a modular design allowing flexible signal routing, and an expressive interface.

Although the proposed video synthesizer would also be computer-based, an important distinction sets it apart. A traditional computer handles many functions and becomes the center of every video artist’s setup, whereas my idea would use a small built-in computer responsible only for handling video. A Mini-ITX board would be perfect for this application. Freed from a multitude of other tasks, this small, dedicated computer would avoid many problems facing general-purpose machines. This also allows the video tools, and not the computer itself, to become the center of attention.

The video synthesizer would consist of a number of FireWire camera inputs; video outputs (FireWire, S-Video, RCA, VGA, and DV); a small LCD display so the artist can monitor the output; a FireWire connection so that a portable hard drive could be used to capture the live performance; and a number of video effects and controllers that the signal is routed through. The effects would come from a number of sources. In addition to the usual array of filters available in applications such as Adobe Photoshop, the video synthesizer would include effects from Kentaro Fukuchi’s EffecTV, compositing tools, color correction, spatial warping, convolution-based filters, and some original effects based on the application of traditional audio effects (such as chorusing, phasing, and flanging) to video. Finally, the video feeds would be sent to four independent sets of controllers manipulating brightness, contrast, window placement, window size, and speed of motion.
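
As a rough illustration of how an audio effect like flanging might transfer to video, the sketch below (Python with NumPy; the class name and parameters are hypothetical, not the project's code) keeps recent frames in a ring buffer and blends the live frame with a copy whose delay time sweeps up and down, the video analogue of a flanger's swept delay line.

```python
import numpy as np
from collections import deque

class VideoFlanger:
    """Audio-style flanger applied to video: blend the live frame with a
    copy delayed by a slowly oscillating number of frames."""

    def __init__(self, max_delay=30, rate=0.05, depth=0.5):
        self.buffer = deque(maxlen=max_delay)  # ring buffer of recent frames
        self.max_delay = max_delay
        self.rate = rate    # sweep speed of the delay time
        self.depth = depth  # wet/dry mix, 0 = dry only
        self.t = 0

    def process(self, frame):
        self.buffer.append(frame)
        self.t += 1
        # The delay sweeps sinusoidally between 1 frame and max_delay frames.
        sweep = 0.5 * (1 + np.sin(self.rate * self.t))
        delay = 1 + int((self.max_delay - 1) * sweep)
        delay = min(delay, len(self.buffer))
        delayed = self.buffer[-delay]
        # Crossfade the live and delayed frames.
        mixed = ((1 - self.depth) * frame.astype(np.float32)
                 + self.depth * delayed.astype(np.float32))
        return mixed.astype(np.uint8)
```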

A very important element of the video synthesizer would be the signal routing. A modular video synthesizer, much like the Moog Modular, would allow the user to control the signal path. The unit would consist of a patch bay and a number of knobs, sliders, and buttons that control the video effect parameters. (See diagram) The unit would begin with a row of four video outputs, each corresponding to a FireWire camera. Those outputs would connect to the inputs of various digital signal processors (DSPs), processing the video in any order desired. This would allow the user to find interesting signal paths and create original video effects. Each DSP unit would feature four inputs and four outputs as well as controls that change how the unit responds. Each camera would have a set of controls for adjusting screen position, window size, brightness, and contrast. Finally, there would be a cross fader with connections for all four cameras so that any two may be combined. At this point, it seems logical to place the cross fader at the end of the signal chain, but further testing may show it useful to incorporate it earlier for cross fading between effects.
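
One plausible way to model the patch bay in software is as a table of cables between module outlets and inlets. The sketch below (Python; all names are hypothetical, and a real implementation would need to sort the signal graph topologically) mirrors the four-in, four-out DSP units and the cross fader described above.

```python
import numpy as np

class Module:
    """A DSP unit with four inlets and four outlets, as in the proposed hardware."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn               # maps a list of 4 input frames to 4 output frames
        self.inputs = [None] * 4

class Patch:
    """The patch bay: a list of cables from (module, outlet) to (module, inlet)."""
    def __init__(self):
        self.cables = []

    def connect(self, src, outlet, dst, inlet):
        self.cables.append((src, outlet, dst, inlet))

    def render(self, modules):
        """Run each module in patch order and push its frames down the cables."""
        for m in modules:
            outs = m.fn(m.inputs)
            for src, outlet, dst, inlet in self.cables:
                if src is m:
                    dst.inputs[inlet] = outs[outlet]

def crossfade(a, b, position):
    """The cross fader: blend any two camera signals, position in [0, 1]."""
    return ((1 - position) * a.astype(np.float32)
            + position * b.astype(np.float32)).astype(np.uint8)
```

Re-patching is then just editing the cable list, which is what would let the user discover original signal paths without rewiring anything by hand.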

Current Development

Check here for up-to-date files, info, and progress.


Prototype Info

Hardware

The prototype was developed on a 2.7 GHz 64-bit AMD Sempron with 1 GB of RAM. The live video is captured by four Logitech QuickCam 4000 Pro webcams connected via USB 2.0. Webcams were used because they are very compact, inexpensive, and provide good overall video quality. I have detailed control over the various software parameters with an M-Audio UC-33e, a MIDI control surface with 47 assignable knobs, sliders, and buttons.
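
Mapping the UC-33e's controls onto synthesizer parameters amounts to a small dispatch table from MIDI control-change numbers to parameter slots. The sketch below is only a guess at how such a mapping could look in Python; the CC numbers and parameter names are hypothetical (the UC-33e's assignments are user-configurable), and a library such as mido could supply the incoming messages.

```python
# Map control-change numbers on the control surface to synthesizer parameters.
# These CC assignments are hypothetical examples, not the prototype's actual map.
CC_MAP = {
    7:  ("camera1", "brightness"),
    8:  ("camera1", "contrast"),
    9:  ("camera1", "window_size"),
    10: ("crossfader", "position"),
}

params = {}  # current parameter state, e.g. params[("camera1", "brightness")]

def handle_cc(cc_number, cc_value):
    """Translate a 7-bit MIDI value (0-127) into a 0.0-1.0 parameter."""
    target = CC_MAP.get(cc_number)
    if target is not None:
        params[target] = cc_value / 127.0

# With a library such as mido, incoming messages could be fed in like:
#   for msg in mido.open_input():
#       if msg.type == "control_change":
#           handle_cc(msg.control, msg.value)
```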

Software

My video synthesizer is modeled primarily using software components created in Pure Data (Pd), an open-source graphical programming environment developed by Miller Puckette at the University of California, San Diego. Pd provides a rich toolset for expressing almost any creative concept the designer can conceive in the area of signal modulation. Pd is primarily an audio application, but a number of external libraries enable the creation of control devices and filters for manipulating video, such as Pure Data Packet (PDP), Graphics Environment for Multimedia (GEM), and "PiDiP is Definitely in Pieces" (PiDiP). Pd uses free, GNU-licensed C code as well as an already available packet protocol for communicating between modules. It runs on all major platforms, but I found Linux to be the most processor-efficient and the most stable. I developed my software using Ubuntu (Dapper Drake), a Debian-based distribution.
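
PDP's central abstraction is the packet: a frame of media handed from object to object, much like an audio block in a signal chain. The Python sketch below is only a conceptual analogue under that assumption (PDP itself is reference-counted C code, and the module shown is invented for illustration).

```python
import numpy as np

class Packet:
    """Conceptual analogue of a PDP packet: one frame of video plus its
    geometry, passed by reference between modules."""
    def __init__(self, frame):
        self.frame = frame
        self.height, self.width = frame.shape[:2]

def to_grey(packet):
    """Module in the spirit of PDP's greyscale conversion: emit a new
    greyscale packet, leaving the input packet untouched."""
    luma = packet.frame.mean(axis=2).astype(np.uint8)
    return Packet(np.stack([luma] * 3, axis=2))

source = Packet(np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8))
sink = to_grey(source)
```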

Performances

The prototype was used in live performances during the spring 2006 semester. These performances featured the Fairmount String Quartet, WSU's faculty string ensemble, playing Béla Bartók's String Quartet No. 5, movement 1.

Future plans include a collaboration with College of Fine Arts Dean Rodney Miller for his 2007 recital. Dean Miller will perform Franz Schubert's Winterreise, Op. 89, a song cycle on poems by Wilhelm Müller. He will be joined by Steve Wilson performing a live video interpretation.

Screenshots from the prototype

WSU performance

Setup

