Holofunk is an open-source, audiovisual, gestural live looper. With no buttons, no remotes, nothing in your hands at all, you can create and conduct a whole ensemble made entirely of yourself.
Holofunk lets you spontaneously and improvisationally create music that you can directly see and manipulate as you create it. It feels something like fingerpainting with sound. It also supports two players, fully ambidextrous two-handed control, and two different viewpoints (performer and audience).
Here I am presenting it at the Seattle Mini Maker Faire in 2015. If you watch only one Holofunk video, this should be it:
And here is a bit of my friend Dane and me using it together, two-player style, at the Jigsaw Renaissance makerspace in Seattle in early July 2014:
Holofunk is an x64 C# application, built with the following software stack:
- Windows 8
- .NET 4.5
- New Kinect for Windows SDK
- BASS audio library
- SharpDX and the SharpDX toolkit
- ASIO audio driver
The source is on GitHub:
I am running the following hardware, in its current incarnation:
Also a Dell touchscreen monitor, a tripod monitor mount, and a clamp to stick the Kinect on top of the monitor.
Much of this hardware is not necessary to run it; a decent Win8 PC (or laptop), the new Kinect, and a USB audio interface are the only must-have items. It also works fine with a wired microphone.
My name is Rob Jellinghaus. By day I work for Microsoft. Holofunk is my moonlighting project; Microsoft has an explicit moonlighting policy allowing personal projects such as this, for which I’m genuinely grateful. (All Microsoft software components in Holofunk are publicly available.)
I have done a fair amount of a cappella singing, which I’ve always loved, especially harmony. And I’ve also been involved in the rave scene for decades now. But live singing and techno all too seldom came together; my two loves were disunited.
Then I encountered a brilliant UK musician, Beardyman. He recorded himself live, then played himself back, over and over, chopping up the sound in a million ways. This video in particular blew me away when I first saw it four years ago:
There, his entire performance is live, but all the video is (very artfully and painstakingly) cut up and edited after the fact. The concept is so immediately understandable that I started wondering whether it could be done entirely live.
I’ve heard Beardyman say while performing that it’s hard for people to believe he’s doing it all live, because — unlike that video — in his live sets, one can’t see the multiple overlapping loops, but only hear them.
Holofunk tries to fix that, by making each sound into something you can both see and touch. I wanted to make complex music using something that had no buttons, nothing that you had to hold still over and peer down at. Holofunk comes closer to this vision than anything else in the world that I presently know of.
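The core looping idea described above can be sketched in a few lines. This is purely illustrative, not Holofunk's actual code (Holofunk is a C# application built on the BASS audio library); the `Loop` and `mix` names here are invented for the example. Each recorded loop repeats forever, and overlapping loops are simply summed:

```python
# Illustrative sketch of live looping: each loop is a recorded buffer that
# repeats indefinitely, and the output at any time is the sum of all loops.
# (Names and structure are invented for this example, not taken from Holofunk.)

class Loop:
    def __init__(self, samples):
        self.samples = samples  # recorded audio, one float per sample

    def sample_at(self, t):
        # A loop wraps around: the play position is t modulo the loop length.
        return self.samples[t % len(self.samples)]

def mix(loops, t):
    # Overlapping loops layer by simple summation, like stacked voices.
    return sum(loop.sample_at(t) for loop in loops)

# Two "recordings" of different lengths layered together:
beat = Loop([1.0, 0.0, 0.0, 0.0])               # 4-sample loop
melody = Loop([0.0, 0.5, 0.0, 0.5, 0.0, 0.5])   # 6-sample loop
output = [mix([beat, melody], t) for t in range(12)]
```

Because the two loops have different lengths, their combined pattern only repeats every 12 samples (the least common multiple), which is exactly the kind of overlapping structure that is easy to hear but hard to see in a purely audio performance.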
At this writing it is September 2015. I started working on Holofunk over four years ago. In fact, in September 2011, I demonstrated the very first working version — which at the time used the first Kinect, a Wiimote over Bluetooth, and a wired microphone — to Beardyman himself in Vancouver:
Since that time I have added support for multiple players and multiple monitors / viewpoints, increased the resolution, added sound effect support, ditched the Wiimote in favor of the new Kinect’s awesome hand pose detection, moved from hand-held microphones to wireless headsets, built the current performance rig, upgraded to x64 support, and shaken the bugs out of VST plugin support.
I thought I was ready for alpha testers, but it turns out the current technology stack may be too fragile even for me to keep working stably. The future is further than you think. Still, I am keeping this page intact to describe the state of Holofunk at its original high point.