Holofunk is an open-source, audiovisual, gestural live looper. With no buttons, no remotes, nothing in your hands at all, you can create and conduct a whole ensemble made entirely of yourself.
Holofunk lets you spontaneously and improvisationally create music that you can directly see and manipulate as you create it. It feels something like fingerpainting with sound. It also supports two players, two-handed fully ambidextrous control, and two different viewpoints (performer and audience).
Here I am presenting it at the Seattle Mini Maker Faire in 2015. If you watch only one Holofunk video, this should be it:
And here is a bit of my friend Dane and I using it together, two-player style, at the Jigsaw Renaissance makerspace in Seattle in early July 2014:
I am currently in the middle of rewriting it to support HoloLens and networked on-stage performance.
Holofunk’s 2015 incarnation was an x64 C# application, built with the following software stack:
- Windows 8
- .NET 4.5
- New Kinect for Windows SDK
- BASS audio library
- SharpDX and the SharpDX toolkit
- ASIO audio driver
The source is on GitHub:
I was running the following hardware:
Also a Dell touchscreen monitor, a tripod monitor mount, and a clamp to stick the Kinect on top of the monitor.
Much of this hardware was not necessary to run it; a decent Win8 PC (or laptop), the new Kinect, and a USB audio interface were the only must-have items.
My name is Rob Jellinghaus. By day I work for Microsoft. Holofunk is my moonlighting project; Microsoft has an explicit moonlighting policy allowing personal projects such as this, for which I’m genuinely grateful. (All Microsoft software components in Holofunk are publicly available.)
I have done a fair amount of a cappella singing, which I’ve always loved, especially harmony. And I’ve also been involved in the rave scene for decades now. But live singing and techno all too seldom came together; my two loves were disunited.
Then I encountered a brilliant UK musician, Beardyman. He recorded himself live, then played himself back, over and over, chopping up the sound in a million ways. This video in particular blew me away when I first saw it four years ago:
There, his entire performance is live, but all the video is (very artfully and painstakingly) cut up and edited after the fact. The concept is so immediately understandable that I started wondering whether it could be done entirely live.
I’ve heard Beardyman say while performing that it’s hard for people to believe he’s doing it all live, because — unlike that video — in his live sets, one can’t see the multiple overlapping loops, but only hear them.
Holofunk tries to fix that by making each sound into something you can both see and touch. I wanted to make complex music using something that had no buttons, nothing that you had to hold still over and peer down at. Holofunk comes closer to this vision than anything else I presently know of.
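The core live-looping idea underneath all of this — record a phrase, then play it back in a repeating cycle layered under whatever you record next — can be sketched in a few lines. This is a toy illustration, not Holofunk's actual audio engine (which is C# on top of BASS and ASIO); the class and method names here are invented for clarity.

```python
# Toy sketch of live looping: each recorded phrase becomes a fixed-length
# buffer that repeats forever, and the output is the sum of all loops.
# (Illustrative only; names like Loop/Looper are invented, not Holofunk's.)

class Loop:
    def __init__(self, samples):
        self.samples = samples   # one full recorded cycle of audio
        self.pos = 0             # playback cursor, wraps around

    def next_sample(self):
        s = self.samples[self.pos]
        self.pos = (self.pos + 1) % len(self.samples)
        return s

class Looper:
    def __init__(self):
        self.loops = []

    def add_loop(self, samples):
        # "close" a recorded phrase into a repeating loop
        self.loops.append(Loop(samples))

    def mix_next(self):
        # mix all active loops; a real engine would also limit/clip
        return sum(loop.next_sample() for loop in self.loops)

looper = Looper()
looper.add_loop([0.1, 0.2, 0.3, 0.4])  # a 4-sample "phrase"
looper.add_loop([0.0, 0.1])            # a shorter loop layered on top
out = [looper.mix_next() for _ in range(4)]
# each output sample is the sum of both loops at that point in their cycles
```

The shorter loop wraps twice while the longer one plays once, which is exactly the overlapping-loops effect one hears (but cannot see) in a live set.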
At this writing it is December 2017. I started working on Holofunk over six years ago. In fact, in September 2011, I demonstrated the very first working version — which at the time used the first Kinect, a Wiimote over Bluetooth, and a wired microphone — to Beardyman himself in Vancouver:
Between 2011 and 2015 I added support for multiple players and multiple monitors / viewpoints, increased the resolution, added sound effect support, ditched the Wiimote in favor of the new Kinect's awesome hand pose detection, moved from hand-held microphones to wireless headsets, built the current performance rig, upgraded to x64, and shook the bugs out of VST plugin support.
I thought I was ready for alpha testers in 2016, but it turned out the technology stack at the time was too fragile for even me to use reliably. I summed up the end-of-2016 situation here: The future is further than you think.
Still, I am continuing to work on it; here is the end-of-2017 progress report. Stay tuned: the mixed reality future is closer than ever….