(Matthew Ragan, Media Maker and Interactive System Designer)

Recently I was a part of the thesis project of Daniel Fine and Adam Vachon, both of whom are about to graduate from Arizona State University. Dan’s MFA is in Interdisciplinary Digital Media and Performance, and Adam’s MFA is in Performance Design with a concentration in Lighting Design. As they approached the final year of their respective programs, they wanted to tackle a thesis project that both had a large scope and pushed them past their boundaries as designers and practitioners. One of the central ideas Dan had been exploring during his time at ASU was working inside immersive projection environments, and in this regard he was especially interested in working inside domes. It was partially out of this interest that the idea of Wonder Dome was born.

Wonder Dome is an attempt to rethink our ideas about the black box theatre, looking at how we might create an environment that’s immersive, responsive, flexible, and extensible. What does it mean to create an immersive projection environment with sensor systems embedded in the design of the space, with a focus on live performance and interactivity? This is one of the central research questions driving the experience and design of Wonder Dome.

The Team

One of the central questions for a project like this is, “who do you recruit to work on it?” Dan started in the Spring of 2013 by gauging interest and asking interested artists to commit to a project slated for the following Spring. The central design / programming team was Daniel Fine (Direction, Media Design), Adam Vachon (Lighting Design), Alex Oliszewski (Media Design), Matthew Ragan (Media Design, Programmer), and Stephen Christiansen (Sound Design). The team quickly grew beyond this central group to include scenic design, costume design, production management, and so much more. From the beginning, Wonder Dome was a labor of love and a team effort at all times.

Before trying to reinvent the wheel alone, Dan and Adam began by reaching out to other companies and practitioners who were already doing dome work. Most notably, the project started working with Vortex Immersion. Vortex has a long history of dome work, and partnering with them seemed like a fairly straightforward choice. In early January the team visited Vortex and had a chance both to see their dome setup and to talk with their lead programmer, Jeff Smith. Vortex primarily works with Derivative’s TouchDesigner as a programming and development environment, and part of our visit was to see if our approach was going to be compatible with the work they were already doing. In addition to using TouchDesigner as a cueing system, Vortex also uses it as their warping and blending tool.

Before visiting Vortex, one of the central questions in regards to media was what the media server / system was going to look like. All three of the contributing media designers were very comfortable with Isadora programming, and were ready to draw from that experience to create both the cueing system and the interactivity. Looking to work within Vortex’s established pipeline, the team committed to using TouchDesigner as a core element in the media system, but this also meant that much of our media development was happening outside the familiar programming environment of two thirds of the media team. Part of the Wonder Dome team’s challenge then became: how do we integrate multiple operating systems, programming environments, and methodologies?
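One common piece of glue in a mixed ecosystem like this is OSC, a simple network protocol that TouchDesigner, Isadora, and OSCulator all speak. As a minimal sketch of the idea (using the python-osc library with made-up addresses and hosts, not the show’s actual cue traffic), sending a cue from one machine to another looks like this:

    # Minimal OSC cue sketch using python-osc (an assumption for this
    # illustration; the show's actual glue code may have differed).
    from pythonosc.udp_client import SimpleUDPClient

    # Hypothetical address and port for one of the puppet machines.
    client = SimpleUDPClient('192.168.1.21', 7000)

    # Fire a cue: the receiving patch decides what "cue 12" means.
    client.send_message('/wonderdome/cue', 12)

The appeal of this kind of approach is that each environment keeps its own native patching; the network only carries small, well-defined messages between them.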

Simultaneously we were in the middle of talking with Barco, InFocus, and Christie to see what our prospects for corporate sponsorship looked like. Barco surprised the whole team one afternoon with a call donating three RLM-W6 projectors, along with short throw lenses, on a nine month loan for the project. Suddenly we were gaining traction almost faster than we could keep up. Only a week later we heard from NVIDIA, who sent us a GTX Titan for our media server.

The Story

All of these developments were, of course, happening while the story for the production was still in the oven. We had talked about a number of different adaptations or stories to draw from for this first production, but nothing felt like it was sticking. Dan eventually brought in his writing partner Carla Stockton to help give the story some real traction. We eventually ended up with a story loosely based on the fairy tale of the Three Little Pigs. Pinky (a pig puppet), the oldest, suddenly finds himself in a fairy tale with the wrong ending and sets out to find a storyteller to fix his story. He runs into the storyteller and Leo (the dome personified as a character in the play), who then lead him on a wild romp of misadventures as they run from a coyote that has been masquerading first as the big bad wolf, then as the whale from the Pinocchio story, and finally as the giant from Jack and the Beanstalk. The production involved real puppets, digital puppets, and interactive moments.

Puppets

To make all of this work, we started by thinking about what we needed our system to be: what were the requirements to make all the moving parts actually move? Media ended up with a distributed system spread across four computers: three Macs and one PC. Our media server, the PC, ran our warping and blending tools as well as the show control software. Each Mac drove a different puppet in the show.
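For clarity, here is that layout restated as a small config sketch. The machine names are hypothetical and the show had no such file; this just summarizes the division of labor described above.

    # The four-machine layout, restated as a config sketch.
    # Names are hypothetical; the actual show had no such file.
    MACHINES = {
        'media-server': {'os': 'Windows', 'role': 'warping, blending, show control'},
        'puppet-mac-1': {'os': 'macOS', 'role': 'digital puppet playback'},
        'puppet-mac-2': {'os': 'macOS', 'role': 'digital puppet playback'},
        'puppet-mac-3': {'os': 'macOS', 'role': 'digital puppet playback'},
    }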

Leo, the character of the dome, was driven by Faceshift, a program that uses a Kinect to connect a performer’s face to a digital puppet in real time.

Pinky, the pig, was both a live puppet and a digital puppet. While trapped in the belly of the whale, Pinky gets pigsilated, finding himself suddenly trapped in Leo’s digital world as a digital facsimile of himself. Pinky’s digital puppet was created as a series of animation sequences made first in After Effects. These short movies were then driven by Isadora, a Wii remote, and OSCulator. Alex Oliszewski was the animator, programmer, and magician for all of the puppets, and it was a joy to see him in his element programming in Isadora.
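Isadora is a visual patching environment, so there is no script to show from the actual show. As a rough Python stand-in for the receiving end of that Wii remote → OSCulator → Isadora chain (using the python-osc library, which is my assumption for this sketch; the OSC address is OSCulator-style but hypothetical), the idea looks something like this:

    # Listen for Wiimote data relayed by OSCulator as OSC, and map it to
    # a playback position in a pre-rendered animation sequence. An
    # illustration of the technique, not the patch we actually built.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    SEQUENCE_FRAMES = 120  # hypothetical length of one puppet animation

    def on_pitch(address, value):
        # OSCulator sends normalized 0.0-1.0 values; scrub the clip.
        frame = int(value * (SEQUENCE_FRAMES - 1))
        print('scrub puppet animation to frame', frame)

    dispatcher = Dispatcher()
    dispatcher.map('/wii/1/accel/pry/0', on_pitch)  # hypothetical address

    server = BlockingOSCUDPServer(('0.0.0.0', 9000), dispatcher)
    server.serve_forever()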

The Coyote, our villain and later friend, was also a digital puppet driven out of Isadora in the same fashion as Pinky. Everyone on the media team has a close relationship with Isadora, and has used it in some capacity on almost every show. It was because of this that we reached out to Mark Coniglio (Isadora’s creator) to see if there was any chance we could install some temporary licenses on the machines used for the puppets. Mark very graciously obliged, which allowed us to drive our digital puppets with Isadora.

System

One of the central questions we kept returning to was, “how do we make all of these things work together?” That’s a big question, and one that was incredibly difficult to answer. We explored a number of different solutions, and finally decided that in addition to using our media server to drive multiple outputs, we needed to be able to capture multiple streams of video simultaneously. To this end we started considering a number of different Blackmagic Design solutions, and finally opted to install three Blackmagic Intensity Pros in our media server. This was a difficult decision, partially because of cost and partially because of the number of PCIe slots used, but it was ultimately the right one for our server. Our puppet machines were a combination of 2009 Mac Pros and MacBook Pros borrowed from ASU. While having this equipment for free was awesome, one of the obstacles we discovered was that the ATI cards installed in the Mac Pros didn’t output a video standard that our Blackmagic cards could capture. This meant that we also had to install Intensity Pros in the sending towers; our borrowed laptops, on the other hand, were outputting a broadcast standard just fine.
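On the media server side, each Intensity Pro becomes one more live input in TouchDesigner. Here is a sketch of what that looks like in TouchDesigner’s Python (the operator path is hypothetical, and exact parameter menu values depend on your build and drivers):

    # TouchDesigner Python: create one Video Device In TOP per capture
    # card. A reconstruction of the idea, not our actual show network.
    capture = op('/project1')  # hypothetical container COMP

    for i in range(3):
        top = capture.create(videodeviceinTOP, 'puppet_capture{}'.format(i + 1))
        top.nodeY = -i * 150  # tidy the network layout
        # Each Intensity Pro appears as its own entry in the TOP's
        # Device parameter menu; select the card for this puppet feed:
        # top.par.device = '...'  # value depends on drivers/build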

Another challenge was how to send the video. For our Isadora machines this wasn’t an issue, but it was a huge challenge for our machine running Faceshift (Leo). The Faceshift machine needed a second, virtual screen to display only the puppet’s face. To achieve this we used a combination of tools. We started with Syphon Virtual Screen to create a virtual display that was output as a Syphon stream. We then used Black Syphon to send this stream to our Blackmagic card, which was in turn captured by our media server.

Conclusions

There is, of course, so much more that went into Wonder Dome than can be quickly described here, but this is at least a start. Please feel free to reach out to the Wonder Dome folks if you have questions about what we did or want to chat about the challenges of working in a dome.
