Cycle One

For Cycle One, I kept my goal simple: to explore and figure out my various resources and the combined possibilities of Arduino (and sensors), Isadora, and Makey Makey.

I learned how to connect Arduino with Isadora, and explored touch-related sensors and actuators. I started with capacitive touch because it can be achieved by just connecting wires to the pins and doesn't require purchasing extra sensors, but it turns out that its lack of analog output makes it hard to send values to Isadora with the Firmata Actor. (I did find that it might be possible to use serial communication instead of the Firmata Actor, but that means more coding, and I think that's too much for now; I may return to it in the future.) So I decided to use a force sensor, a flex sensor, and a servo.
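In case I return to the serial route later, here is a rough note to self: a minimal sketch, assuming Python with the pyFirmata and python-osc libraries, that reads the force sensor over Firmata and forwards the values to Isadora as OSC (the serial port, OSC address, and Isadora's input port are placeholders to adjust):

```python
# Minimal sketch: read a force sensor over Firmata in Python and forward
# it to Isadora as OSC, bypassing the Firmata Actor. Assumes StandardFirmata
# is loaded on the Arduino and Isadora is listening for OSC on port 1234.
import time
from pyfirmata import Arduino, util
from pythonosc.udp_client import SimpleUDPClient

board = Arduino('/dev/ttyACM0')            # adjust to your serial port
util.Iterator(board).start()               # background thread reading serial

force = board.get_pin('a:0:i')             # analog pin 0 as input
osc = SimpleUDPClient('127.0.0.1', 1234)   # machine running Isadora

while True:
    value = force.read()                   # 0.0-1.0, or None before first report
    if value is not None:
        osc.send_message('/arduino/force', value)
    time.sleep(0.05)                       # ~20 updates per second
```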

I played with layering some videos together (footage I had collected). All of them shared something in common, an aesthetic I am looking for: the meandering/stochastic motion of feral fringe. I tested out some parameters that would allow subtle changes to the visual, then tried sending the values from my sensors to drive those subtle changes.

Some tutorials I found helpful:

(Thanks to Takahiro for sharing the Firmata tutorial in his blog post!):

https://www.instructables.com/Arduino-Installing-Standard-Firmata/
https://www.instructables.com/How-To-Use-Touch-Sensors-With-Arduino/
https://docs.arduino.cc/learn/electronics/servo-motors/
Force sensor: https://learn.sparkfun.com/tutorials/force-sensitive-resistor-hookup-guide/all
Flex sensor: https://learn.sparkfun.com/tutorials/flex-sensor-hookup-guide/all

I was also thinking about a score for the audience, one that supports the intentions of shared agency, haptic experience (with the wires as well), and subtle sensations. I really love the sensation of the wires dangling and twitching like tentacles ~ I would love for people to engage with them:

In pairs, one person closes their eyes and the other guides them through the space. While roaming around, the eyes-closed person explores the space through touch, which then affects the visuals/sounds/sensations in the space.

This comes from an activity we did in Mexico, and I have found another artist who did something similar. I find this way of audiencing quite effective in the sense that keeping the eyes closed means people don't select and manipulate objects by judging their appearance or function, which lets new “worlding” happen. And with eyes closed, there is an extra layer of care in our touch and pace. This also resonates with what I studied of dance artist Nita Little's theory of “chunky attention” versus “thin-sliced attention”: with eyes closed, we don't come to recognize a thing as one chunky whole and assume we already know it; we are constantly researching every bit of it as our touch moves across it. With the pair format, we are also constantly framing the “scene” for each other.

Envisioning space use for the final cycle:

– in the MoLab
– a projector (I am thinking of using my own tiny projector, because it's light and easy to move; I'd probably want the audience to play with touching/moving it as well)
– space-wise, I am thinking of the part where the curtain track goes out (see sketch below, the light green area), but I am also flexible. In the next cycles, I need to test out how to hook up my wires and how long the power cable needs to be in order to figure out a realistic spatial design…


Cycle 3: The Sound Station

Hello again. My work culminates in Cycle 3 as The Sound Station:

The MaxMSP granular synthesis patch runs on my laptop, while the Isadora video response runs on the ACCAD desktop. The MaxMSP patch sends OSC over to Isadora via Alex's router (it took some finagling to get around the ACCAD desktop's firewall, with some help from the IT folks).

I used the Mira app on my iPad to create an interface for interacting with the MaxMSP patch. This gave me the chance to make the digital aspect of my work more inviting and to encourage more experimentation. I faced a bit of a challenge, though, because some important MaxMSP objects do not actually appear in the Mira app on the iPad. I spent a lot of time rearranging and rewording parts of the Mira interface to avoid confusing the user. Additionally, I wrote out a little guide page to set on the table, in case people needed extra information to understand the interface and what they were “allowed” to do with it.

Video 1:

The Isadora video is responsive to both the microphone input and the granular synthesis output. The microphone input alters the colors of the stylized webcam feed to parallel the loudness of the sound, going from red to green to blue with especially loud sounds. This helps the audience mentally connect the video feed to the sounds they are making. The granular synthesis output appears as the floating line in the middle of the screen: it elongates into a circle/oval with the loudness of the granular synthesis output, creating a dancing inversion of the webcam colors. I also threw a little slider into the iPad interface to change the color of the non-mic-responsive half of the video, to direct audience focus toward the computer screen so that they recognize the relationship between the screen and the sounds they are making.
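The color behavior is essentially an amplitude-to-hue mapping built out of Isadora actors; as a rough illustration of the idea only (not the actual patch logic), the mapping is something like:

```python
# Illustration of the loudness-to-color idea (not the actual Isadora patch):
# quiet input stays red, mid-level input turns green, and the loudest
# sounds push the color toward blue.
def loudness_to_rgb(level: float) -> tuple:
    level = max(0.0, min(1.0, level))        # clamp to 0-1
    if level < 0.5:                          # red -> green over the lower half
        t = level / 0.5
        return (int(255 * (1 - t)), int(255 * t), 0)
    t = (level - 0.5) / 0.5                  # green -> blue over the upper half
    return (0, int(255 * (1 - t)), int(255 * t))
```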

The video aspect of this project does personally feel a little arbitrary; I would definitely focus more on it for a potential Cycle 4. I would need to make the video feed larger (on a bigger screen) and more responsive for it to have any real impact on the audience. The audience focuses so much on the instruments, microphone, and iPad interface that the video feed barely registers, but I wanted to keep it as an aspect of my project to illustrate the capacity MaxMSP and Isadora have to work together on separate devices.

Video 2:

Overall, I wanted my project to incite playfulness and experimentation in its audience. I brought my flat (“skinned”) guitar, a kazoo, a can full of bottlecaps, and a deck of cards, and miraculously found a rubber chicken in the classroom to contribute to the array of instruments I offered at The Sound Station. The curiosity and novelty of the objects serve the playfulness of the space.

Before our group critique we had one visitor go around for essentially one-on-one project presentations. I took a hands-off approach with this individual, partially because I didn't want to be watching over their shoulder and telling them how to use my project correctly. While they found some entertainment engaging with my work, I felt they were missing essential context that would have enabled more interaction with the granular synthesis and the instruments. In stark contrast, I tried to be very active in presenting my project to the larger group. I led them to The Sound Station, showed them how to use the flat guitar, and joined in making sounds and moving the iPad controls with the whole group. This was a fascinating exploration of how group dynamics and human presence within a media system can enable greater activity. I served as an example for the audience to mirror; my actions and presence served as permission for everyone else to become more involved with the project. This definitely made me think more about what direction I would take this project in future cycles, depending on whether it were for group use or personal use (since I plan on using the MaxMSP patch for a solo musical performance). I wonder how I would have started this project differently had I not thought of it as a personal tool and instead designed it directly for group/cooperative play. I probably would have taken much more time to work on the user interface and removed the video feed entirely!


Cycle 2: MaxMSP Granular Synthesis + Isadora

For Cycle 2 I focused on the MaxMSP portion of my project: I made a granular synthesis patch, which cuts an audio sample up into small grains that are then altered and distorted.
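For anyone unfamiliar with the technique, here is a toy sketch in Python/NumPy of what granulation does conceptually; my actual patch is built from MaxMSP objects, not this code:

```python
# Toy granular synthesis: take short windowed grains from random points
# in a mono sample and overlap-add them at random positions in an
# output buffer. Grain size, count, and output length here are arbitrary.
import numpy as np

def granulate(sample, sr=44100, grain_ms=60, n_grains=400, out_secs=5.0):
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)               # smooth the grain edges
    out = np.zeros(int(sr * out_secs))
    for _ in range(n_grains):
        src = np.random.randint(0, len(sample) - grain_len)
        dst = np.random.randint(0, len(out) - grain_len)
        out[dst:dst + grain_len] += sample[src:src + grain_len] * window
    return out / np.max(np.abs(out))             # normalize to -1..1
```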

Two demonstration clips, using different samples:

I had some setbacks working on this patch. I had to start over from scratch a week before Cycle 2 was due, because my patch suddenly stopped outputting audio. Recreating the patch at least helped me better understand the MaxMSP objects I was using and the role each played in creating the granular synthesis.

Once I had the MaxMSP patch built, I added some test-sends to see if the patch would cooperate with Isadora. For now I'm just sending the granular synthesis amplitude through to an altered version of the Isadora patch I used for Cycle 1. This was a quick, efficient way to determine how the MaxMSP outputs would behave in Isadora.
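The test-sends amount to streaming a single float at Isadora's OSC input. A Python stand-in for what the Max [udpsend] object does (the IP is a placeholder, and 1234 should be Isadora's default OSC input port, but check your setup):

```python
# Stream a slowly oscillating "amplitude" value to Isadora over OSC to
# confirm the route works before wiring up the real granular output.
import math, time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('192.168.1.20', 1234)   # placeholder Isadora machine
t = 0.0
while True:
    client.send_message('/granular/amp', (math.sin(t) + 1) / 2)  # 0-1 test signal
    t += 0.1
    time.sleep(0.05)
```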

I still have quite a few things to work on for Cycle 3:

  1. Router setup. I need to test the router network between my laptop (MaxMSP) and one of the ACCAD computers (Isadora).
  2. Isadora patch. I plan on re-working the Isadora patch, so that it’s much more responsive to the audio data.
  3. Interactivity. I'll need to pilfer the MoLab closet for a good microphone and some sound-making objects. I want Cycle 3 to essentially be a sound-making station for folks to play with. I will have to make sure the station is inviting enough, and has enough information/instructions, that individuals will actually interact with it.
  4. Sample recording. Alongside interactivity, I will need to adjust my MaxMSP patch so that it plays back freshly recorded samples instead of pre-loaded files. According to Marc Ainger this shouldn't be a challenge at all, but I'll need to make sure I don't miss anything when altering my patch (don't want to break anything!).

Final Mission: three travelers

Brave and Selfless Volunteers at the MoLAB Finals Performance
photo by Alex Oliszewski

During my Cycle 3 of Choose Your Own Adventure: Live Performance Edition, I explored how to allow for more timelines. I realized that the audience's moments of failure provide excitement and raise the stakes of the performance. How do I make a system that encourages and provides feedback for the volunteers while also challenging them?

I feel most creative and most myself when creating pieces that play with stakes. I love dance and theatre that encourages heightened reactions to ridiculous situations. The roles of the three travelers started to sink in for me the more we rehearsed. They needed to be both helpless adventurers somewhere distant in time and space and all-knowing, somewhat questionably trustworthy, narrator-like Greek chorus assistants. Tara, Yildiz, and I added cheering on the volunteers to blur those lines of where and who we are.

Emily Craver, Yildiz Guventurk, and Tara Burns as three travelers
photo by Alex Oliszewski

The new system for Choose Your Own Adventure included: a MIDI keyboard as a controller, a live webcam for a feed of the adventurers and photo capture of their successes, a Focusrite audio hook-up for sound input and a sound level watcher, GLSL shaders of all colors and shapes, and Send MIDI show control to trigger light cues.

Note On Watcher for keys on the MIDI keyboard, determining which song to play in order to reveal a clue
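In spirit, the watcher maps incoming Note On events to cues. A hypothetical Python/mido version of that mapping (the real thing is an Isadora Note On Watcher, and these note numbers are made up):

```python
# Listen on the default MIDI input and map certain keys to song cues.
import mido

SONG_FOR_NOTE = {60: 'song_a', 62: 'song_b', 64: 'clue_reveal'}  # hypothetical

with mido.open_input() as port:                  # default MIDI input device
    for msg in port:
        if msg.type == 'note_on' and msg.velocity > 0:
            cue = SONG_FOR_NOTE.get(msg.note)
            if cue:
                print(f'trigger {cue}')          # stand-in for playing the song
```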

The new system gave the volunteers more direct signs of the sound level watching and clearer cues. The voiceovers were louder and aided by flashing text reiterating what the audience should be doing. The three travelers became side coaches for the volunteers as well as self-aware performers trying to gain trust. I found myself fully comfortable with the way the volunteers were being taken care of, and started to question and wonder about the audience observing all of this. How can an audience be let in while others are physically engaging with the material? Perhaps close camera work of the decisions being made at the keyboard? Earlier suggestions (shout out to Alex Christmas, who gave this one) included an applause-o-meter to let the non-volunteers have a say from their seats. A “Who Wants to Be a Millionaire” style of audience interaction comes to mind, with options for volunteers to choose how to interact and have the audience come to their aid. What does giving the audience a voice look like? How can it be respectful, careful, and challenging all at once?

photos by Alex Oliszewski
video by Doug Barber

Final Project – Werewolf

For this final project, many aspects changed over its development. Initially, I started work on a voting game of sorts. I wanted everyone in the audience to load an app, written in React Native, onto their mobile device, so that each person would have an interface to participate in the game. Through research, I found that what I wanted to create is known as a Crowd Game.

Further reading: http://stalhandske.dk/Crowd_Game_Design.pdf

Through much of my development time, I worked with the concept of a voting game and how to get people to form coalitions. Ultimately, I found it difficult to design something around this concept, because it was hard to evoke strong emotions without serious content, or without having the experience revolve entirely around collecting points. A few days before the final, I had the idea to completely change course and base the experience on the party game known as Mafia or Werewolf (rules example: https://www.playwerewolf.co/rules/). This change better reflected my original desire to make a Crowd Game, but with added intimacy and interaction between the players themselves, as opposed to with the technology. If people are together to play a game, it should leverage the fact that the people are together.

Client / Mobile App
    –  Written in React.js (JavaScript) using the React Native and Expo frameworks. An excellent choice for development: a common web language for Android and iOS, with access to the system camera, vibration, etc. https://facebook.github.io/react-native/ https://expo.io/
    –  Unique client ID. The game client would scan and display QR codes so players can select other players to kill at a distance, automatically and by consensus. It also randomly assigns all roles to players.
    –  Expo allowed me to upload code to their site and load it onto any device. A website serving HTML/JS would be easier to use if one did not intend to use all the phone's functions.
    –  This part of development went smoothly and was fairly predictable with regard to time sink. Would recommend for use.
Isadora
    –  The Isadora patch ran in the Motion Lab. Easy to set up after learning the software in class.
    –  Night/Day cycle for the game with 3 projectors.
LAN Wi-Fi Router
    –  Ran from a laptop connected to the server over Ethernet. Ideal for setups that need high speed/traffic.
Game Server
    –  Written in Java; by far the most taxing part of the project.
    –  Contains game logic, handling rounds, players, etc.
    –  Connects to clients via WebSockets with the Jetty library. I could get individual connections up and running, but it became a roadblock to using the system during a performance because I could not fix the one-to-many server out-messages (see the sketch after this list for what I was going for).
    –  This had a very high learning curve for me, and I would recommend that someone use a ready-made system like Colyseus for short-term projects like this final. http://colyseus.io/ https://github.com/gamestdio/colyseus
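For anyone hitting the same wall, here is a minimal sketch of the one-to-many fan-out, written with Python's websockets library as an assumption on my part (my actual server was Java/Jetty). Any message from one client is rebroadcast to every connected client, with a naive role deal on connect:

```python
# Minimal one-to-many broadcast server (Python websockets >= 10.1).
import asyncio
import random
import websockets

ROLES = ['werewolf', 'werewolf', 'seer'] + ['villager'] * 7
random.shuffle(ROLES)
CLIENTS = set()

async def handler(ws):
    CLIENTS.add(ws)
    await ws.send(f'role:{ROLES.pop()}')             # naive deal, <= 10 players
    try:
        async for message in ws:
            websockets.broadcast(CLIENTS, message)   # the one-to-many out-message
    finally:
        CLIENTS.discard(ws)

async def main():
    async with websockets.serve(handler, '0.0.0.0', 8765):
        await asyncio.Future()                       # run forever

asyncio.run(main())
```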

During the final performance, I only used the game rules and the Isadora system set up in the Motion Lab, but I feel as though people really enjoyed playing. It was certainly more effective than my first game iteration would have been, even with its technology fully working. My greatest takeaway, and the advice I would give to anyone starting a project like this, is to just get your hands dirty. The sooner you fully immerse yourself in the process, the sooner you can begin to see all it could be.


Cycle I – Voting Game


Final Schematic

DEMS Schematic

My schematic for the final project is in the link above.


Final Proposed Planning

Proposed Planning

11/8 – Work in Drake

11/13 – Work in ACCAD

11/15 – Work in Motion Lab 4-5:20pm

11/20 – Projector Hang/Focus – Critique

11/27 – Determined by outcome of 11/20 – Work in Motion Lab

11/29 – Split Motion Lab time

12/4 – Last Minute Problem Solving

12/6 or 12/7 – Perform Final (TBD)


Final Project: Looking Across, Moving Inside


What are different ways that we can experience a performance? Erin Manning suggests that topologies of experience, or relationscapes, reveal the relationships between making, performing, and witnessing: for example, the relationship between writing a script or score, assembling performers, rehearsing, performing, and engaging the audience. Understanding these associations is an interactive and potentially immersive process that allows one to look across and move inside a work in a way that witnessing alone does not. For this project, I created an immersive and interactive installation that allows an audience member to look across and move inside a dance. The installation considers the potential for reintroducing the dimension of depth to pre-recorded video on a flat screen.

 

Hardware and Software

The hardware and software required include a large screen that supports rear projection, a digital projector, an Xbox Kinect 2, a flat-panel display with speakers, and two laptops: one running Isadora and the other running PowerPoint. These technologies are organized as follows:

A screen is set up in the center of the space with the projector and one laptop behind it. The Kinect 2 sensor is slipped under the screen, pointing at an approximately six-foot by six-foot space delineated by tape on the floor. The flat-panel screen and second laptop are on a stand next to the screen, angled toward the taped area.

The laptop behind the screen is running the Isadora patch and the second laptop simply displays a PowerPoint slide that instructs the participant to “Step inside the box, move inside the dance.”


 


Media

Generally, projection in dance performance places the live dancers in front of or behind the projected image. One cannot move to the foreground or background at will without predetermining when and where the live dancer will move. In this installation, the live dancer can move upstage or downstage at will. To achieve this, the pre-recorded dancers must be filmed individually with an alpha channel and then composited together.

I first attempted to rotoscope individual dancers out of their backgrounds in prerecorded dance videos. Despite helpful tools in After Effects designed to speed up this process, each frame of video must still be manually corrected, and when two dancers overlap the process becomes extremely time-consuming: one minute of rotoscoped video takes approximately four hours of work. This is an initial test using a dancer rotoscoped from a video shot in a dance studio:

 

Abandoning that approach, and with the help of Sarah Lawler, I recorded ten dancers moving individually in front of a green screen. This process, while still requiring post-processing in After Effects, was significantly faster. These ten alpha-channeled videos then comprised the pre-recorded media necessary for the work. An audio track was added for background music. Here is an example of a green-screen dancer:

Programming

The Isadora patch was divided into three main functions:

Videos

The Projector actor for each prerecorded video was placed on an odd-numbered layer. As each video ends, a random number of seconds passes before it reenters the stage. A Gate actor prevents more than three dancers from being on stage at once by keeping track of how many videos are currently playing.
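As a sketch of that logic outside of Isadora (Python pseudocode for the actor network, with a made-up delay range):

```python
# When a video ends, it waits a random delay; the gate only lets it
# restart if fewer than three videos are currently playing.
import random

MAX_ONSTAGE = 3

class Stage:
    def __init__(self):
        self.playing = 0                     # the count the Gate actor tracks

    def video_ended(self):
        self.playing -= 1
        return random.uniform(2.0, 10.0)     # offstage pause in seconds (made up)

    def try_reenter(self):
        if self.playing < MAX_ONSTAGE:       # the Gate condition
            self.playing += 1
            return True                      # restart this video's Projector
        return False                         # stay offstage and retry later
```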

The Live Dancer

Brightness data was captured from the Kinect 2 (upgraded from the original Kinect for greater resolution and depth of field) via Syphon and fed through several filters to isolate the body of the participant.

Calculating Depth

Isadora logic was set up such that as the participant moved forward (increasing brightness), the layer number on which they were projected increased by even numbers; as they moved backward, the layer number decreased. In other words, the live dancer might be on layer 2, behind the prerecorded dancers on layers 3, 5, 7, and 9. As the live dancer moves forward to layer 6, they are now in front of the prerecorded dancers on layers 3 and 5, but behind those on layers 7 and 9.
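Concretely, the calculation quantizes brightness onto the even layers. Assuming brightness is scaled 0 to 100, the logic is roughly:

```python
# Quantize the participant's Kinect brightness (assumed 0-100) onto even
# layers 2-10, interleaving with the prerecorded dancers on 3, 5, 7, 9.
def live_dancer_layer(brightness: float) -> int:
    step = min(4, max(0, int(brightness / 20)))   # five depth steps
    return 2 + 2 * step                           # layers 2, 4, 6, 8, 10
```

For example, a mid-range brightness of 55 lands on layer 6: in front of the dancers on layers 3 and 5, behind those on 7 and 9.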

Download the patch here: https://www.dropbox.com/s/6anzklc0z80k7z4/Depth%20Study-5-KinectV2.izz?dl=0

 

In Practice

Watching people interact with the installation was extremely satisfying. There is a moment of “oh!” when they realize that they can move in and around the dancers on the screen. People experimented with jumping forward and back, getting low to the floor, mimicking the movement of the dancers, leaving the stage and coming back on, and more. Here are some examples of people interacting with the dancers:

Devising Experiential Media from Benny Simon on Vimeo.

 

Future Questions 

Is it possible for the live participant to be on more than one layer at a time? In other words, could they curve their body around a prerecorded dancer's body? This would require a more complex method of capturing movement in real time than the Kinect can provide.

What else can happen in a virtual environment when dancers move in and around each other? What configurations of movement could trigger effects or behaviors that are not possible in the physical world?


Final Project – here4u

There’s something about opening myself up and exposing vulnerability that is important to my creative work and to my life. In laying myself bare, I hope that it encourages others to do the same and maybe we connect in the process. I tell people about me in the hopes that they’ll tell me about them.


You can read in depth about where I started with this project in this post, but as concisely as I can manage: I was working to tell a non-linear narrative about a platonic, long-distance relationship between a mother and son. Using journal entries, film photographs, newspapers, daily planners, notes to self, and random memorabilia from myself and my mother, I scattered QR codes throughout an installation of two desks with a trail of printed text messages between them. To enhance the interactivity of the project, the QR codes led to voicemails and text messages between a mother and a son, digital photos showing silhouetted figures and miscellaneous homes, and footage from anti-Trump protests. I worked to emphasize feelings of distance, loneliness, belonging, and relationship that I have experienced and continue to experience during my time in Columbus.
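Generating the codes themselves was the simple part. A sketch of how a batch could be produced with the Python qrcode library (the URLs are placeholders for wherever the voicemails, texts, and photos are hosted):

```python
# Make one printable QR code PNG per hosted artifact.
import qrcode

MEDIA_URLS = {
    'voicemail_01': 'https://example.com/here4u/voicemail_01.mp3',  # placeholder
    'photo_home': 'https://example.com/here4u/photo_home.jpg',      # placeholder
}

for name, url in MEDIA_URLS.items():
    qrcode.make(url).save(f'{name}.png')
```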


This final project, here4u, really took it out of me. Working from such a personal place was in some ways freeing, but constantly stressful. I spent a lot of time on this project worrying about whether people would connect with the work, whether they would believe its authenticity, and whether I was oversharing to the point of discomfort (for both the audience and myself). The worry was balanced equally with excitement and anticipation of how it would all be received. To amplify a rawness I was only beginning to develop after the first showing, I generated much more digital material and matched that with an increase in tangible items in the space. Laying out the journals and planners I carry with me every day (some even including notes and plans for the work), screenshotting and printing text conversations between my mom and me to create a trail between the two desks, and even brewing the tea that reminds me of home were all ways of leaning into opening myself up for the audience. Besides the lights that Oded designed for me, I wanted to strip this installation of all extraneous theatricality in order to get at the personal nature of the work.

The “son” space


The “mother” space

After some recommendations from Alex following the first cycle presentation, I began to think more choreographically about this project, shifting my mindset from the creation of a static installation to the curation of a museum of moments to be experienced. Between the first cycle and the final showing, I invested much more time and effort into crafting the viewer's journey through the space. Oded's lighting certainly helped this, and with it I thought critically about which QR codes should be placed where in order to enhance a tangible object, a written note, etc. This felt like mental prototyping, and it helped me conceptualize what I wanted for my final product.


The ‘son’ space

I’m thankful for this final project (and this course) for giving me an outlet to investigate the concepts I’m researching in my dance-making through other disciplines. Taking themes I work with in the dance studio and translating them into photography, audio/visual art, and digital media design has given me a new perspective on the topics I am diving into for my senior project and beyond. DEMS was a real treat, and I’m glad to have been a part of it.