Cycle 1

My Cycle 1 performance revolves around telling the story of how humans are reshaping the environment and the world around us. The story is told through a sequence of video and audio clips stitched together. In an immersive space of eight panels, each panel features a different video. The system is designed to plunge users into a story of beauty, frustration, and sorrow and to overload their senses with information.

For the Cycle 1 performance, technical difficulties plagued the setup of the piece, so that only half of what we now affectionately call the “yurt” was assembled and only one scene of my Isadora patch was displayed. This made for an awkward yet interesting misreading of my project. Viewers were able to remove themselves from my artificial environment and walk more freely in and out of the space. Users were also able to experience all of the information at once, since it was only half of the circle.

Moving forward, I am focusing on the interactivity features of my piece. In the first cycle, the user was limited to acting only as an observer of the events that unfold before them. I would like the piece to become reactive to users’ gestures and movements. Potentially, the user’s silhouette could be incorporated into the piece as well.

Pressure Project 3: Galaxy Quest

The Other Galaxy Quest

(it has nothing to do with the movie, kids, and is kinda imperialistic, oops!)

Conquer Galaxies by being the first to create a network of outposts before your opponents.

The great creator has brought to life new galaxies in the universe. The great creator needs someone to colonize these galaxies so that they may begin producing life. Four potential colonizers have been picked for the job. Whichever colonizer is able to create the strongest string of outposts in a galaxy earns the honor of colonizing it.

Players: 4

Instructions:

  1. Four players start by choosing a character and a related number (ex. Darth Vader and 3)
  2. The game is played on a specially made game board composed of a grid (which you will find on the back of this sheet of paper). The goal of the game is to linearly connect 6 squares of the grid, either vertically, horizontally, or diagonally, by writing your character’s assigned number in a grid square during your turn.
  3. Each game is played in a unique galaxy. At the start of every game, the galaxy is drawn by the “creator” (via Isadora). The galaxy’s perimeter is then manually translated by the players onto the game board grid using their best judgment. The galaxy is always drawn so that it occupies as much of the game board as possible while not being stretched out of proportion. If the galaxy’s perimeter produced by the creator is not closed, it is up to the players to close the gaps so that the galaxy forms a solid 2D shape.
  4. The “creator” then issues 3 special coordinates. These coordinates are marked on the game board by the players with a diamond symbol. They act as shared outposts, which any player can use as part of their string of 6. If a special coordinate falls outside the boundary of the galaxy’s perimeter, it becomes null and is not used in the game.
  5. After the board has been created, the game starts. Each player has only 2 seconds per turn to mark their number in one of the squares on the grid. Grid squares that are intersected by the galaxy’s perimeter cannot be claimed by any player for an outpost. Once a player has successfully connected 6 grid squares in a row by marking them with their number, the game is over and the spoils of the galaxy go to them. If no one is able to connect 6 squares together, the game is over and the creator destroys the galaxy.
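The game itself is played on paper, but the win condition in the rules above can be sketched in code. This is a hypothetical Python illustration (the real game has no such program): a player wins with 6 cells in a row, where a cell counts toward their string if it holds their number or a shared diamond outpost.

```python
# Hypothetical sketch of the win check. Grid values: a player's number,
# "D" for a shared diamond outpost, "X" for a square blocked by the
# galaxy's perimeter, or None for an empty square.

def has_won(grid, player, length=6):
    """Return True if `player` has `length` cells in a row
    (horizontally, vertically, or diagonally), counting diamonds."""
    rows, cols = len(grid), len(grid[0])
    counts = lambda v: v == player or v == "D"  # diamonds are shared
    for r in range(rows):
        for c in range(cols):
            # four directions cover all eight (a line read backwards
            # is the same line)
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = []
                for i in range(length):
                    rr, cc = r + dr * i, c + dc * i
                    if not (0 <= rr < rows and 0 <= cc < cols):
                        break
                    cells.append(grid[rr][cc])
                if len(cells) == length and all(counts(v) for v in cells):
                    return True
    return False
```

For example, five marks plus an adjacent diamond would complete a string of 6, while squares marked `"X"` by the perimeter never count for anyone.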

The game runs in tandem with an Isadora patch that holds the characters for players to pick from as well as the scene that generates the galaxy for the game board.

The feedback and results of the game were all really positive. It took a couple of rounds for my classmates to grasp the concept of the game, but in the end it made for a very competitive experience. The game was successful in the sense that it was great not only to play but also to watch. The pace of the game kept players and the audience on the edge of their seats as they fought not to miss a single beat. Although some people found the game too quick, I personally found the pace fun.

Moving forward, I would want to revisit my wording of the rules because I was unable to hand the game off to my classmates for them to play independently. It may also help to refine how Isadora creates the galaxies, as some do not produce playable boards while others seem too big.

Below are some of the results from my games (featuring the lovely doodles of Bita Bell)

[Images: board and instructions (img003, img004)]


Pressure Project 2: We Choose the Moon

In the second pressure project we were asked to create an experience documenting a significant cultural event or story using audio as our primary medium. The story I chose to tell was that of the Apollo 11 moon landing and the events that led up to it, from the perspective of the astronaut. This historical event is one of my favorites because it is an amazing story of human ingenuity, drive, and creativity. When picking this event, I was also thinking of the resources I had on hand. Knowing I would be presenting in the Motion Lab, picking a story about a lunar space journey made sense because the room is so dark that the experience would automatically become very experiential. In addition to the audio, I added several suggestive visuals to guide the journey and create a more immersive experience.

The system consisted of 4 scenes. The first scene was merely a trigger that activated the rest of the system when you pressed the space bar. The second scene was simply a black scene with JFK’s famous “We Choose the Moon” speech. The third scene is where the visuals start to kick in. The scene opens on a very light “nebula” GLSL shader, which, misread, could also be interpreted as smoke from the rocket launch. Over top of the shader is an image of stars linked to a pulse generator that flickers it on and off along with the sound of a beating heart. Over the heartbeat audio is the sound of the launch (the test, the countdown, followed by the noise of the fuel being burned upon takeoff). During the entire audio sequence, the heartbeat sound and the flashing of the star image get steadily faster to dramatize the events. In the fourth scene, once everything from the previous scene has stopped, a nervous exhale of breath is played. An image of the moon starts fading in and growing bigger on the screen. We then hear Neil Armstrong say one of the most iconic quotes in the history of the human race. After that, the system cycles back to the beginning. Other than the first scene, the entire system is free of user input; it relies only on enter-scene triggers and trigger delays to coordinate the occurrence of events.
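The accelerating heartbeat is done in Isadora with a pulse generator whose rate ramps up, but the timing idea can be sketched outside the patch. This is a rough, hypothetical Python model (the durations and BPM values are made up, not taken from the actual piece): the interval between beats shrinks as the launch sequence progresses.

```python
# Sketch of the accelerating heartbeat/star-flash timing: the beat
# rate ramps linearly from a resting pulse to a racing one over the
# sequence, mirroring what the pulse generator does in the patch.

def heartbeat_times(duration=30.0, start_bpm=60.0, end_bpm=150.0):
    """Return timestamps (seconds) of each beat; the rate ramps
    from start_bpm to end_bpm across `duration` seconds."""
    times, t = [], 0.0
    while t < duration:
        times.append(t)
        # current rate, interpolated by how far into the sequence we are
        bpm = start_bpm + (end_bpm - start_bpm) * (t / duration)
        t += 60.0 / bpm  # seconds until the next beat at this rate
    return times
```

Each timestamp would fire both the heartbeat sound and the star-image flash, so the two stay locked together as they speed up.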

[Images: 2017-10-03 (1), 2017-10-03 (2), 2017-10-03 (5), Capture, Capture1]

Reflecting back on my presentation, I was very quick to notice all of its flaws. I felt the transitions between my audio and visuals were clunky, and the audio and visuals never matched up exactly how I wanted them to. All of this went almost completely unnoticed by my classmates, though. When I brought up the topic for discussion after I presented, they said it was hardly noticeable. Though I know these flaws exist, my paranoia was maybe uncalled for.

Discussion also led to how to make this system an interactive one. Though my system deliberately was not, I found the ideas interesting and worthy of introduction in the next iteration of this system, if there were one. One of the ideas was to make it controllable through buttons: the user could decide when (or if) to launch, or control the speed of the system or the audio. It was also suggested that I might use a stethoscope to control the heartbeat audio of my system.

One last reflection on my system is the use of silence. Because the experience was limited to 1 minute, I was pressured to pack as much audio and visual material into that time period as I could so that the story would be legible. Benny brought up the point of space being devoid of sound. Speaking toward the fourth scene, where we move toward the moon, he suggested it might benefit from dramatic silence, and I agree. The silence would throw the attention toward the image and highlight the gravity of the experience. If only there was more time!


Pressure Project 1: Fortune Teller

My fortune telling system put visitors through a silly yet engaging interface to reveal some of their deepest, darkest, most personal fortunes… or, more than likely, just an odd, trolling remark. From the beginning of the project I wanted to keep the experience fun and not too serious. I wanted to keep some aspects of the stereotypical fortune telling experience but critique the traditional experience in that the questions I ask and the final response have essentially nothing to do with each other. It was also important for me to remove the user from the traditional inputs of the computer (i.e. the keyboard and the mouse) to create a more interactive experience.

[Images: download, download (6)]

My system started by prompting the user to begin their journey by pressing the space bar (the only time they used the keyboard during the experience), which took them through several scenes that set them up for what was about to happen. The purpose of these scenes wasn’t to take in user input but to set the mood for the experience. The slides were simply text with a wave generator applied to the rotation input to give the text a more psychedelic, entrancing feeling in line with the theme of the fortune telling experience. The idea here was to create a predetermined conversation that would carry throughout the entire project.

Once through the initial scenes, the user was introduced to the first scene with a question, which had 3 possible responses. On the scene, the top part of the screen was devoted to the text; on the lower portion were 3 icons representing responses to the question. In the background of the scene is a projection of the user. The projection uses the video input from my computer’s web cam, run through the difference actor to calculate the movement of the user. The output of the difference actor was also plugged into an eyes++ actor, which tracked the movement of the user through the blob decoder actor. To respond to the question, the user was prompted to wave their hand over the icon that matched their answer. The scene would then jump to the next scene that corresponded with the answer. This happened thanks to the inside range actors placed over each icon, which watched for the blob’s movement over the icons. The scene would change once the x and y coordinates of the blob were inside the range prescribed by the inside range actors.
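The inside range actors are essentially doing a rectangle hit test on the blob’s coordinates. A minimal Python sketch of that logic (the icon names and coordinate ranges here are made up for illustration, not taken from my patch):

```python
# Hypothetical hit test mirroring the inside range actors: each answer
# icon owns an (x, y) range, and the scene jumps when the tracked
# blob's coordinates fall inside one of those ranges.

ICONS = {  # answer -> (x_min, x_max, y_min, y_max), in normalized coords
    "yes":   (0.05, 0.30, 0.60, 0.90),
    "no":    (0.40, 0.60, 0.60, 0.90),
    "maybe": (0.70, 0.95, 0.60, 0.90),
}

def pick_answer(blob_x, blob_y, icons=ICONS):
    """Return the answer whose region contains the blob, else None."""
    for answer, (x0, x1, y0, y1) in icons.items():
        if x0 <= blob_x <= x1 and y0 <= blob_y <= y1:
            return answer
    return None
```

In the patch, a positive result from one of these range checks is what fires the jump to the corresponding answer scene.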

[Images: download (2), download (3), download (4), download (5)]

This setup continued throughout the remainder of the project. Each set of questions that followed tried to keep up with the user’s inputs through the text presented at the top of each scene. The text continued the conversation that was started at the beginning of the presentation. After the first question, depending on the user’s choice, they could be brought to any one of three scenes with a question featuring two possible answers. From there, the user was directed to a third question with another two potential answers. The third question was the same for all possible question paths, but it appeared on six separate scenes so as to still be considerate of the user’s previous inputs. From the last question the user was then fed into a scene containing 1 of 6 possible fortunes. And that was it!

During the presentation of the system, I ran into some tricky bumps in the road. The big one was that the system would skip scenes when users waved their hands too long over an icon. Additionally, and confusingly, it would in some cases not respond after several seconds of violent waving by a user. The system’s tolerance was calibrated in a lot of natural light, which could have given rise to discrepancies when the system was then turned on in the computer lab, which is lit only by artificial light. The system could be fixed using several alternative methods. The first is a trigger delay when moving from scene to scene, which would prevent the eyes++ actor from prematurely recognizing inputs from the user. The second is a stricter calibration of the eyes++ actor: in the actor’s inputs you can control the size of the recognized blob being tracked and the smoothness of its movement, both of which would have given greater tolerance to the user’s movements. The last solution may have been to consider a different form of input that used a more sensitive camera or a Leap Motion.
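The trigger-delay fix amounts to a dwell-time debounce: only accept an answer once the blob has stayed over the same icon for a minimum time, so a lingering hand cannot skip through several scenes. A hypothetical Python sketch of that idea (the class and its one-second dwell value are my own illustration, not part of the Isadora patch):

```python
# Dwell-time debounce sketch: feed the currently hovered icon (or None)
# every frame; the trigger fires only after the same icon has been held
# for `dwell` seconds, then resets so it cannot fire again immediately.

class DwellTrigger:
    def __init__(self, dwell=1.0):
        self.dwell = dwell    # seconds the blob must stay on one icon
        self.current = None   # icon currently being hovered
        self.since = None     # time the current hover started

    def update(self, icon, now):
        """Return the icon once held long enough, else None."""
        if icon != self.current:
            # hover target changed: restart the timer on the new icon
            self.current, self.since = icon, now
            return None
        if icon is not None and now - self.since >= self.dwell:
            self.current, self.since = None, None  # fire once, then reset
            return icon
        return None
```

A scene would jump only when `update` returns an icon, which also covers the opposite failure mode: frantic waving that flickers between regions keeps resetting the timer instead of firing spurious triggers.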

Additional improvements could be made around how the system interacts with the user and the type of outputs the system produces. After watching some of the other presentations, it was very clear that my system could have benefited from the introduction of sound. The element of sound creates another level of thematic experience that could have played up the concept of my goofy fortune telling experience. The second improvement is having the system loop back to the beginning. After every user finished their interaction, the system had to be manually prompted back to the beginning scene. A jump actor could have easily fixed this.

I feel like I could say more, but I should probably stop rambling, so here is my project. Check it out and enjoy!

https://osu.box.com/s/dd6sopphqnxa5uu8cgjboa0xzsam2u0r