Pressure Project 2: visiting the Other World

This is the system I picked to spend time observing:

I decided to pause at this system because, by the time I arrived, several people were engaged in solving the puzzle of this interaction, which intrigued me. I was also drawn to the design (and the non-interactive components) of the space – a poster on the wall indicates the connection between the nervous system and the gut (a topic fascinating to me!), and a glass window with a drawer that can be opened, holding many weirdly interesting ceramic creatures.

Diagram of the system (upper)
Diagram of people interacting (lower)

People begin their experience by arriving at the space from three different directions, where they first register that they have come upon a new interactive system. Some observe the space first or touch the structure; others go directly to push a button on the glass boxes to see what it might trigger. The buttons are the key to the interaction, but they need to be pushed in a certain order to trigger the ultimate effect; otherwise, the only result is light traveling through the channel. Some people are satisfied by this and move on, while others are determined to figure it out. An experimental guideline on the wall (with some words highlighted, which seem to be the clues) is the key to solving the puzzle: the order in which to push the four buttons. After the buttons are pushed in the correct order, a pattern appears on the screen with some light effects; some people take a quick picture of the screen and go on to the next room.

Redesigning the system:

What I am interested in doing is making people less goal-oriented. It seems that once people solve the puzzle, they are no longer interested in exploring the system again, and they no longer pay attention to the interesting set design in the room. The modifications I want to make:

– the button can be replaced with the shape of a tentacle, or something in the shape of a laboratory tool (to echo the overall vibe)

– each time they solve the button puzzle, a different pattern would appear.

– the pattern takes the appearance of a creature corresponding to a real object in the space; people then need to find that object and touch it

– the touch has to fall within a certain pressure range, for a certain length of time (I want a more sensorial experience, rather than just a trigger – task complete), and preferably the touch has to be completed between two people. (Does this sound too nefarious?)
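Since this is only a proposal, here is a minimal sketch (in JavaScript, just to think the interaction through) of how the pressure-and-duration check might work. The thresholds, hold time, and two-sensor setup are all assumptions, not a real implementation:

// Sketch: does a touch count? It must stay inside a pressure window
// for a minimum duration, and (ideally) register on two sensors at
// once – one per person. All names and numbers are placeholders.
var PRESSURE_MIN = 0.3;   // lower bound of the accepted pressure range
var PRESSURE_MAX = 0.7;   // upper bound
var HOLD_TIME_MS = 2000;  // how long the touch must be sustained

var heldSince = null;     // timestamp when a valid touch began

// called repeatedly with the two sensor readings (0..1)
function touchComplete(pressureA, pressureB, nowMs) {
  var bothInRange =
    pressureA >= PRESSURE_MIN && pressureA <= PRESSURE_MAX &&
    pressureB >= PRESSURE_MIN && pressureB <= PRESSURE_MAX;
  if (!bothInRange) {
    heldSince = null;     // the touch left the window; start over
    return false;
  }
  if (heldSince === null) heldSince = nowMs;
  return nowMs - heldSince >= HOLD_TIME_MS;  // true once sustained long enough
}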


PP2/PP3: Musicaltereality

Hello. Welcome to my Isadora patch.

This project is an experiment in conglomeration and human response. I was inspired by Charles Csuri’s piece Swan Lake – I was intrigued by the essentialisation of human form and movement, particularly how it joins with glitchy computer perception.

I used this pressure project to extend the ideas I had built from our in-class sound patch work from last month. I wanted to make a visual entity which seems to respond and interact with both the musical input and human input (via camera) that it is given, to create an altered reality that combines the two (hence musicaltereality).

So here's the patch at work. I chose Matmos' song No Concept as the music input, because it has very notable rhythms and unique textures which provide a great foundation for the layering I wanted to do with my patch.

Photosensitivity/flashing warning – this video gets flashy toward the end

The center dots are a constantly-rotating pentagon shape connected to a “dots” actor. I connected frequency analysis to dot size, which is how the shape transforms into larger and smaller dots throughout the song.

The giant bars on the screen are a similar setup to the center dots. Frequency analysis is connected to a shapes actor, which is connected to a dots actor (with "boxes" selected instead of "dots"). The frequency changes both the dot size and the "src color" of the dots actor, which is how the output visuals morph colors based on audio input.
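If I were to boil that mapping down to code (say, inside a javascript actor), it would amount to something like this sketch – the output ranges are guesses for illustration, not Isadora's actual units:

// Sketch of the audio-to-visual mapping: one frequency band's level
// (0..1) drives both the dot size and the red channel of "src color".
function mapBandToVisuals(bandLevel) {
  var dotSize = 5 + bandLevel * 45;       // small dots in silence, big on peaks
  var red = Math.round(bandLevel * 255);  // louder band = redder boxes
  return { dotSize: dotSize, srcColorRed: red };
}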

The motion-tracking rotating square is another shapes-dots setup which changes size based on music input. As you can tell, a lot of this patch is made out of repetitive layers with slight alterations.

There is a slit-scan actor which is impacted by volume. This is what creates the bands of color that waterfall up and down. I liked how this created a glitch effect, and directly responded to human movement and changes in camera input.

There are two difference actors: one of them is constantly zooming in and out, which creates an echo effect that follows the regular outlines. The other difference actor is connected to a TT edge detect actor, which adds thickness to the (non-zooming) outlines. I liked how these add confusion to the reality of the visuals.

All of these different inputs are then put through a ton of “mixer” actors to create the muddied visuals you see on screen. I used a ton of “inside range”, “trigger value”, and “value select” actors connected to these different mixers in order to change the color combinations at different points of the music. Figuring this part out (how to actually control the output and sync it up to the song) was what took the majority of my time for pressure project 3.
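Boiled down, those "inside range" and "value select" chains behave like a lookup on the song's position. A rough javascript sketch of the idea – the section boundaries are invented for illustration:

// Pick a color combination based on where we are in the song,
// like a row of "inside range" actors feeding a "value select".
function colorComboFor(songPositionSec) {
  if (songPositionSec < 30) return "cool blues";
  if (songPositionSec < 75) return "greens and magentas";
  if (songPositionSec < 120) return "full-spectrum strobe";
  return "washed-out whites";
}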

I like the chaos of this project, though I wonder what I can do to make it feel more interactive. The motion-tracking square is a little tacked-on, so if I were to make another project similar to this in the future I would want to see if I can do more with motion-based input.


Lawson: PP2 Inspired by Chuck Csuri

My second pressure project is inspired by the two Chuck Csuri works below: Lines in Space (1996) and Sine Curve Man (1967). I love the way that each work takes the human form and abstracts it, making it appear that the figures melt, warp, and fray into geometric shapes and rich, playful colors.

Lines in Space, 1996
Sine Curve Man, 1967

For my project, I wanted to allow the audience a chance to imitate Csuri’s digital, humanoid images in a real-time self-portrait. I also wanted to build my project around the environmental factors of an art gallery – limited space in front of each artwork, a mobile audience with split attention, and ambient noise. In addition to the patch responding to the movement of the audience, I wanted to introduce my interpretation of Chuck Csuri’s work in layers that progressively built into the final composite image. You can see a demonstration of the Isadora self-portrait below.

To draw the audience’s attention to the portrait, I built a webcam motion sensor that would trigger the first scene when a person’s movement was detected in the range of the camera. I built the motion sensor using a chain of a video-in watcher, a difference actor, and a calculate brightness actor, with a comparator to trigger a jump scene actor. If the brightness of the webcam image was greater than 0.6, the jump scene actor was triggered. So that the jump actor would only be triggered once, I used a gate actor and a trigger value actor to stop more than one trigger from reaching it.
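In pseudo-javascript, the whole chain reduces to the sketch below. The 0.6 threshold comes from my patch, but the rest is just an illustration of the logic, not actual Isadora code:

// difference -> calculate brightness -> comparator -> (one-shot) -> jump.
// "hasFired" plays the role of the gate + trigger value pair.
var THRESHOLD = 0.6;  // brightness level that counts as "motion"
var hasFired = false;

function onBrightness(brightness, jumpScene) { // jumpScene stands in for the jump scene actor
  if (!hasFired && brightness > THRESHOLD) {
    hasFired = true;  // close the gate so we only jump once
    jumpScene();
  }
}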

Once the patch had detected a person in the range of the webcam, the remainder of the patch ran automatically using chains of enter scene triggers, trigger delays, and jump scene actors.

To imitate the colored banding of Csuri’s work, I filtered the image of the webcam through a difference actor set to color mode. The difference actor was connected to a colorizer actor. In order to create the fluctuating colors of the banding, I connected a series of envelope generators to the colorizer that raised and lowered the saturation of hues on the camera over time.
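The envelope idea, sketched in javascript with a triangle wave standing in for the envelope generators' up/down ramps (a rough analogy, not the patch itself):

// Saturation rises and falls over each cycle of length periodSec.
function saturationAt(timeSec, periodSec) {
  var phase = (timeSec % periodSec) / periodSec;    // 0..1 through the cycle
  return phase < 0.5 ? phase * 2 : (1 - phase) * 2; // ramp up, then down, 0..1
}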

In the next scene I introduced the sense of melting that I experienced in Csuri’s work by adding a motion blur actor to my chain. At the same time, I attached a sound level watcher to the threshold of the difference actor to manipulate its sensitivity to movement. This way the patch is now subtly responsive to the noise level of the gallery setting. If the gallery is noisy, the image will appear brighter because it will require less movement to be visible. This visibility will then fluctuate with the noise levels in the gallery.
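The noise-responsive sensitivity amounts to an inverse mapping like the following sketch; the numbers are placeholders, not the values in my patch:

// A louder gallery lowers the difference actor's threshold, so less
// movement is needed for the image to be visible.
function differenceThreshold(soundLevel) {  // soundLevel in 0..1
  var maxThreshold = 0.8, minThreshold = 0.1;
  return maxThreshold - soundLevel * (maxThreshold - minThreshold);
}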

In the next scene I introduced the warping and manipulation I observed in Csuri’s work. I wanted to play with the ways that Csuri turns real forms into abstract representations. To do this, I introduced a kaleidoscope actor to my chain of logic.

My final play scene is a wild card. In this scene, I connected the sound level watcher to the facet element of the kaleidoscope actor. Instead of the clarity of the image being dependent on the noise level of the gallery, the abstraction or warping of the image would be determined by the noise levels. I consider this scene to be a wild card because its effectiveness depends on the audience realizing that their talking or silence impacts their experience.

The patch ends by showing the audience my inspiration images and then resetting.

In thinking about improving this patch for Pressure Project 3, I want to consider the balance of instructions and discoverability and how to draw in and hold an audience member’s attention. I am unsure as to whether my project is “obvious” enough for an audience member to figure out what is happening without instructions but inviting enough to convince the audience member to stay and try to figure it out. I also know that I need to calibrate the length of my black out scenes and inspiration image scenes to make sure that audience members are drawn to my installation, but also don’t stay so long that they discourage another audience member from participating in the experience.


PP2: Audio storytelling/Wall Fall

I chose to focus only on the audio for this pressure project. In hindsight, after viewing/listening to the other projects, I wonder if having an image of the Berlin Wall surrounding us on the white scrims in the motion lab would have helped situate the work, so there wasn’t a period of guessing for listeners.

My initial goal was to immerse the listener in the sound of the construction and destruction of the Berlin Wall. Having grown up in Germany during this point in history, I have a personal experience with this site, and traveling to and from it by train was a part of that. That part of the story was lost in my layering process. I think that if I were to do this project again, I would pull back and focus more on some of the powerful moments – letting them sit with the listener longer, and not overwhelming them with so much information that it is hard to unpack, especially if one is trying to figure out what the story is about.

I appreciated the feedback and want to think more about how we can tell personal stories that also convey meaning for others, for them to find their own way into the story and a connection to it as well.

In my other work outside of this class, I am struggling to translate from text into audio in a non-linear fashion and to provide space for a participant in the work to find both their own entry point and their own way of navigating the space – and, through that, a way of connecting with the researcher, thinking about presenting research as a relational experience. How do I challenge the linear and impersonal format of a dissertation, book, or academic text in a way that provides a valuable alternative experience of the story? I have been using a map as one form to explore, but that is also limiting and has boundaries and borders. I am also thinking about the relationship between printed, read text and verbal, spoken word. Preparing this pressure project helped me think about presenting my own process/experience in a way that is not too overwhelming and has some clarity for those engaging with it. This is an ongoing struggle in my process.


Pressure Project 2 (Nick)

I remember hearing about the LEAP Motion when it was originally posted on Kickstarter. One of the things that initially interested me was the idea of interacting with objects in virtual, 3D space. That’s the direction I headed with this pressure project as I attempted to build a sort of 3D interaction interface.

I began by first creating a way to visualize hands in the scene. Knowing that this had to be handed off to someone, I thought that forcing them to launch the LEAP visualizer would be sort of clunky. I instead used the x and y values from each hand to generate hold values that I ran through a limit-scale actor and fed into projectors on some images of sprite hands I found online. I used rutt etra to rotate them as well, grabbing the roll value from the LEAP and feeding it into that actor.

I spent a lot of time building some clunky logic gates to help make my interactions more intuitive. I wanted it to be so that if you made a circle with a finger one way, it zoomed in, but if you did it the other way, it zoomed out. That required quite a bit of playing around until I devised a system that always let the “circle” trigger through but changed whether it was adding or subtracting from the total zoom based on whether the direction was pumping out a 1 or a 2. I then used a similar system to do the same thing for swiping to rotate. All of that data was plugged into a 3D player that was set to a render, which was plugged into a projector. Again, with passing this to someone else in mind, I added some quick instructions that hold on the screen until both hands are present.
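If I had gone the javascript actor route, the circle-gesture logic might have looked roughly like the sketch below. I’m assuming Isadora’s javascript actor conventions here (inputs arrive via arguments, the return value feeds an output), and I’ve written it so the running zoom loops back in as an input rather than being stored inside the actor:

// direction 1 zooms in, direction 2 zooms out; anything else is ignored.
function main() {
  var direction = arguments[0];   // 1 or 2, from the LEAP circle gesture
  var currentZoom = arguments[1]; // current zoom value, looped back in
  var step = 0.05;                // zoom change per detected circle
  if (direction === 1) currentZoom += step;      // circle one way: zoom in
  else if (direction === 2) currentZoom -= step; // the other way: zoom out
  return Math.max(0.1, Math.min(5.0, currentZoom)); // clamp to a sane range
}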

Figuring out the logic gates and the 3D were probably the most difficult portions of this pressure project. In the future I think I will go with Alex’s suggestion, which is to just build my logic in a javascript actor to save the trouble of using 10+ logic-esque actors passing values back and forth. I’m happy I at least got a taste of 3D, as I’d like to pursue working with it further in my final project. Rutt etra was also interesting to play with, as it seems to be able to take traditionally 2D objects and add a Z-axis to them, which I think will also be helpful in my final project.


Pressure Project 2 (Maria)

After playing around with making a ‘graphite piano’ with the Makey Makey in class, I realized how fun creating sound through unique interactions was and wanted to try it out with the Leap Motion! The basic idea was to create a piano whose notes could be played by moving your hand back and forth through the air, with accompanying visuals for each note.

At first, I attempted to do a MIDI output from Isadora to play notes through an FL Studio plugin. I’ve never done anything with MIDI before, so this was extremely confusing. I decided, for the sake of time and usability on other people’s devices without additional software, that it would be better to just import individual audio files. At first I downloaded mp3 files, but I realized that Isadora interprets these as video files, and its functionality with video files is different than with audio files. So I converted them to WAV files, and Isadora was able to recognize them as sounds 🙂

I started by programming the sounds to go with my movement, with the intent to add in the visuals after. I made individual user actors for each note (see photos below) that trigger the note upon entering a certain number range. (Question: Is there a way to automate the entering of sequential numbers into a series of actors? For example, if I wanted the range of the first actor to be 0-100, the next one 100-200, and so on, is there a way to do that without entering all the values individually?)
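I don’t know of a built-in way to do this, but as a thought experiment, here is a javascript sketch of how a single actor could derive the note index from the hand position instead of needing a separate range typed into each actor. This is purely hypothetical, not actual Isadora functionality:

// Actor/note i covers [i*width, (i+1)*width).
function noteIndexFor(xPos, width) {  // e.g., width = 100
  return Math.floor(xPos / width);    // 0-100 -> note 0, 100-200 -> note 1, ...
}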

Main scene with user actors for sound
Inside Leap Sound Player User Actor

For whatever reason Isadora did not like what I was doing and decided to crash almost every time I tried to test it 🙁 I tried to change things around by adjusting the ranges to be wider (from 50 to 100) and tried putting the sounds on separate channels and not all on one. It didn’t change too much so I decided to build out the visuals separately so they could be played without the audio if I wasn’t able to get it to work during my presentation.

Similar to the sounds, I created a user actor with inputs I could use for the visual associated with each note. The visuals were essentially randomly colored rectangles in a line that showed up when the right hand’s x position was inside a specified range and disappeared when it wasn’t. I had some fun figuring out how to finagle this using the comparator actor plugged into the bypass input of the shape. The output of the range actor was 1 when inside the range and 0 when outside, so connecting it straight to the bypass would make the bypass turn ON (turning the shape off) when I was inside the range – and I needed the opposite. I was able to set the comparator to send the ON, or 1, signal to the bypass (turning the shape off) when the output from the range actor was equal to zero!
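Boiled down to code, the trick is just an inversion. A tiny javascript sketch of what the comparator is doing for me:

// The range actor outputs 1 inside the range and 0 outside, but
// bypass = 1 means "hide", so the signal has to be flipped first.
function bypassFor(insideRange) {
  return insideRange === 0 ? 1 : 0;  // outside the range -> bypass on (shape hidden)
}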

I also set it so the color would change every time the shape appeared, by plugging random triggers (activated when the value exited the range) into the values of a color maker, which connected to a gate that was opened when the value entered the range.

Main Scene with user actors for visuals
Inside Leap Shape Maker user actor

I then went back to experimenting with the sound to see if I could get it to work any better, and found that it didn’t freeze quite as much if I added the sound inside each shape actor.

Leap Shape Maker User Actor with sound

To top off the visuals, I added a TT Gamma actor and a TT Grid Warp actor that were influenced by the Y position of the right hand. It gave everything a cool techno feel, which I realized probably would’ve aesthetically fit better with a different-sounding instrument, but I figured it wasn’t worth it at that point to go back and change everything.

Shaders used
Screen grab of visuals

When it came to the in-class critique, there were still a few issues with crashing, and Alex suggested I try copying the scenes over into a fresh file to see if it would help. I did, and it didn’t do much, so there must just be something about the way I set up my actors that Isadora doesn’t like :/

Overall, this project got me really excited about the potential of sound in interactive experiences and has got me thinking about how I can take it further in the cycle projects!!


Pressure Project 2 (Sara)

After opening the project file for the first time in a week, I was mystified by how to even “read” what I had created so far.  I found myself getting hot and panicky when I couldn’t get the Leap Motion to track my hand—it took me an embarrassingly long time to recall that I need to have the Leap Motion Visualizer program open at the same time while using the device.

As it turns out, I need to do a better job of documentation.  Currently, my “documentation” is a mess of contextless screenshots and notes scattered across two separate notebooks I also use for my TTRPG campaigns.  (Like with most things, this would not be the case Pre-Rona.  Your girl is a dutiful note-taker in classroom settings.  Alas.)

So, first things first, I went about bypassing, disconnecting, and reconnecting nodes to determine just what exactly everything does in my patch.  Then, I commented everything out.

I’m pretty satisfied with the results.  When the user twists their right wrist, lightning “shoots” out of the bottle when the angle of rotation is within established parameters.  I can totally see this functionality being expanded into a sort of gamified wrist stretching exercise.  “Hey, exhausted human who spends too much time at the computer!  Twist your wrists to shoot the balloons out of the sky!”  Perhaps that’ll be my final project if I can’t figure out how to develop a spooky Call of Cthulhu-esque Zoom UI?

Anyway, my biggest challenges with this scene entailed masking and image clipping.  As the lightning activates, a low-opacity background appears.  Additionally, the frame of the lightning clip is only so large, so the lightning cuts off before hitting the edge of the projector frame.  When I ran into a similar issue with the bottle PNG being too small to rotate without clipping, Alex helped me add a Scaler actor.  I thought the same method could both hide the sudden appearance of the background and scale out the lightning to the edge of the projector.  Unfortunately, it seemed to only warp the clip.  I managed to obscure the unwanted background with a dark gray square background, but I couldn’t find an answer to my lightning clipping issue.

Bottled Lightning scene

I still had quite a number of hours to go to complete the 5-hour limit for this Pressure Project, and I was fresh out of ideas for what more to do with this Bottled Lightning patch for now.  I began reviewing the mishmash of screenshots I mentioned previously.  Lo and behold, I stumbled upon one I named “Leap Motion Clap Color Change.”  I remembered Alex and Maria puzzling through this practice patch in class, but I hadn’t yet tried it out for myself.  Apologies if I totally ripped off your idea here, Maria, but I figured I had enough time to kill to try my own spin on it.

At first, I thought I’d try to do a “Clap On/Clap Off” lamp.  When the hands were in range (i.e., clapping), the scene would illuminate.  I even thought it might be neat to randomize the appearance of surprising/spooky/funny images.  

Clap on!

“Oh, it’s a cute lil’ Shy Guy!”  

Clap off!  

[The lights turn off.]

Clap on!

“Ah!  It’s the girl from The Ring!  Clap off!  Clap off!”

If you see the “TEST: Image Generator” scene in the file, you can see my attempts to sort out how to turn Shy Guy on and off.

I didn’t want to throw in the towel entirely, so I shifted to a “Clap On/Clap On” idea.  In Maria and Alex’s original patch, they randomized the color of a square through a Comparator > Random > Color Maker RGBA actor sequence.  Then, the same Comparator and Color Maker RGBA actors were plugged into a Gate actor that hooked up to the Shape actor.  Spinning off of that, I tried to put a street light “on the fritz.”  There was this unsettling path in the woods back in my William and Mary days where the lights would buzz, falter, and go out just as you happened to walk under them.  I plugged the Random actor just into the blue and alpha channels to simulate that change in saturation and strength.  Then, I dumped the whole thing into a shape actor, duplicated it, and scaled it down to make it look like a series of them.
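Boiled down, the “on the fritz” logic looks something like this javascript sketch – the channel ranges are guesses at the effect, not the actual values in my patch:

// Randomize only the blue and alpha channels on each trigger, so the
// hue stays lamp-like while saturation and strength stutter.
function flickerColor() {
  return {
    red: 255,
    green: 200,
    blue: Math.floor(Math.random() * 120),        // wobble the warmth
    alpha: 120 + Math.floor(Math.random() * 135)  // buzz between dim and bright
  };
}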

In the Bottled Lightning scene, I was pleased with the Leap Motion functionality but a little vexed by the bare-bones appearance of the product.  With this scene, the reverse proved true.  It looked moody and evocative, but the clapping motion didn’t work as well as I would like.  I’m not certain why this is the case, but I think it may have something to do with the values I input for the Inside Range actors.

Truth be told, this project felt a bit like a “sophomore slump” for me.  My first project was a wacky, freeing, and free-wheeling exploration of what Isadora can do.  This time around, I felt a self-imposed pressure to deliver something bigger and better.  I’m still learning.  I’ll keep at it and try again next time.

Clap On/Clap On User Actor

worlds of wars: Pressure Project 2

Orson Welles (arms raised) rehearses his radio depiction of H.G. Wells’ classic, The War of the Worlds. The broadcast, which aired on October 30, 1938, and claimed that aliens from Mars had invaded New Jersey, terrified thousands of Americans. (© Bettmann/CORBIS) https://www.smithsonianmag.com/history/infamous-war-worlds-radio-broadcast-was-magnificent-fluke-180955180/

A story told through sound alone. When thinking about how to tell a story, I meditated on the context around the story, cultural and historical. How to include the feeling of the story without just telling? How to include the emotional environment without just telling? The idea of a story that is both fictional and influential, and a capsule of time, led me to Orson Welles’ War of the Worlds (a retelling of H. G. Wells’ War of the Worlds). The myth around this story is greater than the actual story.

The layers of the hoax run deep. First off, my own knowledge of the story led me to believe that mass panic ensued surrounding the radio broadcast of War of the Worlds, when in fact the panic was largely media-led (specifically the response of newspaper media).

https://www.abc.net.au/radionational/programs/archived/hindsight/war-worlds-3/5674360

Purposeful fake news around purposeful fictionalized fake media.

I then fell into the rabbit hole of 1938 radio. Somehow, in my knowledge of this broadcast, I had always pictured the panic of alien invasion without ever contextualizing the fact that invasions were indeed occurring; that within the historical context of the U.S. (as well as what was about to become the “European Theatre” of WW2), drastic things were erupting. I found a Charles Lindbergh speech in which an extremely rowdy crowd responded to his jeering and raucous blaming of the war on three powers: the Roosevelt administration, the Jewish people, and Great Britain. The reverberations of the dangers of a mob of people chanting, cheering, and angrily jeering can be felt today. Echoed in rallies and in bombastic political hatred and bigotry, similar sentiments and rhetoric continue to fuel what it means to live in America.

And then there is the absolute gold mine that is radio theater–the compression of live soundmaking into soundwaves that will reach humans gathered around the radio ready for fireside chats and thrilling Orson Welles tales.

I found myself reflecting on the trust in, and dependency on, radio as the primary immediate news source; on the power structures in place that guaranteed only one type of voice would be heard across the airwaves; and on the difference from the flood of outlets available to sift through today.

worlds of wars


PP2 – Fortune Teller


My project was to make a fortune-telling interface, styled as the Greek/Roman Oracle of Delphi. In operation, it was to detect the input of the user through text or audio, determine what was said, and meaningfully respond to the contextual and emotional charge of the user’s dialog. I was very much unable to finish, and I think this is a result of trying too many unfamiliar concepts in this project combined with a 9-hour time limit.

It did start a good discussion in class regarding AI/machine-generated response. Due to the nature of the application, it leads to an ethical discussion on how, and in what ways, we should (or should not) be manipulating the experience and emotional states of the participant.

Although I am familiar with all of the technology and techniques in my plan for the project, there are a few I have never directly implemented: language processing, audio binary handling, TCP/IP requests (I recommend NEVER doing these by hand; use a wrapper class), Google Cloud, etc. This led me to think about the ways I consider which skills I do and do not have a mastery of, and how to construct a full project from that knowledge. For instance, I would now definitely consider some aspects of this project to be outside my comfort zone. Ideally, a new project should use skills I already have, save for one or two new concepts.

Google Cloud Platform Services: https://cloud.google.com/docs/overview/cloud-platform-services
Chatbot Applications: https://www.wordstream.com/blog/ws/2017/10/04/chatbots
Historical Inspiration for Styling: https://www.coastal.edu/intranet/ashes2art/delphi2/misc-essays/oracle_of_delphi.html


Pressure Project 2

Pressure Project 2 was building a fortune-telling machine. The time constraint was 9 hours. My goal was to have a machine that would provide a “knowledge nugget”, randomly produced when you pressed a key; then, after a preset amount of time, it would bring up the prompt to select a key again. I spent 3 hours trying to find the actor that would let you press a key and produce a result, and finally reached out to Alex to find out how to do this. I wanted to feel like I used my nine hours productively, and therefore I reset my time. In the end I used up all nine hours.

During the process of putting together the machine, I found that I was unable to get it to produce a random video or image. So I went forward with assigning each letter a particular picture. I did overlay two images – the text and a background image. I attempted to get it to loop and was unsuccessful, but I was able to get it to go back to the first stage when the number 1 key was pressed.
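Sketched in javascript, the random selection I was after is only a few lines. This is just the bare logic with hypothetical file names, not something I actually got working in Isadora:

// On each key press, pick one of the fortune images at random.
var fortunes = ["image1.png", "image2.png", "image3.png", "image4.png"];

function onKeyPress() {
  var index = Math.floor(Math.random() * fortunes.length);
  return fortunes[index];  // show this picture as the "knowledge nugget"
}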

Although I wasn’t able to get everything created that I wanted, I was able to get a machine to work. I wanted to make sure and stay within the nine hours and had to leave it as it was.

I was the first to present my fortune-telling machine. It was quite refreshing to see a positive response after not having the most positive response to Pressure Project 1. People said it was cute and that my keyboard, which is multicolored, helped with the experience. One student did try to break the machine. I found it interesting that the prompt to press 1 when done was either forgotten or ignored – people would leave without pressing it. Of course, this would have been eliminated if I had been able to figure out how to loop back to the first stage after a predetermined amount of time. It was determined that a group setting, rather than an individual one, would be needed for future iterations to be successful.

Overall, I am happy with the response and the outcome of the experience. I felt successful, as I did this completely in Isadora. If I were to have more time or revisit it, I would definitely add the random ability and a timer that would loop back to stage 1. I would also like to have incorporated audio into the experience.

Attached are images of different stages I used in my Isadora PP2.

PP2 Image 1 PP2 Image 2 PP2 Image 3 PP2 Image 4