Team : Abby Lee (front-end development) | Sam Krystal (fabrication)
My role : Interaction design | Front-end and hardware development
Tools : p5.js | Arduino & Muse
Timeline : 2019.10 - 2019.12
My team members were born and raised in New York. I, on the other hand, had moved from a small town to New York and was experiencing a drastic change. I felt like a horseman stuck in an underwater world. I followed others' choices and pushed myself to attend every event and info session, whether I liked it or not... At some point I felt exhausted and lost.
As time went on, I focused on social topics around digital and physical experience, and found I enjoyed them a lot. I talked with my team about my transition, which might happen to many international students, or to anyone who steps into a new environment. From there, we started brainstorming around simulating that experience.
For our initial idea, we wanted users to transition between two states : calmness and anxiety. In the interaction, a user wears a Muse (a headband that sends brainwave data) and stands in front of a projected animation. Based on the user’s state of mind (anxious or calm), he/she can influence the movement of the animation, with the data from the Muse translated into the visualized animation.
Our narrative was that if a user is able to understand himself/herself in a chaotic environment, he/she could initiate this state change. To fulfill the purpose of understanding self, we wanted to use a Kinect, with the user’s silhouette projected onto a screen. When anxious, the user sees his/her silhouette as a blob formed amongst a chaotic background; when calm, the user sees his/her silhouette as a uniform shape against a less chaotic background. We imagined the visualization as smoke, where the user would have to concentrate, or clear his/her “foggy” mind, in order for the smoke to condense into the form of the user’s silhouette.
When we spoke with our physical computing professor Jeff, he suggested incorporating a talisman, or a similar kind of physical prompt, into our project so users could have a tangible interaction. We regrouped to discuss what we wanted the talisman to look like and ended up revamping our idea. I quickly drew a sketch to represent the new idea, though we had not yet decided on the tangible interaction.
The starting point was almost the same: users had to focus to change the smoky blob into a clear silhouette. Once users could see themselves clearly, we thought about how to incorporate elements of nature into a talisman. Growing up, Abby and Sam used to catch fireflies in their backyards. That activity felt a lot like gathering energy.
Hence, we developed a concept where users would hold a mason jar and “catch” fireflies on the projected screen. When you “catch” fireflies, or close the lid over the jar, LED lights in the jar turn on and the jar vibrates to indicate you’ve caught something.
We conducted user testing in class. We explained the concept and asked users to imagine their silhouette in front of a simple p5.js firefly sketch I created, while wearing a prototype headband.
Usability Testing Insights:
1. People were confused about the story : if they saw a smoke silhouette along with the fireflies, they focused on creating their smoke silhouette and did not know what to do with the fireflies in the background.
2. Catching fireflies is not a universal activity. When given the mason jar, some users stood in front of the screen not knowing they were supposed to capture the flies.
3. There were three inputs: the Kinect (body movement), the Muse (brainwaves), and the jar sensor, which together confused our users.
We realized we essentially had two different projects, the smoke silhouette and the fireflies, and that putting them together made our story convoluted. Therefore, we redesigned the structure and cut the Kinect. In our third iteration, we decided to simplify our story and produce our minimum viable product.
We decided to use the opening and closing of the jar as the start and end of an interaction, instead of using the jar to catch fireflies. We kept the idea of transitioning between two states, but adjusted it to a more measurable one: concentration. Whether a user is concentrating or not determines which of two visual states appears on the screen.
An Arduino sits inside the jar, and unscrewing the lid acts as a switch that starts the screen projection. To keep the jar mobile while carrying out those tasks, we sent the switch ON/OFF data to p5.js over Bluetooth.
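The sketch below shows one way that relay could work; it is a minimal illustration, not our exact setup. It assumes the Arduino appears as a Bluetooth serial device and uses the serialport and ws Node.js packages as a bridge to the browser; the port path and the "1"/"0" message format are placeholders.

```javascript
// Hypothetical bridge: read the jar switch over Bluetooth serial, push it to p5.js.
const { SerialPort } = require('serialport');
const { ReadlineParser } = require('@serialport/parser-readline');
const { WebSocketServer } = require('ws');

// The Bluetooth serial port name is machine-specific (placeholder path).
const port = new SerialPort({ path: '/dev/tty.JarSwitch', baudRate: 9600 });
const parser = port.pipe(new ReadlineParser({ delimiter: '\n' }));

// Relay each line ("1" = lid open, "0" = lid closed) to any connected p5.js sketch.
const wss = new WebSocketServer({ port: 8081 });
parser.on('data', (line) => {
  const state = line.trim();
  wss.clients.forEach((client) => {
    if (client.readyState === 1) client.send(state);
  });
});
```

In the p5.js sketch, a plain `new WebSocket('ws://localhost:8081')` listener can then treat the "1"/"0" messages as the start and end of the interaction.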
In order for the Muse data to control the visualization, we needed the Muse and the p5.js sketch to work together, with Node.js in between. The two sides communicate over OSC (Open Sound Control, a protocol for communication among computers, sound synthesizers, and other multimedia devices). The OSC stream, the conduit between the Muse and the animation, comes from an application called “Muse Monitor”.
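A rough sketch of that pipeline is below, assuming the node-osc and ws packages on the Node.js side; the OSC port and the Muse Monitor address paths are assumptions to check against the actual incoming stream.

```javascript
// Hypothetical Node.js relay: receive Muse Monitor's OSC stream, then forward a
// single "focus" value to the p5.js sketch over a WebSocket.
const { Server } = require('node-osc');
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8082 });
let alpha = 0;
let beta = 0;

// Muse Monitor streams OSC to this machine's IP on the port set in its settings.
const osc = new Server(5000, '0.0.0.0');
osc.on('message', (msg) => {
  const [address, ...args] = msg;
  if (address === '/muse/elements/alpha_absolute') alpha = args[0];
  if (address === '/muse/elements/beta_absolute') beta = args[0];

  // The sketch only needs the alpha/beta difference to decide the state.
  const focus = alpha - beta;
  wss.clients.forEach((client) => {
    if (client.readyState === 1) client.send(JSON.stringify({ focus }));
  });
});
```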
Abby worked on a prototype of our visualization. We wanted two states: focused and unfocused. To get that effect, we wanted a dispersed group of objects across the screen when unfocused and a condensed group in the middle when focused. Abby got a prototype of the code working with mouseX driving the focused/unfocused state change and mouseClicked making the objects appear or disappear, standing in for the opening of the jar. I then translated the Muse data to drive the state change and the jar switch to prompt the release of energy, coding a threshold on the difference between the alpha and beta brainwaves to switch between the states.
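Here is a simplified p5.js sketch in the spirit of that prototype; the focus value, the threshold, and the particle behavior are illustrative stand-ins rather than our final code.

```javascript
// Simplified two-state visualization. mouseX stands in for the Muse-derived
// focus value, just as in Abby's early prototype.
let particles = [];
const FOCUS_THRESHOLD = 0.5; // placeholder threshold on the alpha/beta difference

function setup() {
  createCanvas(600, 600);
  for (let i = 0; i < 200; i++) {
    particles.push(createVector(random(width), random(height)));
  }
}

function draw() {
  background(10);
  const focus = mouseX / width; // stand-in for the Muse data
  const focused = focus > FOCUS_THRESHOLD;

  noStroke();
  fill(255);
  for (const p of particles) {
    if (focused) {
      // Focused: particles condense toward the center.
      p.lerp(createVector(width / 2, height / 2), 0.02);
    } else {
      // Unfocused: particles wander randomly across the screen.
      p.add(p5.Vector.random2D().mult(2));
    }
    circle(p.x, p.y, 4);
  }
}
```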
Sharing this prototype with classmates gave us more feedback:
1. Need more intuitive prompts to focus
2. Add sound to make it easier for users to focus
3. Add more variety to the visualization
4. Switch mechanism: pressing down on a button is not consistent, and our classmate Ben suggested using electrical tape.
Therefore, we surveyed our classmates to find out what they were thinking of when they successfully focused. Some answers were focusing on a specific point on the screen, thinking of nothing, and looking at the time. So we redesigned the visualization and added text prompts to it.