The Stars Aligned

Pun heavily intended.

FINALLY GOT THIS TO WORK! I feel like a huge rock has been lifted off me. I was a little worried because, last week, while I got everything to work, it stopped being a collaborative activity – which I felt was the most crucial part of the experience.

But now, as shown, you need two people to complete the whole experience:

Next week, I plan to make just minor tweaks to the experience, like prettying up the buttons and making them bigger (because it’s hard to hover over the constellations right now), and then it’s heads-down on the presentation.

A rough outline of the presentation looks like:

  1. My topic and why I wanted to do this – how I was inspired by the documentary, what I personally enjoy seeing.
  2. Insights from interviews and how they inform my approach.
  3. Numerous things I’ve tried.
  4. Demo.

Thanking My Lucky Stars

This is what I intended to do this week:

  1. Detect multiple wrists
  2. Light up associated points only
  3. Add text instructions.

I’m still not able to get it to detect multiple wrists, but I managed to find a way to combine #2 and #3, which are the main signifiers of the experience. This is all thanks to a couple of resources I found on GitHub (thank God!). These not only helped me get closer to a more complete experience, but also a more visually appealing one.

I was honestly getting a little panicky about how amateurish it looked, but the new resources I found helped significantly on that front.

From this:

dot turns blue

To this:

[gif of the updated experience]

It’s not very obvious in the gif, but there’s a subtle particle pattern going on in the background. The buttons act as both the signifiers and the instructions that initiate the experience.
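
For anyone curious, the particle background is conceptually simple. Below is a minimal p5.js sketch of the general idea; the counts, speeds, and sizes are made-up placeholders, not the values from my actual code.

```js
// Minimal p5.js sketch of a subtle drifting-particle background.
// All numbers here (count, speed, size) are placeholder guesses.
let particles = [];

function setup() {
  createCanvas(windowWidth, windowHeight);
  for (let i = 0; i < 150; i++) {
    particles.push({
      x: random(width),
      y: random(height),
      vx: random(-0.3, 0.3),
      vy: random(-0.3, 0.3),
      r: random(1, 3),
    });
  }
}

function draw() {
  background(5, 5, 20);        // near-black "night sky"
  noStroke();
  fill(255, 255, 255, 120);    // faint white so it stays subtle
  for (const p of particles) {
    p.x = (p.x + p.vx + width) % width;   // wrap around the edges
    p.y = (p.y + p.vy + height) % height;
    circle(p.x, p.y, p.r * 2);
  }
}
```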

My only concern is that it now becomes less of a collaborative experience. But now that the core experience and visuals are close, I think I’ll be able to figure that out quickly.

Next week, I plan to:

  1. Finalize everything
  2. Figure a way to make it more collaborative
  3. Detect multiple wrists

 

Life Lessons from Sucking at Mini-Golf

Gina, Jessica and I went to Stagecoach Greens a few weeks back to play mini golf together. The place was themed—we had to go through each station, and they each had an individual story.

Turns out, knowing how to play golf doesn’t necessarily mean you’ll know how to play mini-golf. I know some golf basics, but when I applied them to mini-golf, they didn’t work. I kept hitting the ball too hard, and I also wasn’t used to using a putter for the whole game.

I was getting visibly frustrated, to which Gina responded,

“Nat, this is literally your first time playing. Of course you’re going to suck.”

In the end, I still came in last of the three of us, but the gap wasn’t too wide. I kind of got the hang of it midway through. Jessica and Gina also guided me along the way.

“Slow down.”

“Aim first, Nat.”

I’m not competitive, but I hate sucking (or not being able to perform as well as I want to). This mini-golf experience kind of mirrors this semester for me. Experiencing Science Hack Day and creating a code-heavy deliverable are my attempts to become okay with failing, with sucking, with seeking help and letting others help me throughout the process. At the beginning of the semester, I was very antsy about this, but I forced myself to stick with it. It has honestly been difficult, as I am completely out of my comfort zone. But it has also been so much fun. With the help of my amazing classmates, I’ve also grown closer to them.

Some days, I still struggle, a lot, especially with my code. But my perspective has also shifted from wanting to get something done perfectly to being okay with spending the time to explore all the possible options, even if that means it doesn’t turn out exactly how I wanted it to.

Make, make, make

– What did you plan to do?
– What did you actually do?
– Brief description of the activities
– Links to (or images of) what you created (include photos, notes, sketches, working docs and deliverables…whatever you created. Document both the output AND the process.)

The answer to all of the above questions this week is the same: code.

Now that I have a clearer view of how I want the experience to be (based on last week’s scrappy user testing), this week was all about making/coding.

The journey looks roughly like this:

  1. Instructions and dots (as signifiers) on screens
  2. Person hovers wrist over dot(s)
  3. Information about constellation appears

Turns out, this is a lot harder to do than it sounds.

I first started with a grid of sorts, where the green point is a trigger point: if someone hovers over that point, something happens/changes. However, this approach was a lot more complicated than I needed it to be.

dotgrid and posenet.gif
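
For reference, the grid idea boils down to snapping a tracked point (like a wrist) to a cell and checking whether it lands on the trigger cell. Here is a rough sketch of just that check, with a made-up cell size and trigger position:

```js
// Rough sketch of the grid idea: snap a tracked point (e.g. a wrist)
// to a grid cell and check if it lands on a designated trigger cell.
// The cell size and trigger coordinates are made-up examples.
const CELL = 80;                     // grid cell size in pixels
const trigger = { col: 4, row: 2 };  // the "green point" cell

function cellFor(x, y) {
  return { col: Math.floor(x / CELL), row: Math.floor(y / CELL) };
}

function isOnTrigger(x, y) {
  const c = cellFor(x, y);
  return c.col === trigger.col && c.row === trigger.row;
}

// e.g. if (isOnTrigger(wrist.x, wrist.y)) { /* something happens */ }
```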

So, with the advice of J.D, I made something much simpler (below).

dot turns blue

The dots are already laid out in the shape of the constellation, and when a person hovers, information appears.

I got the main interaction working (hover > change something), but it currently only detects one set of wrists, and all the points change color when hovered. What I want is for only the associated points to change color (not all of them).
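
To make that concrete, here’s a stripped-down sketch of where I’m trying to get to, using p5.js with ml5.js PoseNet. The constellation coordinates, hover radius, and confidence threshold are placeholders, and I’m assuming ml5’s multi-pose option so every visible person’s wrists get checked, which should also cover the multiple-wrists goal. Treat it as a sketch of the approach, not my finished code.

```js
// Sketch of the target interaction with ml5.js PoseNet + p5.js:
// dots laid out as a constellation; a dot lights up only when a wrist
// hovers near *that* dot. Coordinates and radii below are placeholders.
let video;
let poses = [];

// Placeholder constellation points (not my real layout).
const dots = [
  { x: 200, y: 150, label: 'Leo' },
  { x: 320, y: 220, label: 'Leo' },
  { x: 450, y: 180, label: 'Leo' },
];
const HOVER_RADIUS = 40; // how close a wrist must be, in pixels

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // detectionType: 'multiple' asks PoseNet for every person it can see.
  const poseNet = ml5.poseNet(video, { detectionType: 'multiple' }, () =>
    console.log('PoseNet ready')
  );
  poseNet.on('pose', (results) => (poses = results));
}

// Collect left/right wrist positions from every detected pose.
function allWrists() {
  const wrists = [];
  for (const { pose } of poses) {
    for (const kp of pose.keypoints) {
      if ((kp.part === 'leftWrist' || kp.part === 'rightWrist') && kp.score > 0.3) {
        wrists.push(kp.position);
      }
    }
  }
  return wrists;
}

function draw() {
  background(5, 5, 20);
  const wrists = allWrists();
  for (const dot of dots) {
    // Light up only the dot that actually has a wrist near it.
    const hovered = wrists.some((w) => dist(w.x, w.y, dot.x, dot.y) < HOVER_RADIUS);
    noStroke();
    fill(hovered ? color(80, 160, 255) : 255);
    circle(dot.x, dot.y, hovered ? 24 : 12);
    if (hovered) {
      textAlign(CENTER);
      text(dot.label, dot.x, dot.y - 24);
    }
  }
}
```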

For next week, I hope to get the code working so that it can:

  1. Detect multiple wrists
  2. Light up associated points only
  3. Add text instructions.

Scrappy is Key!

This week, I wanted to focus on getting the instructions and journey right. I started really scrappy, with paper pasted on the wall as instructions.

[screenshots of the paper instructions taped to the wall]

Even though I was able to gather valuable feedback, I quickly learned that I needed to get the code working in order to understand whether this set of instructions really worked, and whether the experience was easy enough to learn.

Initially, I had wanted to start the experience by prompting people to do an action first (raise their hands up). But through this quick and scrappy testing, people told me that they wouldn’t be willing to do such a big action in a public space, for an experience they knew nothing about.

I found that I had to balance giving people enough information with keeping them in suspense, and then reward them with a mini sense of wonder at the end.

Below is an outline of how it iterated.

V1-3.gif

I was stuck on the code and on getting it to work, because I was exploring different technologies (p5, Processing, Kinect, etc.). I got frustrated because it wasn’t working and I still needed to figure out my instructions. So, I decided to pause on that and just focus on nailing the core interactions, which worked so much better (for the work and for my soul, lol).

I did, however, manage to build a scrappy MVP. It detects whether my left wrist is higher than my nose; if it is, the word ‘Leo’ appears.

MVP1.gif
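
In case it’s useful, the core of that check is tiny. Here’s a simplified version as a standalone function, assuming ml5.js PoseNet’s named keypoints and remembering that y grows downward on the canvas, so “higher up” means a smaller y value; the confidence threshold is a placeholder.

```js
// Simplified core of the MVP: given one ml5.js PoseNet pose, check whether
// the left wrist is above the nose. On the canvas, y grows downward, so
// "higher" means a smaller y value. The threshold is a placeholder guess.
function leftWristRaised(pose, minConfidence = 0.3) {
  const wrist = pose.leftWrist; // ml5 exposes named keypoints as {x, y, confidence}
  const nose = pose.nose;
  return (
    wrist.confidence > minConfidence &&
    nose.confidence > minConfidence &&
    wrist.y < nose.y
  );
}

// Inside draw(), with `poses` filled by a PoseNet 'pose' callback:
// if (poses.length > 0 && leftWristRaised(poses[0].pose)) {
//   textSize(48);
//   text('Leo', width / 2, height / 2);
// }
```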

Next time, I would definitely not start on the code before nailing the core interactions. Next week, I plan on finishing up the code (functionality, not aesthetics) so I’m able to test again and get higher-fidelity feedback.

Connectivity vs. Connection

Based on last week’s experiments, I tweaked the Constellation experience so that there’s a galaxical (very legitimate adjective) background to contextualize the experience and so it doesn’t “look like stick figures”.

[screenshot of the prototype with the galaxy background]

I showed Lee Cody the above prototype and some helpful feedback he gave me was:

  1. What do you want people to take away from this?

People will interact with it because it’s out of their norm, but they’ll appreciate knowing the why, so they know why they participated. In other words, what’s the “reward” for commuters participating in this?

  2. Can you point out poses for people to try?

This might encourage collaborative behaviors, and could also help people learn how to interact with this experience.

I think these might also be contributing reasons why people weren’t engaging with my prototype more last week. The instructions were too small, and people didn’t really understand what was going on, even though they interacted with it.

A next step for me: figure out what this journey looks like. How does someone discover it? How do they know what’s happening? Afterwards, how do they know how to play with it (and with others)?


 

On top of that, I am also continuing my conversations with commuters. Two of the three people I spoke to this week were very sociable and have actually made friends on their commutes. Some things they said that stood out:

N:

“We have connectivity now but not connection.”

In response to a fight she witnessed, “I didn’t react, I didn’t play a part. Wasn’t too affected by it. It’s not my problem.”

“[on the bus] I think we’re intrinsically connected [with other commuters on the bus] but I wouldn’t ask them how they’re doing and so on.”

R:

“I feel pretty connected to my community… I gesture, smile to people and even though I don’t physically talk to them, I feel like I’m part of a community that way.”

“I don’t put on headphones or only one side in because I want to be more present and more responsive to people around.”

I find this fascinating because as easy as it is for us to feel some sort of connection to our surroundings [sharing the same space feels like it’s enough to provide a sense of connection], it’s also easy for us to disconnect from them [with our devices: headphones, social media, etc.].

After this, I feel rather reassured, because it gives more grounding to these seemingly random experiences I’m building. All people need is a common ground, and then a nudge of sorts, to get to that foundational level of connection when they’re in a shared space.

 

Waiting Games

This week just flew by. 

I decided to just put my stuff out for people to try, as I was getting antsy with my progress. First, I did it outside the hybrid lab. I was just testing out my setup when a friend came by with her friend, and both of them tried it out.

Some things they mentioned:

“It’s kind of creepy that it’s following me around.”

“It’s kind of stuck.” (frame rate)

[photo of friends trying the prototype outside the hybrid lab]

I also put my prototype up in the Nave during Halloween, because I thought it’d be a perfect time for people to do weird stuff—with the help of Gina and Hridae!

[photo of the prototype set up in the Nave]

Below are some images I took stealthily. I was sitting off to the side (looking after my equipment) and observing how people were reacting to it.

[photos of people interacting with the prototype in the Nave]

Some observations:

  1. Some people don’t even notice it. Because it’s placed on the side, some people just pass by it.
  2. Some people are wary of the camera. It pushes them away or makes them a little creeped out.
  3. Proximity (tech issue). The ML model I’m using isn’t optimized for closer distances, so when curious cats came up close, nothing showed up.
  4. “It looks like stick figures.”

I am also wondering if the Nave isn’t really a space where people wait; it’s where people are transitioning from place to place. It’s still a great space to test my prototype, because public transport stations are also transitory places, but a station is also a space where people wait.

As a next step, I might try another setup (facing the front of the entrance instead of sideways) and also figure out how to make it look more like constellations aesthetically (instead of stick figures).

Sidenote: I created an Instagram account for people to follow along with my process, and for my own documentation as well. I’m calling my little experiences Waiting Games 😉