PComp: Recap

Some of the basic concepts for the E-Books project have remained unchanged since the beginning, but the device has gone through several iterations, changing most radically in response to playtesting (thank god for playtesting).

From the beginning, I wanted to tell stories about physical books, but I wasn’t sure what that would look like. I pictured a flat device that, when a book was somehow “plugged in,” would display information about the book on a screen or projection behind it. I wanted the interaction that triggered the information display to relate to the way people naturally interact with books—a page turn, gripping the book, or wiring the pages themselves. I created an initial prototype to test some of these ideas.

I. The First Prototype and Playtest
Playtest 1

Main Takeaways: Content, Wiring, and Thank You Playtesting

Content: By simulating the “slides” with pieces of paper in the upper fold of the Macbook case, I could test the kinds of content and information that people responded to most. Some wanted the device to have more utility, providing notes and annotations, but the most common feedback was that they wanted more vivid information about the stories in the specific text—dog-eared pages, where the book has traveled, who owned the book, etc.

Wiring: I had a dummy plug to simulate plugging the book in, but Dominic recommended RFID to keep the books wireless. I was sold.
II. The Second Prototype and Playtest

Main Takeaways: Trigger Interaction, Content (pt. 2), and Thank You Playtesting

Trigger Interaction: I had an idea for the slide trigger that involved a book clamp that the user would slide her hand into, pushing down onto an FSR after she turned to a new page (another iteration had a circuit on the bottom of this device that would be completed by copper tape on certain pages of the book). In playtesting, it became clear that the device was cumbersome and an unnatural way for people to interact with books. It also focused their attention too much on the book, and not enough on the screen. This caused me to revisit an idea I had early on: a light sensor and a light source that would be interrupted when the user turned pages.

Content: I heard from multiple people that the premise of the project and the content needed to be clearer from the get-go, and they wanted a better picture of the connection between the book and the person who owns it.

III. The Final (with pretty pictures to come):


Final Project: Description, Post-Playtesting


For my final project, I’m using the electronic medium to highlight the physical book object, playing with the idea that technology will drive paper text to extinction. For a while, I’ve been interested in artists’ books, whose subject (and often shape) is the book form itself. With those in mind alongside my writing background, I wanted to create a device that, when a certain book is placed in it, projects extratextual information “outward” from the book onto a screen. Page turning would trigger new text on the screen, which would progress from factual publication information to a more personal history of the book as a specific object.

The device, diagrammed above and prototyped in an old Macbook box below, would work by plugging in the book—which would have some identifying feature recognizable to the Arduino—triggering the corresponding text from the code to project from a mini projector on the front of the device to a screen on the back. When a light sensor below the screen read an interruption in light from the projector—meaning a turning page was blocking it—a new slide would be triggered.
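The page-turn trigger described above boils down to edge detection on a light reading. Here is a minimal sketch of that logic in plain JavaScript; the threshold value and function names are my own illustrative assumptions, and the 0–1023 range follows the Arduino analog-read convention:

```javascript
// Hypothetical sketch of the light-interruption trigger: a page passing
// between projector and sensor causes a sharp drop in the light reading.
const LIGHT_THRESHOLD = 300; // assumed cutoff; would be tuned to ambient light

let slideIndex = 0;
let wasBlocked = false;

// Call once per sensor reading (0-1023, as from Arduino's analogRead).
// Advances the slide only on the falling edge, so a page held in place
// doesn't skip through slides.
function onLightReading(value) {
  const blocked = value < LIGHT_THRESHOLD;
  if (blocked && !wasBlocked) {
    slideIndex += 1; // new page detected: show the next slide
  }
  wasBlocked = blocked;
  return slideIndex;
}
```

The edge check (`blocked && !wasBlocked`) is what keeps one slow page turn from firing several slide changes in a row.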

“Projected” text reads: “This book is well-worn; it has traveled from a childhood bookshelf in Boston to college rooms in Maine, to a living room in San Francisco, and to a stack of books in Brooklyn.”

A couple questions that I had entering playtesting:

  1. Which of the slides did people respond to best? I had a feeling that the more personal information would engage users more, but I wanted to test if the progression from facts to feeling was a necessary part of the experience.
  2. Is turning pages—a very simple and familiar act—a dynamic enough interaction?
  3. Is the relationship between the text on the screen and the book text clear, and does it work? Or is it too flat and/or confusing that the medium is the same between the book and the device?

Playtesting was extremely useful; I got helpful feedback on each question, including:

  • My classmates liked the more intimate narrative of the book object, suggesting that I take it even further (Lindsey, for example, recommended telling the story of stains or rips on different pages). From this feedback, I’m now envisioning a stack of books, each of which can be plugged into the device to reveal its story. Others were more excited about the utility of the device, imagining that it could display notes or annotations. That’s a great idea, but I think I want to stick to a more personal (and, well, useless) narrative.
  • Turning pages is enough if it sparks an unusual reaction. I don’t think there should be a one-to-one relationship between page turns and switches of text on the screen, though, which means scrapping the light sensor idea.
  • It became clear that the relationship between screen text and book text is a little confusing and flat. One solution recommended a few times was to include different media—specifically, photos and audio. I’m interested in exploring both.
  • On a separate note about interface, one classmate brought up the fact that the interface needs to be comfortable and conducive to holding and reading the book—which seems obvious enough, but is a crucial part of the interaction.

Remaining and new questions:

• Do I make a mini projector so it can be a self-contained device?

• What shape does this device take? I’m playing around with the idea of an altar, to emphasize the notion of fetishizing the book object.

• Should I include audio?

• How do the books “plug in”? Dominic suggested I look into RFID, so that the device can recognize the books remotely.

• How do I embed information on each page/trigger output when each page is reached? RFID seems unlikely in that scenario.

• How much will this cost and what will fabrication entail?

PComp Final Proposal

For my PComp final, I have two very different ideas. One of the main takeaways of the semester so far has been that the interface and user action should be linked to the function or output of a project. For the midterm Whack-a-Trump, we chose a piezo sensor triggered by a mallet hit for fidelity to the original whack-a-mole game and because it underscored the frustration that informs what is on the surface a silly game. That same sensor had a very different purpose in Nanou and Kripchak’s project, which sensed the vibrations from breath to trigger a fan and “wind” on a p5.js sketch. The connection between breath blown and wind in the physical tree and snow scene made for a truly delightful interaction.

Idea One

With that in mind, I’ve been thinking about making an instrument that highlights the connection between action and output. I play guitar, which, even in its electrified version, has a very clear path between what the player does—pluck, pick, or strum the strings—and the sound that results from it. With synthesizers, the sound is often abstracted from the controls, which mostly take the shape of buttons, switches, or piano keys.

Last year, I was in the MoMA design store and came across this synth called the Seaboard Rise. It resembled a normal keyboard, except that a soft, rubbery black surface covered the keys. When you played it, the sound was smoother and more fluid—exactly what you would expect from the material.

I’m thinking of creating a synthesizer box that creates the same connection between surface and sound by contrasting different actions and sensor inputs to output different noises. In one area, there could be a strip of a gelled surface, similar to the Seaboard Rise, to create a fluid, sliding sound; in another, flat circles that suggest tapping would create percussive noises, and in another, button or key presses would yield distinct notes. I don’t know much about MIDI yet, but I’m interested in exploring the different sounds that Arduino can produce (hopefully some more pleasing ones than the basic headache-inducing tone created in my mac ‘n’ cheese synth project).

Idea Two

The second idea is more of a toy, a project in which the interaction itself is the end goal. I love dioramas and models, and as a kid I loved puppet shows, so I’m interested in creating a box with objects that the user can play with marionette-style: strings attached to servo motors at the top of the box, controlled by dials and switches at its base. I was intrigued by the list of movements that you can achieve with a single servo motor that we saw briefly in class a few weeks ago, and I think this project could be a lot of fun to fabricate and put together. I’m not sure if I would want the setting and character to be more abstract or representational, but either way, I think it could be an interesting way to explore the storytelling possibilities of physical computing.

Midterm: Whack-a-Trump

The presidential election has been serious, terrifying, and anxiety-provoking—and it has definitely been frustrating. A lot of it, though, has also been absurdly silly.

Enter Whack-a-Trump, a project that combines physical input with a p5.js sketch to make a timely riff on the Whack-a-Mole game.


My partner, Grant, and I first agreed that we wanted to make a game, and then in a moment of inspiration, he came up with the theme. After that, we set about sketching, figuring out the basic mechanics of the project:

Early notes about the three “boxes,” which we decided would take the form of a wall between the U.S. and Mexico
Test with the three piezo sensors to check sensitivity and light up LEDs

After brainstorming and sketching, we had the basic outline of the game—including the idea to use piezo sensors to register the “whack” on each box—and a plan. Grant would handle most of the visual design elements of the game, while I would handle most of the p5.js coding. Each of us experienced setbacks in our separate tasks and in the work we did together. They include:

  1. The laser cutter for the acrylic box tops: It just wasn’t working. Troubleshooting included unplugging and replugging the USB cable. After two hours and a tiny fire on the cardboard prototype, we were successful.
  2. The piezo sensors: They are tricky to work with. Because they respond to vibration, pressure, and knocks alike, the readings are extremely sensitive and unpredictable. Initially, the game was supposed to register a “whack” if any of the three piezo sensors rose above a set threshold. In observing the sensors, though, we saw that when sensing a large knock, sometimes the values rose, sometimes they fell, and sometimes they climbed steadily with little intervention. I thought that a better way to register the “whack” would be to look for a change in the sensor value above the norm (a new, smaller threshold). We fixed this on the p5.js side by comparing the last two numbers in an array of values from the Arduino. Even with the change, the code for the piezo values required a lot of tinkering.
  3. Coding the game: It was messy and difficult, first in figuring out the logical architecture of the game and then in executing all of its details. There were many, many frustrations along the way. One of the biggest challenges was switching the trigger event—how we could tell that the player had attempted a whack—from the prototype “keyPressed()” function, which used “1,” “2,” and “3” on the keyboard as stand-ins for the boxes, to the actual data from the Arduino. Figuring out that we could detect the whack attempt by subtracting the last two values in the redBox, whiteBox, and blueBox arrays was a breakthrough, but it was just a first step. As of right now, the game still has a logic problem: if the code that makes “myScore” go up is in the reset function, which makes Trump appear in a new, random location, “trumpWhacked” isn’t true (the condition that makes myScore increase) for long enough to register at the reset, since the change in sensor values is too brief. If the myScore code is called every time “trumpWhacked” is true, though, the score will increase in increments larger than one, since the sensor value might stay above the threshold for more than one reading.
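The delta detection described in points 2 and 3 above, plus one possible fix for the double-scoring problem, can be sketched in plain JavaScript. This is a hypothetical reconstruction, not the project's actual code: the `WHACK_DELTA` constant and function names are illustrative, and the fix shown is a latch so that a spike that stays above threshold across several readings still scores only once.

```javascript
// Sketch of delta-based whack detection with a latch to avoid
// double-counting one hit across consecutive readings.
const WHACK_DELTA = 50; // assumed jump between readings that counts as a hit

function makeBox() {
  // one state object per box (e.g. redBox, whiteBox, blueBox)
  return { readings: [], whacked: false, score: 0 };
}

// Call with each new piezo reading for a box.
function onPiezoReading(box, value) {
  box.readings.push(value);
  const n = box.readings.length;
  const delta = n >= 2 ? box.readings[n - 1] - box.readings[n - 2] : 0;
  if (delta > WHACK_DELTA && !box.whacked) {
    box.whacked = true; // latch: score once per spike
    box.score += 1;
  } else if (delta <= WHACK_DELTA) {
    box.whacked = false; // reset once the signal settles
  }
  return box.score;
}
```

Because the latch resets only when the delta settles, the reset function can relocate Trump on the single scoring event instead of polling a condition that is true for an unpredictable number of frames.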

Synthesis

My partner and I began with the basics, and after we conquered the initial panic and confusion by carefully reading the instructions, the two examples went off without a hitch.

We were not so lucky in our own application. We decided to animate a sketch I had created for ICM (below), in which mouseX and mouseY move a floating plastic bag that a man’s eyes follow. Since the sketch used both x and y variables, we made it difficult for ourselves and decided to use two potentiometers to get an analog read of both values.


After trying to muddle through the code ourselves, we called over Dan Shiffman, who showed us how to read two values by adding a comma in between the two sensor values on the serial monitor in the Arduino code then separating them out into two arrays in the p5.js editor. Optimistic and excited, we then placed each array in a variable and replaced mouseX and mouseY.
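The parsing step described above—splitting a comma-separated serial line into two arrays—can be sketched as a plain function. This is an illustrative reconstruction, not our actual p5.js code; the array names `xValues`/`yValues` are my own, and the assumed wire format is the Arduino printing `xValue,yValue` followed by a newline each loop:

```javascript
// Sketch of two-potentiometer serial parsing: each incoming line like
// "512,300" is split on the comma and pushed into two parallel arrays.
const xValues = [];
const yValues = [];

function onSerialLine(line) {
  const parts = line.trim().split(",");
  if (parts.length !== 2) return; // ignore malformed or partial lines
  const x = Number(parts[0]);
  const y = Number(parts[1]);
  if (!Number.isNaN(x) && !Number.isNaN(y)) {
    xValues.push(x);
    yValues.push(y);
  }
}
```

With the arrays filled, the latest entries can stand in for mouseX and mouseY in the sketch's draw loop. The malformed-line guard matters in practice: serial reads often deliver a partial line when the sketch first connects.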

But it didn’t work. Even after we combed through our code and looked at examples by ourselves and with Shawn Van Every, we still couldn’t figure out the problem.

I would still like to redo the lab, either with a sketch whose code is a touch more organized or with the same ICM animation. It would be nice to see that darn smiley face bag move in response to physical input.

Another question, besides what went wrong with the lab: In the example code for analog output in the synthesis lab, the Arduino program sent the full range of the sensor, 0-1023, to the p5.js editor, which then divided by 4, but in the notes for “Interpreting Serial Data,” the variable “sensorData” divides “sensorValue” by 4 before sending it to the serial monitor. Is either practice better or do they work equally well?
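On that last question: dividing by 4 maps the 10-bit analog range (0–1023) onto the 8-bit range (0–255), and mathematically it makes no difference which side does it. A rough check of the equivalence (function names are illustrative):

```javascript
// Integer-dividing the 10-bit reading by 4 collapses 0-1023 into 0-255.
// Doing it on the Arduino before sending or in p5.js after receiving
// yields the same number for the same reading.
function scaleOnArduino(sensorValue) {
  return Math.floor(sensorValue / 4); // divide before sending
}

function scaleInP5(rawValue) {
  return Math.floor(rawValue / 4); // divide after receiving the raw value
}
```

The practical difference is bandwidth and resolution, not correctness: dividing on the Arduino lets the value fit in a single serial byte, while sending the full 0–1023 value (as ASCII digits) preserves all ten bits in case the p5.js side ever wants the finer resolution.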

Annie’s Mac ‘n’ Cheese Synth

This week, we learned how to work with analog output. The labs went fairly smoothly, though I had a problem with the first. When I moved the servo motor with a potentiometer, the motor would sometimes jitter, continuing to move back and forth quickly even when the variable resistor was still. When I checked the sensor reading on the serial monitor, the numbers were wavering, leading me to believe that the problem lay with the reading of the sensor. After double and triple checking, though, the circuit looked correct:
One outstanding question is: why was the sensor reading off, causing the servo motor to jitter?

The Mac ‘n’ Cheese Box Application

For my application of the lab, I came up with the idea to create a simple instrument using an 8-ohm speaker, a linear touch sensor strip, and LEDs. Using the tone function to change frequency, the instrument would do two things: map a range of frequencies along the sensor strip and light up certain LEDs when the frequencies hit notes in a C-major scale.
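The two behaviors described above can be sketched as a pair of pure functions. The frequency range, rounded note values, and tolerance window here are illustrative assumptions rather than the original sketch's constants (the note frequencies themselves are the standard equal-temperament values for C4 through C5, rounded to whole hertz):

```javascript
// Sketch of the strip-to-tone mapping: the strip's 0-1023 reading maps
// linearly onto a frequency range, and an LED lights when the frequency
// lands near a C-major scale note.
const C_MAJOR = [262, 294, 330, 349, 392, 440, 494, 523]; // C4..C5, rounded Hz
const TOLERANCE = 5; // Hz window around each note

function stripToFrequency(reading) {
  // linear map 0-1023 -> 262-523 Hz, like Arduino's map()
  return Math.round(262 + (reading / 1023) * (523 - 262));
}

function litLedIndex(frequency) {
  // index of the matching scale note's LED, or -1 for "no LED"
  return C_MAJOR.findIndex((note) => Math.abs(frequency - note) <= TOLERANCE);
}
```

The tolerance window is the interesting design choice: with an exact match the player would almost never hold a note precisely enough to light an LED, so each note gets a small band of "close enough" frequencies.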

Original drawing
Proof of concept for C scale LEDs, stage 1
Proof of concept for C scale LEDs, stage 2
Annie’s Mac ‘n’ Cheese box—the perfect size (attempted disguise with magazine page)

Some parts of the project went smoothly; others were more difficult. I found that mapping the sensor strip was harder than expected, in part because I suspect I might have affected the readings when I soldered the strip to wires. Even when there’s no force on the sensor, it produces a reading of around 500, creating a tone even when the user isn’t “playing” the “instrument.” This isn’t ideal.
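One possible workaround for the ~500 idle reading described above is to calibrate a baseline and only treat readings that deviate from it as a real touch, silencing the tone otherwise. This is a hypothetical sketch, not the project's code; both constants are guesses for illustration:

```javascript
// Gate the tone on deviation from a measured no-touch baseline, so the
// strip's resting reading doesn't produce a continuous note.
const IDLE_BASELINE = 500; // assumed no-touch reading, measured at startup
const TOUCH_DEVIATION = 60; // how far from idle counts as a real press

function isTouched(reading) {
  return Math.abs(reading - IDLE_BASELINE) > TOUCH_DEVIATION;
}

function toneFor(reading) {
  // null means "call noTone()": the strip isn't being played
  return isTouched(reading) ? 262 + Math.round((reading / 1023) * 261) : null;
}
```

In an Arduino loop, the `null` case would map to `noTone()` so the speaker stays silent until the player actually presses the strip.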

Construction also proved difficult. The box was a tight space to house the Arduino, the breadboard, and the most complicated circuit I’ve created:

In the end, despite functional problems with the LED setup (it’s hard to linger on a single frequency long enough to make the LEDs light up for more than a flash), issues with the continuous tone, and the general aesthetic junkiness of the mac ‘n’ cheese box prototype, the project was successful enough to make my musician roommate break out a drum machine app and play a little—in the middle of his Orioles wild card playoff game, no less:

A Few More Outstanding Questions:

  • I think we will go over this more, but I’m having trouble grasping how transistors work. I understand the context in which they’re used, but how exactly do you use them and why?
  • Is it possible to load in audio into the Arduino interface to make instruments with sounds other than that awful tone?
  • A sensor’s reading ranges from 0 to 1023. Does that have a unit, or is it just units of information?
  • Similarly, how does that range translate to memory on the Arduino?


Week 4: Labs

Today’s labs went fairly smoothly, besides an issue with a faulty wire or LED (I replaced both without testing to see which was responsible—I’m beginning to see why the multimeter comes in handy).

A Simple Application: The Frustration Meter

After playing around with programming LEDs with different delays to create a mini light show triggered by a button, I thought of a different application using a force sensor. The Frustration Meter measures the force of the writing utensil against the paper in three levels: good, annoyed, and pencil-breaking anger. I used three separate “if” statements to define the range of analog input on the sensor, which then triggers a brightness variable for each LED. (Note: The “ding” sound was from a text on my computer and unrelated to the circuit.)
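The three-range mapping described above can be sketched as a single function (the cutoff values here are illustrative guesses; the original used three separate "if" statements over the 0–1023 FSR range in the same spirit):

```javascript
// Sketch of the Frustration Meter's three-level force mapping.
function frustrationLevel(force) {
  // force: 0-1023 analog reading from the FSR under the paper
  if (force < 300) return "good";
  if (force < 700) return "annoyed";
  return "pencil-breaking anger";
}
```

Each returned level would then select which LED to light and how bright to drive it.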

Week 4: That Awkward Moment When…

The doorbell doesn’t work.

In fact, they never seem to work correctly. It’s a very simple interaction—a person presses a button that triggers a sound inside the apartment, thereby alerting the inhabitant to their presence—but there are many steps along the way in which the process seems to derail. First, the frequent lack of immediate feedback causes problems. I observed, and have experienced many times myself, that awkward moment when you press a doorbell, and, hearing no sound from inside the house or apartment and/or receiving no click, light, or other response from the switch, you don’t know if it created the desired effect. You’re then faced with a difficult dilemma: Do you press it again and risk being that impatient person who has buzzed multiple times in quick succession? Do you go a different route altogether and knock on the door like they did in the days of yore? Or do you just flee and never look back?

Doorbells not only have a function in a practical realm, but they also have significance in the world of etiquette, either facilitating smooth interaction between people or, when they fail, fostering social fumbling. This has repeatedly become apparent in my own living situation. The doorbell on my four-apartment, four-story building has no logical structure; it doesn’t correspond to the layout of the building. The basement apartment is the bottom button, which makes sense, but reason fails beyond that. The order from third button down to the top is reversed, with first floor on top and the third towards the bottom. The top button, my floor, receives frequent unwanted buzzes from deliverymen and guests who don’t know which one to press because the system lacks another crucial feature: a guide. We used to have names beside the otherwise featureless buttons, but they disappeared sometime a few months ago (I was going to inform my landlord that this was a problem but then ITP started). A doorbell, in its simplest form, is a very simple mechanism that uses a button switch and digital input to trigger a mechanical sound, but as my roommates and I can attest, it often goes wrong.