For our first assignment in Sketching Communities, we had to choose a community, sketch and observe it, and create a prototype of an object central to that community.
I belong to the community of fairly secular Jewish Americans. In my experience of this community, I’ve always identified more with the cultural markers—humor, movies, food, etc.—than with the strict markers of religion, such as belief or ritual. New York has the second highest population of Jewish people of any city in the world (second only to Tel Aviv), and, because of that, it has created its own specific brand of “ethnic” culture (think Woody Allen).
There are many different ways to be Jewish, and they’re all represented in this city, from the isolated Hasidic neighborhoods to the delis of the Upper West Side. I’m interested in exploring some of these communities this semester, focusing on generational differences.
Going Back to the Roots
For the first assignment, I visited Eldridge Street Synagogue, one of the first synagogues in the U.S. and now a museum in Chinatown. It was interesting to see how Eastern European Jews a century ago had already begun forming their own identity—a hybrid of old and new—in objects such as a gavel for meetings and the synagogue “Constitution.” I sat in the sanctuary for a while and sketched. Though I didn’t create anything I liked, I enjoyed sitting there in the quiet, drawing.
On my way back, I picked up a bagel and lox, a staple in my Jewish upbringing—and a staple of New York Jewishness—and decided to draw that. Every Jewish contemporary I know has a particular way they prefer their bagel sandwich, so for my interactive prototype, I created a customizable lox sandwich.
For my Nature of Code final project, I wanted to extend the first homework assignment I made six weeks ago, called “Six Degrees of Robert Johnson.” A play on Dan Shiffman’s “Six Degrees of Kevin Bacon” example (itself a play on the “Six Degrees of Kevin Bacon” game and the original idea, “Six Degrees of Separation”), the project would illustrate the influence of Robert Johnson—symbolic or real—on rock music.
Johnson, the “King of the Delta Blues,” is a fascinating character. We know so little about the musician, who died at the age of 27, and though he released only a handful of recordings, he has acquired legendary status in music. He’s so legendary, in fact, that there’s a myth that he earned his guitar skills by selling his soul to the devil. I wanted to show how his legacy has reverberated through the history of rock, as well as the ways that music evolves—through acts of appropriation.
In this last exercise, I had two main goals: add a larger data set and output a visualization of some kind.
For the data, I turned to two sources: a list of “500 Songs That Shaped Rock” and the SecondHandSongs API. Using the latter, I created a JSON document of all the artists who had recorded a version of any song in the former. Though I only made it through the letter J in the list, I ended up with about 10,000 artists.
Adapting Shiffman’s code from his own Kevin Bacon example, I added user inputs, a plug-in called Awesomplete that autocompletes those inputs, and two new classes, NodeViz and Edge, to draw visuals on the p5.js canvas. The complete code can be found here, and the GitHub-hosted page of the project is here.
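At its core, the Kevin Bacon example is a breadth-first search over a graph of connected artists. A minimal sketch of that search in plain JavaScript—using a tiny hypothetical adjacency list, not the real SecondHandSongs data—might look like:

```javascript
// Minimal breadth-first search over an artist graph. The adjacency list
// below is toy data for illustration, not the actual project dataset.
const graph = {
  "Robert Johnson": ["Cream", "The Rolling Stones"],
  "Cream": ["Robert Johnson", "Eric Clapton"],
  "Eric Clapton": ["Cream"],
  "The Rolling Stones": ["Robert Johnson", "Muddy Waters"],
  "Muddy Waters": ["The Rolling Stones"],
};

// Returns the shortest chain of artists from `start` to `goal`,
// or null if no connection exists.
function findChain(start, goal) {
  const queue = [[start]];
  const visited = new Set([start]);
  while (queue.length > 0) {
    const path = queue.shift();        // take the oldest partial path
    const current = path[path.length - 1];
    if (current === goal) return path; // first hit is the shortest chain
    for (const neighbor of graph[current] || []) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor);
        queue.push([...path, neighbor]);
      }
    }
  }
  return null;
}
```

With this toy graph, `findChain("Eric Clapton", "Robert Johnson")` walks Clapton → Cream → Johnson, the same kind of chain the project displays.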
In the future, I’d be interested in pushing the project further, adding a more complicated network visualization with Robert Johnson in the center with multiple chains of influence extending outward. Because of the size of the data and my less-than-efficient code, the program also runs fairly slowly at the moment. I want to either make that run more smoothly, or, as Shiffman suggested, visualize the process of the search algorithm in some way.
A couple weeks ago, I came across an infographic made by the music blog Pitchfork about the gender breakdown of performers at 2017 music festivals. Male groups comprised 74% of performances, while female groups represented just 14% of acts. The remaining 12% of acts included both genders.
I play guitar and write for a music blog, and while I do feel that, as a woman, I am in the gender minority in both of those activities, I was still surprised by the extremity of those numbers. I decided to investigate further. Does the 2017 lineup represent a more extreme gender disparity than other musical venues, or do men dominate popular music that starkly?
To begin to answer that question, I turned to two different charts: the Billboard Top 100 Artists of the Year, and Pitchfork’s Top 50 Albums of the Year. I chose these two different sources because they are each considered, to a certain degree at least, the authority in their respective musical realms—pop and indie. To start, I collected data from the last five years of charts on each site, making a CSV that included the year, artist name, ranking, and gender.
I suspected before I started visualizing the data that the Billboard charts would have a greater discrepancy between genders than the Pitchfork charts. Pop music, I thought, would reflect less progressive values than alternative music, plus Billboard uses a more scientific method in its rankings (streams, album purchases, etc.) than Pitchfork, whose lists, as far as I can tell, are selected by editors who can subjectively consider factors such as diversity.
When I visualized each data set, though, I discovered that the ratio of male to female performers is similar. In the Billboard Charts, the breakdown is:
349 Male; 132 Female; 18 Both
In the Pitchfork chart, it is:
167 Male; 61 Female; 22 Both
In both charts, all-female acts make up about a quarter of performers, and acts that are either all-female or mixed make up about a third.
I decided to represent each chart year on a separate line, showing each of the three categories with rectangles of different colors (red for female, purple for both, and dark grey—a kind of default negative space—for male). Here are the two chart sources shown together:
In class last week, I loved the projects that we saw that represented data with sound, and I thought that it might suit this project well. It is about music, after all. Also, seeing the stretches of dark space is one thing, but I thought that hearing a series of notes that always seems to return to a low drone would make the point more effectively.
Using a timer with millis() and a sine oscillator from the Processing sound library, I made a very boring electronic song with three distinct pitches. Here’s that sequence choreographed (without sound) on the Billboard chart:
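The mapping from chart entry to pitch is just a lookup from category to frequency. A sketch of that idea in JavaScript (the actual piece was written in Processing, and these frequencies are placeholders, not the notes I used):

```javascript
// Maps each gender category in the chart to a pitch. The frequencies are
// placeholder values for illustration, not the actual notes in the piece;
// the point is that "Male" always returns to the same low drone.
const PITCHES = {
  Male: 110,   // low A: the drone the sequence keeps falling back to
  Female: 220, // an octave up
  Both: 165,   // roughly an E in between
};

function pitchFor(category) {
  return PITCHES[category];
}

// Turn one year of chart entries into a sequence of frequencies to play.
function chartToSequence(entries) {
  return entries.map(pitchFor);
}
```

A sequencer (a millis()-based timer in the Processing version) would then step through the output array and set the oscillator’s frequency on each tick.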
Finally, because the sine oscillator doesn’t make the prettiest of sounds and because this project is about female musicians, I decided to play the chart on guitar. I like the idea that, as Jer has talked about throughout the course of the semester, labor—time or physical exertion—can be another expression of the data. In the time that it took to get the notes right, would the finger responsible for strumming the string that represents male groups grow tired? (Yes.) It also brings the semester full circle for me, as it returns to the abstracted, aestheticized data representations that we explored in the first unit.
Here’s the song—not much more exciting than the one with the sine oscillator but with more reverb—that ensued:
For the Data & Publics unit, Rushali and I wanted to create a conceptual project that would take place outside of a screen, on the streets of New York. We started by digging around the NYC Open Data site, looking for interesting data sets, and we found the 2015 Tree Census. It’s an incredible collection of data that contains information about 682,515 trees across the five boroughs, including species names, unique ids, even the status of each tree’s health.
As a jumping-off point, we imagined the city as a giant forest. What if, like in a real forest, the trees were the most pertinent landmarks? We started conceiving of an alternative city map and way-finding system that uses trees instead of street names and house numbers. In this alternative map, trees would become central to our experience of the city.
A Sampling of Inspirations:
Rushali cited her grandparents’ relationship to trees, talking about how in their villages in India, they knew the nearby trees intimately—what fruit they gave, when they bloomed, and so on. We wanted to regain some of this knowledge in our city, where people are detached from nature and don’t rely on it in the same way.
Trail markers: On many hiking routes, trails are marked by symbols on trees. We looked at these markers—sometimes a particular color of paint or a reflective medallion—as a model for using trees as guides.
We also thought about accessibility in devising the form of the tree markers. Rushali mentioned her friend’s braille magazine and suggested symbols that could be touched, a more reliable classification system than color.
For our alternative city map, we thought of creating a digital map such as Google or Apple maps (or a toggle option within) that the user could view in “tree mode.”
In our “tree mode” example map, we imported the tree census data set into a custom Mapbox Studio basemap, stripping the map of the usual markers—streets and street names, buildings, etc.—to highlight the paths created by tree dots. The numbers on our example show the IDs for different maples in the city and begin to show how people could adapt their notions of place in the city, from neighborhoods and streets to clusters of tree IDs. In the process of creating our map, we also found a couple of projects that have created similar ones, including NYC Parks and Jill Hubley.
For the tree markers, we wanted to create labels that would serve multiple functions: showing the tree’s ID and “address,” educating passersby about the kind of tree, and presenting a fact about the tree that would emphasize the role of trees in nature and society. Each tree would have its own marker, with a color and an identifying cut-out shape (for the visually impaired) corresponding to the tree type.
For the final, I continued to explore the theme of loss, creating a static collection of lost objects from Twitter using the query “I lost my.” In the process, I started to tackle the Twitter API and Node.js for the first time. As a result, the technical aspects of the project took the most time and focus, but the result—a site with a collection of words and a simple interaction that reveals their stories—is a good jumping off point for this theme that I’ve been attached to: the range of everything encompassed in the word “lost.” If you click, the objects will change color according to the categories “people,” “things,” “states of mind,” and “other.”
A couple of lessons:
• I learned this in ICM, but rediscovered it here: Daniel Shiffman’s videos are amazing. His tutorial on Node.js saved the day (many days, for that matter).
• A big lesson throughout this process was that you can learn a lot by Googling things, but you have to know what to Google in the first place. Once I figured out how to collect tweets with Node, I had to learn how to write the data from the server into a JSON file. Because I didn’t know how to phrase my question, I ended up rigging a formula that wrote the tweet text into JSON format with a bunch of quotation marks and commas, basically forcing it manually into the right format. When I ran the project by Jer Thorp at office hours for Data Art, he corrected the very silly error of my ways with JSON.parse() and JSON.stringify(). You don’t know what you don’t know.
Takeaways and hopes for the future:
In the future, I want to tackle the Twitter Stream API, so that the collection of objects can be dynamic and change as new tweets come in. This would add the dimension of time, which is very tied to loss in all its forms.
In its current form, the project relies on a lot of manual work. I ended up collecting the lost objects myself because I couldn’t get my formula for automating it to work (I tried separating each tweet into an array split on spaces, then finding the word after the phrase “i lost my” but before the index of the next space). The upside is that I got to have more editorial say in the objects. Keeping two words in “fucking ball,” “Beats headphones,” and “abortion money” are all choices I could make. Still, more automation would be nice, and even necessary if the project runs on the Twitter Stream API.
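The automation I was attempting can be done more simply by splitting on the phrase itself rather than juggling space indexes. A sketch of that approach (the function name is my own, and note that it only captures a single word, which is exactly why multi-word objects like “Beats headphones” still need editorial judgment):

```javascript
// Pulls out the word immediately following "i lost my" in a tweet.
// Splitting on the whole phrase avoids counting spaces by hand. This only
// captures one word, so multi-word objects still require hand-editing.
function extractLostObject(tweet) {
  const parts = tweet.toLowerCase().split("i lost my ");
  if (parts.length < 2) return null;           // phrase not found
  const firstWord = parts[1].split(/\s+/)[0];  // word up to the next space
  return firstWord.replace(/[.,!?;:]+$/, "");  // strip trailing punctuation
}
```

For example, `extractLostObject("Ugh, I lost my phone.")` yields `"phone"`, while a tweet without the phrase yields `null`.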
For my final proposal, I want to continue with the theme that I explored in the last project, the Gallery of Lost Objects. In the prototype, I used only my own story and envisioned interviewing others to fill in the gallery. There’s a place, though, where an archive of a range of stories about lost objects exists and is constantly updated: Twitter.
In the past few days, I’ve been exploring the tweets that come up with the search “I lost my,” and the range of responses is fascinating. There’s the standard social media drama:
Some are slightly more serious:
And some are sad (though I tend to be skeptical of the motivations of anything personal posted to social media):
Using the Twitter API, I want to collect the tweets that contain this phrase, categorizing them by the word that follows “I lost my…” I’m not yet sure how I want to display the text, other than that I want to distance the tweets from the Twitter aesthetic and treat each one as a mini story. I’m also in Data Art and a digital mapping class, so I’m thinking about creating visualizations down the line. For next week, though, I think I’ll stick to writing and a web page.
This week, we had to explore two different mapping tools: Mapbox Studio and QGIS. The custom map is above and also on my GitHub page. I ended up mapping Business Improvement Districts (BIDs), which are collectives of local businesses that take an active role in maintaining and improving their neighborhoods. I got the data from NYC Open Data. After making this, I looked through the gallery of custom maps on Mapbox Studio, and it seems like a super flexible tool that I hope to revisit.
For the QGIS assignment, I took cues from the reading from How To Lie With Maps, choosing the politically charged territory of Israel and Palestine. I isolated and abstracted the territory, leaving the surrounding areas empty except for very lightly drawn roads. Unlike the maps in the reading, though, mine doesn’t have a clear propaganda message (which is good, because my own feelings on the topic are complicated).
Assignment: Create a narrative in response to the City Reliquary Museum, based around objects.
While thinking about this assignment and the stories that emerge from objects, I came across a New Yorker piece by Kathryn Schulz, about the different meanings of “loss” and how it applies to both things and people. In it, she references the poem “One Art,” by Elizabeth Bishop, which I recently rediscovered, about the art of losing things. It begins, “The art of losing isn’t hard to master; / so many things seem filled with the intent / to be lost that their loss is no disaster.” Throughout the course of the poem, she moves from keys to the person she addresses in the poem, whom she has lost. Both the Schulz essay and the Bishop poem show the narrative possibilities of lost objects—the way that the absences become significant, and how sometimes the experience of losing something becomes a story in itself.
In the City Reliquary Museum, as in most exhibitions that display historical artifacts, the objects stand in for a memory or historical event. With lost objects, it’s the other way around. For this project, I wanted to explore the ways that memories and stories represent the objects that we no longer possess, for whatever reason. I created a prototype of an interactive site (made with Illustrator, jQuery, HTML, and CSS) that I envision as a larger collection of different narratives and shadows of lost artifacts.
This week, I mapped my own data from the Moves app. After trying and failing to create an animation with Leaflet.Polyline.SnakeAnim, I decided instead to take up the challenge of creating toggled displays based on time filters. I first attempted to change the display using removeLayer() based on states being true or false, but it only worked when I bound the layer displays to click functions:
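The click-bound pattern boils down to flipping a boolean and calling addLayer() or removeLayer() accordingly. A sketch of that logic with a stand-in map object (in the real project this would be a Leaflet map, which has the same two methods; the function name is my own):

```javascript
// Builds a click handler that toggles one layer's visibility. `map` is any
// object with addLayer()/removeLayer() methods (e.g. a Leaflet map); the
// shown/hidden state lives in a closure instead of a global boolean.
function makeLayerToggle(map, layer) {
  let shown = false;
  return function onClick() {
    if (shown) {
      map.removeLayer(layer); // currently visible: hide it
    } else {
      map.addLayer(layer);    // currently hidden: show it
    }
    shown = !shown;
    return shown;             // report the new state
  };
}
```

Each time filter gets its own toggle, bound to its button’s click event, so every click flips just that layer.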
The code is a bit clunky, but hey, it works. See evidence here.
Two of my favorite exhibitions that I’ve seen recently aren’t exactly exhibitions. The first, pictured above, is the public storage at the Museo Larco in Lima, Peru, which I saw last summer. I saved this room for last, after meticulously going through the galleries (I tend to be a chronic “studier”), and it was an adjustment to go from displays that highlighted each piece with a pedestal, plaque, and nice lighting, to rows and rows of pottery. Rather than normalize the work, as you might expect from seeing so many pieces at once, it made the pottery seem even more incredible and special. I always appreciate when museums allow a peek behind the curtain, something even more rewarding when the art is so tied to historical context, as this was.
The other not-exhibition exhibition that I loved recently also involved objects on shelves:
I saw the KIOSK project at the “Greater New York” show at MoMA PS1 a couple years ago. Formerly a consignment shop (and now an online store and a space at the Cooper Hewitt), Kiosk set up their international novelties in plastic shelves that formed a maze-like path through a room on the second floor of the museum. Each object had an identifying number beside it that you could type into your phone after dialing a given number, after which a robotic voice would narrate a little story about the object—where and how it was acquired, what it was, etc. I loved the way that the simple interaction—calling a number then dialing a few digits—elevated these everyday, consumer objects. I stayed for a while and could have stayed longer.