Data and the Superpanopticon

Because I’m interested in working with reality television, I was most intrigued by Lisa Gitelman’s discussion of the problems of ‘dataveillance’: the collection and use of data from individuals regarding their identities and personal attributes. She questions the ethics of this process in our current age, citing Rita Raley’s examination of those who push back with online tools that inhibit the ubiquitous data mining of the contemporary Internet. In ‘Dataveillance and Countervailance’ (Raley’s chapter in the book for which Gitelman provides the eponymous introduction), Raley unpacks the typical argument in favor of allowing passive data mining: a “personalized Internet” is only possible if our individual tastes and preferences are fair game for circulation, and “voluntarily surrendering personal information becomes the means by which social relations are established and collective entities supported” (125). This trade of privacy for inclusion is troubling in itself, and I see a specific link to my interest in reality television shows, wherein contestants formally trade their privacy (as well as their very power of self-representation) for inclusion on a television show and potential financial gain. Raley then introduces the idea of a ‘superpanopticon’ that must necessarily exist to register our interpellation by databases; while for her this links to a discussion of ‘corpocracy’ and ‘cybernetic capitalism,’ I adopt it for my purposes as a term for the reality-show institution, which mines identity and individual action to construct a desired narrative for commercial purposes.

Aiding my consideration of a show like Survivor as one such ‘superpanopticon,’ Raley cites Kevin D. Haggerty and Richard V. Ericson, who posit that surveillance “operate[s] through processes of disassembling and reassembling. People are broken down into a series of discrete informational flows which are stabilized and captured according to pre-established classificatory criteria. They are then transported to centralized locations to be reassembled and combined in ways that serve institutional agendas” (127). This is a lengthy excerpt, but I find it startlingly apt as a description of how contestants can be treated on reality shows like Survivor. Contestants’ thoughts and feelings are mined on location through individual interviews known as ‘confessionals,’ which are then transported to a central editing station to be assembled at the whim of the producers. My initial project idea, to explore Survivor confessional frequencies by episode and season, draws on this notion (which I could not have stated so well before) that players are mere building blocks in a larger narrative, to be highlighted or backgrounded as the show desires. After reading Gitelman’s caution regarding the ethics of dataveillance, though, I begin to wonder: even if I can use the confessional data to learn and expose something about the superpanopticon of Survivor, am I still inescapably guilty of drawing on the identities and representations of others for the alleged benefit of collective knowledge?
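As a rough sketch of how that confessional-frequency idea might look in practice, here is a minimal example, assuming a hand-coded spreadsheet of confessionals (the file and column names below are hypothetical placeholders, not an existing dataset):

```python
# A minimal sketch: counting Survivor confessionals per contestant.
# Assumes a hand-coded CSV ("confessionals.csv") with one row per
# confessional and columns season, episode, contestant (hypothetical).
import pandas as pd

confessionals = pd.read_csv("confessionals.csv")

# Frequency table: how often each contestant "speaks" in each episode.
counts = (
    confessionals
    .groupby(["season", "episode", "contestant"])
    .size()
    .rename("confessional_count")
    .reset_index()
)

# Contestants the edit backgrounds: lowest average confessionals
# per episode, grouped by season.
season_means = (
    counts.groupby(["season", "contestant"])["confessional_count"]
    .mean()
    .sort_values()
)
print(season_means.head(10))  # the ten most "under-edited" contestants
```

Even this toy version makes the disassembly/reassembly dynamic visible: the moment the interviews become rows in a table, they can be sorted, averaged, and ranked according to criteria the contestants never chose.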


“Purple” Kelly Shinn, infamous as one of the most under-edited contestants in Survivor history


Overcooked Data

Gitelman’s idea that “raw data” is an oxymoron, and that data can be cooked to various degrees, has taken my research area backward historically rather than forward, as I had expected: to the digitizing of Islamic literature, and the Quran in particular. The various cooking methods have gotten me to think about the Quran’s status as a purely oral text for the first many years of its existence, after which it transitioned back and forth between written and oral forms (while remaining primarily oral) until about twenty years after the Prophet Muhammad’s death, when the Caliph Uthman began the process of collecting and canonizing the chapters of the Quran into the version we have today (most scholars agree on this narrative).

If this is the case, the text itself was “generated” (not “discovered”), according to Islamic tradition, in a pure “raw” form through an oral transmission to the Prophet over a period of years from the angel Gabriel. The only truly raw form of it can then be the recited word, which makes sense given the emphasis placed on this quality of the text and its place in the lives of Muslims, who pray five times a day, reciting these words aloud or in their heads. After revelation it was taught to the Prophet’s companions, who memorized the text in its entirety. Even here a certain stage of “cooking” occurs in terms of the order of the Quran, because the version accepted today was not compiled chronologically according to the order of revelation. The memorizers of the Quran continued the oral tradition and recitation, sometimes with small portions written down as memory aids (the first time the Quran was ever turned into text), until compilation began under Uthman’s caliphate.

Fast forward many years, and the Quran has been turned into cassette tapes, CDs, audio files, and digital versions for computers and smartphones. The number of forms it takes adds different layers to the “cooking” process, sometimes taking the oral component into account and sometimes not. If we can refer to the original recitation as “raw data,” then what we have now has been cooked too many times to count (not even mentioning translation), which is often fuel for some scholars (usually from the West) to question the authenticity and/or completeness of the holy book. In any case, both articles we read this week have me thinking about whether it is truly possible to experience the Quran in a raw and uninterrupted way. Gitelman mentions the lack of objectivity in machines when reproducing pieces of art, and I wonder about the way most Western Muslims in particular now get their religious literature: through bits and pieces on their phones or through social media (Twitter handles and memes devoted entirely to spreading inspirational and life-advising hadith or verses of the Quran). Regardless of the “aggregate quality of data,” they may be receiving tiny pieces of data so overcooked that it is perhaps impossible to grasp the actual essence of the message, which is already an esoteric and lifelong endeavor.

And here is a recitation by a world-famous reciter: there are so many things happening around her, interfering with, talking over, and muffling the text, that I hardly think “raw” is the right word for this bit of data.


The Wide Cooker

This week, Lisa Gitelman’s “Raw Data Is an Oxymoron” and Lev Manovich’s “The Science of Culture?” introduce data studies as a contested and multifarious field. Gitelman’s piece seems to address an imagined scholarly community that treats data like a fresh pair of glasses: a clarifying, objective, and unquestionably useful tool for seeing the world. The thrust of Gitelman’s argument is to implore scholars to pay attention to the “pre-cooked” hermeneutics of data-vision and to point out the hegemonic ways in which data and other empirical tools have evolved over time. Additionally, she draws a throughline between science studies and data studies by pointing to how objectivity in both cases is mythical.

On this last point, Gitelman and Manovich both treat data studies as a vibrational nexus between scientific and humanistic study, an idea that resonates strongly with my own research. As I think about the possibility of using data visualization as a critical lens through which to consider either climate discourse or the circulation of fetal images, it becomes imperative for me not to pick a scientific view over a humanistic one or vice versa. I want to hold the two in tension: the long, globalizing view that data can offer (which will possibly be generalizing but maybe also generative) and the potential of data to speak to human actants, or what Manovich refers to as the “individual and particular” (Manovich 2016, 9).

Gitelman says that the imagination of data is always “an act of classification” (Gitelman 172), and I believe that to be mostly true. But what I sensed after reading Manovich’s cultural analytics manifesto is that classification doesn’t have to constrict our study. Manovich’s call for “wide data” argues against reducing data to a small set of discrete dimensions or variables. And while he doesn’t state it explicitly in this essay, I think what he means is that there is a use for views of data that are continuous rather than discrete. It reminds me of something Alan Liu once said about his own work in the digital humanities: rather than trying to classify and plot the sentiments we encounter in large data sets, we can construct heat maps and look for hot spots.
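To make that heat-map suggestion concrete, here is a minimal sketch (my own illustration, not Liu’s or Manovich’s code) of a continuous view: instead of assigning documents to discrete sentiment categories, we bin continuous scores over time and look for hot spots. The data here are randomly generated stand-ins:

```python
# A sketch of a "continuous" view: bin sentiment scores over time into a
# heat map and look for hot spots, rather than classifying documents
# into discrete sentiment labels. Scores/timestamps are random stand-ins.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
timestamps = rng.uniform(0, 365, 5000)   # day of year for each document
sentiment = rng.normal(0.0, 1.0, 5000)   # continuous sentiment score

# 2D histogram: density of documents at each (time, sentiment) region.
plt.hist2d(timestamps, sentiment, bins=[52, 40], cmap="magma")
plt.colorbar(label="number of documents")
plt.xlabel("day of year")
plt.ylabel("sentiment score (continuous)")
plt.title("Hot spots instead of categories")
plt.show()
```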

I can think of ways that a wide cooker could be useful in both of my projects. Maybe I can mine a large set of Instagram images under a hashtag like #globalwarming and allow clusters of affiliation to form in ways they could not if I simply plotted the images temporally or by geolocation. Or maybe I cull information from several pro-life websites and try to simulate my own anti-abortion page using textual and visual data. I also think Professor Sakr’s mosaics would be a great example of “wide data.” What other kinds of “wide cooking” could we come up with?
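As one hedged sketch of how the #globalwarming idea might be prototyped, assuming the images have already been collected into a local folder (the folder name, the crude color-histogram features, and the choice of ten clusters are all illustrative assumptions, not a real pipeline):

```python
# A sketch of "clusters of affiliation": group hashtag images by visual
# similarity rather than plotting them by time or geolocation.
# Assumes images were already scraped into "globalwarming_images/".
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def color_histogram(path, bins=8):
    """Reduce an image to a coarse RGB color histogram (a crude feature)."""
    img = np.asarray(Image.open(path).convert("RGB").resize((64, 64)))
    hist, _ = np.histogramdd(
        img.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=((0, 256),) * 3,
    )
    return hist.ravel() / hist.sum()

paths = sorted(Path("globalwarming_images").glob("*.jpg"))
features = np.stack([color_histogram(p) for p in paths])

# Let groupings emerge from the images themselves rather than from
# pre-set categories like date or place.
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)
```

Richer features (say, embeddings from a pretrained network) would surface subtler affiliations, but even color alone might already begin to separate, for example, wildfire imagery from protest photography.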


Beginning With Friction

Though it seems a bit plain to say so, what I appreciate most about Lisa Gitelman’s approach in “Raw Data Is an Oxymoron” is her insistence that data are invariably “cooked,” that they come to us “scrubbed” and involved in relations of “friction” (171). I am particularly struck by the latter of these two terms: friction. In her 2005 ethnography of resource development and extraction practices in late 1980s and early 1990s Indonesia (conveniently called Friction), Anna Lowenhaupt Tsing writes that friction both enables and constrains. To be sure, friction slows and inhibits motion, expressing itself as resistance. But paradoxically, it also makes movement as such possible. Without friction, Tsing reminds us, a tire simply spins in the air, propelling nothing at all. Even as friction holds us back, it moves us forward. It sets things in motion, shifts the scene, moves stuff around, forms, deforms, and reforms relations, keeping open the possibility of invention even as it holds certain arrangements in place.

Approaching data in these terms – as an ensemble of frictive relations, or what Gitelman elsewhere calls “potential connections” (172) – is productive in the context of my own project, which will attempt to visualize the relations between changing legal definitions of Nativeness, Indigenous depopulation, and territorial dispossession in Hawai’i. Beginning with friction – emphasizing the “worries, questions, and contests that assert or affirm what should count as data, or which data are good and less reliable, or how much data is enough” (171) – helps me to understand how two relatively similar datasets, such as the 1897 Kūʻē Petitions against the annexation of Hawai’i (see image below) and the Hawaiian census of 1896, both of which sought to enumerate the Indigenous population of the Hawaiian islands at the close of the 19th century, are nonetheless quite distinct. Even as they claim the same populations, they marshal those populations differently, producing them as evidence of different phenomena and siphoning them into discontinuous, even oppositional political projects. In the very production of these populations as data, then, friction is produced; the data become freighted, vexed, and fraught, involved in a pitched contest over who will count, why, and to what ends. The process of turning populations into data, in other words, makes things happen. It moves bodies into and out of legibility, arranging them in particular ways: a frictive shifting that has had, and continues to have, profound (and often deleterious) consequences for Indigenous Hawaiians.
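As a first, very rough pass at putting the two enumerations into frictive contact, I can imagine something like the sketch below, which asks who appears in one dataset but not the other (every file and column name here is a hypothetical placeholder for however the digitized records are actually structured, and real name-matching would demand far more care with Hawaiian orthography and name variants):

```python
# A sketch: compare who is counted in the 1897 petitions vs. the 1896
# census. File and column names are hypothetical placeholders.
import pandas as pd

petitions = pd.read_csv("kue_petitions_1897.csv")  # e.g., name, district
census = pd.read_csv("hawaiian_census_1896.csv")   # e.g., name, district

# Crude normalization before comparison (a real project would need
# careful handling of diacritics and historical spelling).
for df in (petitions, census):
    df["name"] = df["name"].str.strip().str.lower()

merged = pd.merge(petitions, census, on="name", how="outer", indicator=True)
print(merged["_merge"].value_counts())  # both vs. one-source-only counts
```

Even the mismatches such a comparison produces would themselves be data about friction: evidence of whom each enumeration could, or would, see.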
