On Logics and Tactics

This week’s readings offer a nice cap to many of the discussions we’ve had in this class, in particular by addressing two issues regarding data epistemologies: history and infrastructure. To use an analogy from Jonathan Sterne, these matters are the water in which the fish (in this case users, producers, coders, etc.) swim. I largely agree with the points made in Hamish Robertson and Joanne Travaglia’s piece, and with the throughline they draw between 19th-century scientific practices and epistemologies and those “problems” that define the contemporary information age. For that reason, I’m choosing to focus this response on José van Dijck and Thomas Poell’s piece on social media logic, which I found both extremely useful and generative for my own work.

The piece opens with, in essence, a discussion of the role that social media and mass media play as invisible intermediaries that are not so invisible. This point is very much simpatico with the scholarship of Lisa Gitelman, Tarleton Gillespie in his work on platforms, as well as other media industry scholarship on digital intermediaries and infomediaries. That is, all of these scholars discuss the ways in which services that propose to merely “connect people” do much more than passively enable people to connect; rather, they actively change the way that this connection is defined and activated. Van Dijck and Poell build on this idea by weaving a narrative about the ways in which a newer social media logic intertwines with older logics of mass media, offering a detailed and thoughtful discussion of user/producer relationships, programmability, popularity, connectivity, and datafication.

I think that, individually, all of these sections are successfully argued—particularly through the nuanced discussion of the way users, platforms, advertisers, and online environments shape each other. My confusion arose, I think, in the more specific definitions and choices of words used to describe these relationships. Social media logic is defined as “the processes, principles, and practices through which these platforms process information, news, and communication, and more generally, how they channel social traffic” (5). Meanwhile, mass media logic is the “set of principles or common sense rationality cultivated in and by media institutions that penetrates every public domain and dominates its organizing structures” (3). These definitions are certainly not very close, and it immediately felt to me like the argument was set up to compare apples to vegetables, or something along those lines.

The authors do end up explaining the differences more carefully later on, describing mass media logic as a one-way editorial strategy and social media logic as a two-way dialogic relationship between users and code, but I think that ultimately their insistence on maintaining the paradigm of “logic” is a detriment to the overall flow of the essay. Logic does connote invisible infrastructures, but it seems less suitable to a discussion of tactics, or the purposeful navigation of those infrastructures (I’m thinking along the lines of Rita Raley here, who uses the word “tactics” to discuss intervention, disruption, and pre-emption on the part of individual actors). And, if we are to accept that the word “logic” encompasses both of those things, then I have to wonder—why is it that the authors feel the need to choose a word that is so broad?

My second gripe with this mass media “logic” vs. social media “logic” paradigm is that it seems to exclude other genealogical explanations for how we interact with social media beyond mass media. What about, for instance, the influence of Google search on our penchant for using keywords and hashtags? What about those classification systems pointed out in the other piece—the libraries and census organizations that first developed the rubrics for data epistemologies? These certainly aren’t the same “logics” as mass media logics.

Anyway, I understand that the argument that this essay missed a lot isn’t that powerful given the word-count constraints that all short articles have to deal with, but I do feel that this discussion of logics is narrow in a way that it need not be, particularly given the more nuanced arguments made in the individual sections.

In any case, I did really like this article in that it describes very well the things that I am trying to pay attention to in my data visualization project: namely, the way in which particular platforms (Twitter in my case) cause users to adjust their tactics of communication. My word cloud for Planned Parenthood is below. In my final project I’ll be categorizing groups of words into topics: calls to action, politics, women’s health, and references to actual abortion (chop, body, parts, etc.).
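As a rough illustration of that categorization step, here is a minimal Python sketch that buckets tweet vocabulary into the four topics. The topic lexicons and the `categorize` helper are my own hypothetical stand-ins, not part of the actual project; the real word lists would come from hand-coding the word cloud.

```python
from collections import Counter

# Hypothetical topic lexicons for bucketing tweet vocabulary; the real
# category word lists would come from hand-coding the word cloud.
TOPICS = {
    "calls_to_action": {"stand", "defund", "act", "join"},
    "politics": {"congress", "vote", "gop", "senate"},
    "womens_health": {"health", "care", "screening", "cancer"},
    "abortion_refs": {"chop", "body", "parts", "baby"},
}

def categorize(tweets):
    """Count how often each topic's keywords appear across a list of tweets."""
    counts = Counter()
    for tweet in tweets:
        for word in tweet.lower().split():
            # strip hashtags and trailing punctuation before lookup
            if any(word.strip("#.,!?") in vocab for vocab in TOPICS.values()):
                for topic, vocab in TOPICS.items():
                    if word.strip("#.,!?") in vocab:
                        counts[topic] += 1
    return counts
```

Running `categorize` over the scraped Planned Parenthood tweets would yield per-topic counts that could then feed the final visualization.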

word cloud


Can I get a witness?

Gitelman’s discussion of the “cooked” nature of data allowed me to start drawing connections between the discussions surrounding objectivity in news sources and the idea of “raw” data, both of which are caught up in “processes that work to obscure – or as if to obscure – ambiguity, conflict, and contradictions” (172). In just a few brief sentences, Gitelman connects the “imaginative” and “interpretive” nature of historiographical practices with data construction and visualization practices through the idea of the event, stating that “like events imagined and enunciated against the continuity of time, data are imagined and enunciated against the seamlessness of phenomena” (168). These conversations come up time and time again in discussions of mediated history and the construction of the event within broadcast news sources. The connections become further elucidated when Gitelman discusses how innocent observation, here conflated with objectivity, “ever came to be associated with epistemological privilege” (169) through the introduction of mechanical objectivity and the photograph as tools of objectivity. According to Gitelman, the photograph becomes the stepping stone by which mechanical evidence becomes the preferred source of objective information, resulting in today’s obsession with data.

However, caught up in these ideas of photographic evidence is also a necessary discussion of the politics of witnessing, a term that implies human agency but subsumes these ideas of detached objectivity. As we’ve continued to see in not only the world of journalism, but also in the age of social media, images continue to be tied to this idea of bearing witness, of being there, that gives authority and power to a voice. At first glance, this seems to validate Gitelman’s analogy between data and events, thereby solidifying her argument that data have become socially embedded into a hierarchy of epistemological practices through this history of reliance on the technological and mechanical to provide the objectivity that supposedly human renderings of reality cannot. However, the strong ties (at least in the US sense) to these ideas of bearing witness hint that a stronger connection to human agency in the creation of information is at play – that objectivity hasn’t been entirely delegated to the world of mechanical and technological innovation. After all, photos and data alike need to be situated and explained to other humans by those deemed closest to the source by politics of power and authority.

However, maybe there are simply disciplinary differences in what constitutes “data,” as data “need to be imagined as data to exist” (168). Manovich points out this difference in the subdivisions of data collection depending on discipline, a discussion somewhat missing from Gitelman’s piece. Here, I think Gitelman has a type of number-driven data in mind, the type that informs “governmental and non-governmental authorities,” among a smattering of fields that seems to transcend disciplinary boundaries. However, numbers aren’t the only data that inform these decisions. Photographs and human witnesses still act as data in different epistemes, thereby negating the technologically deterministic sense of data presented within “Raw Data.” While addressing the “cooked” origins of data is vital to dispelling the myths surrounding objective data, I guess what I find unsettling is the underlying assumption that number-driven data are the be-all and end-all of portraying truth and objectivity while clearly other forms of evidence and information continue to drive informational practices.

In an undergraduate class, we watched some of the US television news coverage of the Romanian Revolution in 1989. Penetrating the media blackout that overcame the state in the throes of revolution became the sole objective of the US cable network news covering the event. It wasn’t good enough to simply tell the good people of the USA how Communism was being overthrown by the Romanian people; they needed to show them as well. They needed to see the horrors of the Ceausescu regime and they needed to see the rejoicing and celebration of the revolution in order to reaffirm capitalist ideologies regarding the world behind the Iron Curtain through the act of witnessing (the result of which penetrated perversely into the operating rooms of live abortions and disturbing images of babies dying of AIDS). These images seem to act as data to provide the “objective” and authorial act of witnessing that pervades newscasts and portrayals of history, particularly in the US. However, the “cooked” nature of these events becomes revealed in the similitude of the types of images associated with the narrativization of certain events (as Laila hinted toward in her collages, which often depict images of child suffering). This is where I’m starting to think about my own project for this class. I’m envisioning a sort of archive of images of “The Children of War” that seem to proliferate in newscasts, tweets, and other sources of imagery that perpetuate what I would call the “narratives of intervention” involved in motivating US foreign policy decisions for military intervention.

Women recovering from abortions at the Filanthropia clinic. Bucharest, Romania. Feb 1990



Having an unpleasant reaction to Lisa Gitelman’s bad shrimp analogy

Digesting Lisa Gitelman’s admonition about data and rawness, most of her argument went down fine: after all, disciplinary cuisines aside (171), what she most stridently calls for is the disclosure of epistemological and methodological concerns, or recipes, and for us to discard the notion of any particular palate—whether one makes purportedly subjective or objective claims for a living—being truer than another. However, the jumbo shrimp analogy (168) stuck like a bone in my throat, in spite of how cheerfully Gitelman provides and then discards it. I found it ill-conceived, and here’s why: “raw” is an absolute term, and while it may be an inaccurate and blinkered way to refer to data, it doesn’t really bear comparison to “jumbo,” which is a relative term for shrimp. The shrimp found here are heftier than the already-large shrimp found near, which are in turn more generously proportioned than the pedestrian, moderately sized shrimp found over there—thus, “jumbo” in the first case.

Mark Twain famously attributed to Benjamin Disraeli the quip beloved of many a wag: “there are three kinds of lies: lies, damned lies, and statistics.” To cite either as the author, while often done, seems superfluous—it is such a truism that it doesn’t matter on whose authority we have it. And Gitelman and Manovich agree; both argue that the essential problem with data analysis, no matter how clever, is that sampling facts is a ticklish business. Carelessness and perfidiousness in data collection and analysis end the same way, and so whether by accident or design, prejudiced samples produce less generalizable results. The operative principle, then, is nuance: not only acknowledging that data is always “cooked” but learning how to select it, how to season it, how to render most effectively the particular characteristics of interest. Thus, I would argue that in his call for “wide data” and in hers for frank disclosure of disciplinary predilections, Manovich and Gitelman in fact do apply the wisdom of fishmongers and think about scales of data as useful metrics. The jumbo shrimp, after all, is only oxymoronic in one iteration—elsewhere the langoustine and the tiger prawn produce none of the consternation apparently plaguing the American shopper.

I think it bears pointing out that the issues Gitelman has with jumbo shrimp and raw data seem to come down to the notion that somehow we’re being tricked into thinking the thing at hand is something other than it is: so, the shrimp is not a shrimp, it’s a jumbo shrimp, and data isn’t contingent observation about the material world, it is unbracketed, unadulterated truth. And while conflating industrial aquaculture and the empirical or positivist bent may work on many levels—after all, industrial aquaculture and many of its devastating consequences originate in notions that the world is entirely knowable and always improvable—there are some levels close to the surface on which that shorthand just doesn’t work.


Data and the Superpanopticon

Because I’m interested in working with reality television, I was most intrigued by Lisa Gitelman’s discussion on the problems of ‘dataveillance,’ or the collection and usage of data from individuals regarding their identities and personal attributes. She questions the ethics of this process in our current age, citing Rita Raley’s examination of those who try to push back with online tools that inhibit the ubiquitous data mining of our contemporary Internet. In ‘Dataveillance and Counterveillance’ (Raley’s chapter in the book for which Gitelman provides the eponymous introduction piece), Raley unpacks the typical argument in favor of allowing passive data mining: a “personalized Internet” is only possible if our individual tastes and preferences are fair game for circulation, and “voluntarily surrendering personal information becomes the means by which social relations are established and collective entities supported” (125). The necessity of trading privacy for inclusion is certainly problematic, and I see it as a specific link to my interest in reality television shows wherein contestants formally trade their privacy (as well as their very power of self-representation) for the benefit of inclusion on a television show and potential financial gain. Raley then introduces the idea of a ‘superpanopticon’ which must necessarily exist to register our interpellation by databases; while this for her is a link to a discussion of ‘corpocracy’ and ‘cybernetic capitalism,’ I identify it for my purposes as a term to describe the reality show institution which mines identity and individual action to construct the desired narrative for commercial purposes.

Aiding my consideration of a show like Survivor as one such ‘superpanopticon,’ Raley cites Kevin D. Haggerty and Richard V. Ericson who posit that surveillance “operate[s] through processes of disassembling and reassembling. People are broken down into a series of discrete informational flows which are stabilized and captured according to pre-established classificatory criteria. They are then transported to centralized locations to be reassembled and combined in ways that serve institutional agendas” (127). This is a lengthy excerpt, but I find it so startlingly appropriate to describe how contestants can be treated on reality television shows like Survivor. Contestants’ thoughts and feelings are mined on location through individual interviews known as ‘confessionals,’ which are then transported to a central editing station to be assembled together at the whim of the producers. My initial project idea to explore Survivor confessional frequencies by episode and season draws from this notion (which I could not have stated so well before) that players are mere building blocks in a larger narrative, who can be highlighted or backgrounded as the show desires. After reading Gitelman’s caution regarding the ethics of dataveillance, though, I begin to wonder: Even if I can use the confessional data to learn and expose something about the superpanopticon of Survivor, am I still inescapably guilty of drawing on the identities and representations of others for the alleged benefit of collective knowledge?
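The confessional-frequency idea can be sketched in a few lines of Python. The sample log and the helper functions below are hypothetical illustrations of the tallying, not actual Survivor data:

```python
from collections import Counter, defaultdict

# Hypothetical log of confessionals: (episode, contestant) pairs, as they
# might be hand-tallied while rewatching a season.
confessionals = [
    (1, "Purple Kelly"), (1, "Russell"), (1, "Russell"),
    (2, "Russell"), (2, "Sash"), (3, "Russell"),
]

def frequencies_by_episode(log):
    """Tally confessionals per contestant within each episode."""
    per_episode = defaultdict(Counter)
    for episode, contestant in log:
        per_episode[episode][contestant] += 1
    return per_episode

def season_totals(log):
    """Total confessional count per contestant across the season."""
    return Counter(name for _, name in log)
```

Comparing `season_totals` across contestants is one crude way to see who the edit foregrounds and who it backgrounds, episode by episode.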


“Purple” Kelly Shinn, infamous as one of the most under-edited contestants in Survivor history


Overcooked Data

Gitelman’s idea that “raw data” is an oxymoron, and the various degrees to which data can be cooked, has taken me backward historically in my research area, rather than where I thought I would go—looking at the digitizing of Islamic literature, and the Quran in particular. The various cooking methods have gotten me to think about its status as a purely oral text for the first many years of its existence, after which it transitioned back and forth between written and oral forms (while remaining primarily oral) until about twenty years after the Prophet Muhammad’s death, when the Caliph Uthman began the process of collecting and canonizing the chapters of the Quran into the version we have today (most scholars agree on this narrative).

If this is the case, the text itself was “generated” (not “discovered”), according to Islamic tradition, in a pure “raw” form through an oral transmission to the Prophet over a period of years from the angel Gabriel. The only true raw form of it can then only be the recited word, which makes sense given the emphasis and importance placed on this quality of the text and its place in the lives of Muslims, who pray five times a day, reciting these words aloud or in their own heads. After revelation it was then taught to the Prophet’s companions, who memorized the text in its entirety. Even here a certain stage of “cooking” occurs in terms of the order of the Quran, because the version that is accepted today was not compiled chronologically according to the order of revelation. The memorizers of the Quran continued the oral tradition and recitation, sometimes with small portions written down as memory aids (the first time the Quran was ever turned into text), until the compilation began under Uthman’s caliphate.

Fast forward many years and the Quran has been turned into cassette tapes, CDs, audio files, and digital versions for computers and smartphones. The number of forms it takes adds different layers to the “cooking” process, sometimes taking the oral component into account and sometimes not. If we can refer to the original recitation as “raw data,” then what we have now has been cooked too many times to count (not even mentioning translation)—which is usually fuel for some scholars (usually from the West) to question the authenticity and/or completeness of the holy book. In any case, both articles we read this week have got me thinking about whether or not it is truly possible to experience the Quran in a truly raw and uninterrupted way. Gitelman mentions the lack of objectivity in machines when reproducing pieces of art, and I wonder whether Western Muslims in particular—many of whom get their religious literature in bits and pieces on their phones or through social media (Twitter handles and memes devoted entirely to spreading inspirational and life-advising hadith or verses of the Quran)—are, regardless of the “aggregate quality of data,” receiving tiny pieces of data so overcooked that it is perhaps impossible to grasp the actual essence of the message, which is already an esoteric and lifelong endeavor.

And here’s a recitation by a world-famous reciter; there are so many things happening around her—interfering with, talking over, and muffling the text—that I hardly think “raw” is the right word for this bit of data.




The Wide Cooker

This week, Lisa Gitelman’s “Raw Data is an Oxymoron” and Lev Manovich’s “The Science of Culture?” introduce data studies as a contested and multifarious field. Gitelman’s piece seems to address an imagined scholarly community that presumably treats data like a fresh pair of glasses: a clarifying, objective, and unquestionably useful tool for seeing the world. The thrust of Gitelman’s argument is to implore scholars to pay attention to the “pre-cooked” hermeneutics of data-vision and to point out the hegemonic ways in which data and other empirical tools have evolved over time. Additionally, she draws a throughline between science studies and data studies by pointing to how objectivity in both cases is mythical.

On this last point, Gitelman and Manovich both treat data studies as a vibrational nexus between science and humanistic study—an idea that has strongly resonated with me in all of my own research. As I think about the possibility of using data visualization as a critical lens from which to consider either climate discourse or the circulation of fetal images, it becomes imperative for me not to pick a scientific view over a humanistic one or vice versa. I really want to hold these two tensions together: the long, globalizing view that data can offer (which will possibly be generalizing but maybe also generative) and the potential of data to speak to human actants, or what Manovich refers to as the “individual and particular” (Manovich 2016, 9).

Gitelman says that the imagination of data is always “an act of classification” (Gitelman 172), and I believe that to be mostly true. But what I sensed after reading Manovich’s Cultural Analytics manifesto is that classification doesn’t have to constrict our study. Manovich’s call for “wide data” argues against the categorization of data into discrete dimensions or variables. And while he doesn’t state it explicitly in this essay, I think what he means is that there is a use for views of data that are continuous rather than discrete. It reminds me of something Alan Liu once said about his own work in the digital humanities: rather than trying to classify and plot the sentiments we encounter in large data sets, we can construct heat maps and look for hot spots.

I can think of ways that a wide cooker could be useful in both of my projects. Maybe I can mine a lot of Instagram images under a hashtag like #globalwarming and allow clusters of affiliation to form in ways that they could not if I just plotted them temporally or by geolocation. Or maybe I could cull information from several pro-life websites and try to simulate my own anti-abortion page using textual and visual data. I also think Professor Sakr’s mosaics would be a great example of “wide data.” What other kinds of “wide cooking” could we come up with?
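As a toy version of that hashtag-clustering idea, here is a minimal Python sketch that groups images by the overlap (Jaccard similarity) of their tag sets, letting clusters of affiliation form without any temporal or geographic plotting. The image ids, tags, and threshold are all hypothetical; a real pipeline would likely cluster on image features rather than tags alone.

```python
def jaccard(a, b):
    """Set overlap: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(images, threshold=0.5):
    """Greedy single-pass clustering: an image joins the first existing
    cluster whose seed tags it overlaps strongly enough, else it starts
    a new cluster."""
    clusters = []  # list of (seed_tags, [image_ids])
    for image_id, tags in images.items():
        for seed, members in clusters:
            if jaccard(seed, tags) >= threshold:
                members.append(image_id)
                break
        else:
            clusters.append((tags, [image_id]))
    return [members for _, members in clusters]

# Hypothetical scraped data: image id -> hashtag set
images = {
    "img1": {"globalwarming", "ice", "arctic"},
    "img2": {"globalwarming", "ice", "polar"},
    "img3": {"globalwarming", "protest", "march"},
}
```

With these toy tag sets, the melting-ice images end up in one cluster and the protest image in another, which is the kind of affiliation-based grouping I have in mind.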


Beginning With Friction

Though it seems a bit plain to say so, what I appreciate most about Lisa Gitelman’s approach in “Raw Data is an Oxymoron” is her insistence that data are invariably “cooked,” that they come to us “scrubbed” and involved in relations of “friction” (171). I am particularly struck by the latter of these two terms: friction. In her 2005 ethnography of resource development and extraction practices in late 1980s and early 1990s Indonesia (conveniently called Friction), Anna Lowenhaupt Tsing writes that friction both enables and constrains. To be sure, friction slows and inhibits motion, expressing itself as resistance. But paradoxically, it also makes movement as such possible. Without friction, Tsing reminds us, a tire simply spins in the air, propelling nothing at all. Even as friction holds us back, it moves us forward. It sets things in motion, shifts the scene, moves stuff around, forms, deforms, and reforms relations; keeping open the possibility of invention even as it holds certain arrangements in place.

Approaching data in these terms – as an ensemble of frictive relations, or what Gitelman elsewhere calls “potential connections” (172) – is productive in the context of my own project, which will attempt to visualize the relations between changing legal definitions of Nativeness, Indigenous depopulation, and territorial dispossession in Hawai’i. Beginning with friction – emphasizing the “worries, questions, and contests that assert or affirm what should count as data, or which data are good and less reliable, or how much data is enough” (171) – helps me to understand how two relatively similar datasets, such as the 1897 Kū’e Petitions against the Annexation of Hawai’i (see image below) and the Hawaiian Census of 1896, both of which sought to enumerate the Indigenous population of the Hawaiian islands at the close of the 19th century, are nonetheless quite distinct. Even as they claim the same populations, they marshal those populations differently, producing them as evidence of different phenomena, siphoning them into discontinuous and even oppositional political projects. In the very production of these populations as data, then, friction is produced; the data become freighted, vexed, and fraught, involved in a pitched contest over who will count, why, and to what ends. The process of turning populations into data, in other words, makes things happen. It moves bodies into and out of legibility, arranging them in particular ways; a frictive shifting that has had and continues to have profound (and often deleterious) consequences for Indigenous Hawaiians.
