April 2011


Michael Zimmer offers an excellent discussion of this week’s controversy regarding Facebook’s removal of an image of two men kissing. I want to put this up next to the recent article by Mike Ananny in The Atlantic, where he interrogates the possible reasons why, when he went to load the gay social networking app Grindr, Google’s App Market ‘recommended’ an app that tracks sex offenders.

As we begin to unravel how and why content platforms and app stores make curatorial decisions about the content they provide, we are asking the kinds of questions both Zimmer and Ananny ask about these instances. Are we looking at the result of a human intervention or an algorithmic one? (Is it even possible or productive to draw this distinction so clearly?) Was this intentional or accidental? (And is it too simple to equate human judgment with an intentional choice and an algorithmic conclusion with a lack of intention?) Does this judgment, however it was made, adhere to or exceed the site’s stated rules and expectations? (And, implied in that, is a reprehensible judgment acceptable simply because it isn’t hypocritical?) And, perhaps the hardest question, what are the consequences of these decisions, for users and for the contours of public discourse? Does the removal of images of men kissing, while allowing thousands of images of heterosexual kisses to remain, help to marginalize public expressions of gay intimacy? Does the recommendation link between gay social life and sex offenders reinforce an association in some people’s minds about gay men as sexual predators?

I find all of these questions intensely important to ask, and am struck by the fact, or at least the perception, that this issue has become more publicly visible of late. Facebook has faced trouble over the years for how it applies its rules, particularly around nudity: much of that trouble has come from the disputed removal of images of women breastfeeding. Livejournal faced a similar controversy in 2006. Apple has drawn scrutiny and sometimes ire for its recent removals of apps from anti-gay churches, apps offering political satire, and an app for Wikileaks, but questions about what its review criteria are and when they’ll be applied have been raised since the app store first opened.

But perhaps what is trickiest here is to consider both of these examples together. What is the comprehensive way of understanding both kinds of interventions: the removal of content, and the shaping of how the content that does remain in an archive will be found and presented? In my own research I have been focusing on the former: the decisions about and justifications for removing content perceived as objectionable, or disallowing it in the first place. But in some ways, this kind of border patrol, of what does and does not belong in the archive, is the most mundane and familiar of these interventions. We know how to raise questions about what NBC will or will not show, what The New York Times will or will not print. We need to examine these kinds of judgments together with a spectrum of choices these sites and providers are increasingly willing and able to make:

- techniques for dividing the archive into protective categories (age barriers, nation-specific sub-archives)
- mechanisms for displaying or blocking content based on explicitly indicated user preferences
- predictive adjudications on whether to display something, based on aggregate user data (national origin, previously viewed or downloaded content, judgments based on the preferences of similar users)
- categorization and tagging of content to direct its flow
- search and recommendation mechanisms based on complex algorithmic combinations of aggregated user purchases or activity, semantic categories and meta-information
- value- or activity-based mechanisms for navigating content, such as ‘bestseller’ or ‘most emailed’ lists, offered as objective criteria
- structural mechanisms for the preferred display of content to first-time and to returning users
- choices about ‘featured’ or otherwise prioritized content

All of these are complex combinations of technical design and human judgment (whether in anticipation of problematic content or in the moment of its encounter), and all struggle with values: the provider’s economic priorities and legal obligations, its assessment of the wants and hesitations of its community, and the broader cultural norms it believes or claims it is approximating.
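To make the difficulty concrete: an ‘algorithmic’ association of the kind Ananny encountered can emerge from nothing more than aggregate user behavior, with no person choosing it and no rule naming it. What follows is a deliberately toy sketch, not any platform’s actual system; the install data, app names, and logic are invented purely for illustration.

```python
# A deliberately toy sketch (hypothetical data and app names, not any real system):
# an item-to-item recommender built only from aggregate co-installation counts.
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical install logs: each set is the apps one anonymous user has installed.
install_logs = [
    {"grindr", "maps", "offender_tracker"},
    {"grindr", "offender_tracker", "news"},
    {"grindr", "music"},
    {"maps", "news"},
]

# Count how often each pair of apps is installed by the same user.
co_installs = defaultdict(Counter)
for apps in install_logs:
    for a, b in combinations(sorted(apps), 2):
        co_installs[a][b] += 1
        co_installs[b][a] += 1

def recommend(app, k=2):
    """Return the k apps most often installed alongside `app`."""
    return [name for name, _ in co_installs[app].most_common(k)]

print(recommend("grindr"))  # surfaces 'offender_tracker', a pairing no one chose
```

Even in this crude form, the pairing it surfaces carries a cultural weight that the counting itself cannot account for, which is exactly why the distinction between human and algorithmic judgment is so hard to draw cleanly.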

These sites, I believe, still have the appearance of neutrality and totality on their side (maybe not Apple). Despite the increasing occurrence of these incidents, most users still experience these sites as open and all-encompassing, and most will not run into an edge in their own surfing, where something is simply disallowed. So the complex curation of these sites, along all of the dimensions I mentioned above, quietly shapes archives that, by and large, still feel unmediated — every video one could imagine, or whatever users want to post. To the degree that this perception persists (and is actively maintained by the providers themselves), it will remain difficult to raise the questions that Zimmer and Ananny and others are trying to raise: questions not just about the fact that these sites are curated, but about how the mechanisms by which they are curated, the subtle forces that shape what is available and how it is found, and the differing justifications for curating at all, together shape the digital cultural landscape, and subtly reinforce what Mary Gray (scroll down in the comments to Ananny’s essay) called the “cultural algorithms,” the associations and silences in our culture around controversial viewpoints, images, and ways of life.

This blog is coming back to life.