Michael Zimmer offers an excellent discussion of this week’s controversy regarding Facebook’s removal of an image of two men kissing. I want to put this up next to the recent article by Mike Ananny in The Atlantic, where he interrogates the possible reasons why, when he went to load the gay social networking app Grindr, Google’s App Market ‘recommended’ an app that tracks sex offenders.

As we begin to unravel how and why content platforms and app stores make curatorial decisions about the content they provide, we are asking the kinds of questions both Zimmer and Ananny ask about these instances. Are we looking at the result of a human intervention or an algorithmic one? (Is it even possible or productive to draw this distinction so clearly?) Was this intentional or accidental? (And is it too simple to equate human judgment with an intentional choice and an algorithmic conclusion with a lack of intention?) Does this judgment, however it was made, adhere to or exceed the site’s stated rules and expectations? (And, implied in that, is a reprehensible judgment acceptable simply because it isn’t hypocritical?) And, perhaps the hardest question, what are the consequences of these decisions, for users and for the contours of public discourse? Does the removal of images of men kissing, while allowing thousands of images of heterosexual kisses to remain, help to marginalize public expressions of gay intimacy? Does the recommendation link between gay social life and sex offenders reinforce an association in some people’s minds about gay men as sexual predators?

I find all of these questions intensely important to ask, and am struck by the fact, or at least the perception, that this issue has become more publicly visible of late. Facebook has faced trouble over the years for how it applies its rules, particularly around nudity: much of that trouble came from the disputed removal of images of women breastfeeding. LiveJournal faced a similar controversy in 2006. Apple has drawn scrutiny and sometimes ire for its recent removals of apps from anti-gay churches, of political satire, and of an app for Wikileaks, but questions about what its review criteria are and when they will be applied have been raised since the app store first opened.

But perhaps what is trickiest here is to consider both of these examples together. What is the comprehensive way of understanding both kinds of interventions: the removal of content, and the shaping of how the content that does remain in an archive will be found and presented? In my own research I have been focusing on the former: the decisions about and justifications for removing content perceived as objectionable, or disallowing it in the first place. But in some ways, this kind of border patrol, of what does and does not belong in the archive, is the most mundane and familiar of these interventions. We know how to raise questions about what NBC will or will not show, what The New York Times will or will not print. We need to examine these kinds of judgments together with a spectrum of choices these sites and providers are increasingly willing and able to make:

- techniques for dividing the archive into protective categories (age barriers, nation-specific sub-archives)
- mechanisms for displaying or blocking content based on explicitly indicated user preferences
- predictive adjudications about whether to display something, based on aggregate user data (national origin, previously viewed or downloaded content, aggregate judgments based on the preferences of similar users)
- categorization and tagging of content to direct its flow
- search and recommendation mechanisms based on complex algorithmic combinations of aggregated user purchases or activity, semantic categories and meta-information
- value- or activity-based mechanisms for navigating content, such as ‘bestseller’ or ‘most emailed’ lists, offered as objective criteria
- structural mechanisms for the preferred display of content to first-time and to returning users
- choices about ‘featured’ or otherwise prioritized content
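
To make the range of these interventions concrete, here is a minimal sketch in Python of how a few of them (regional blocking, age barriers, explicitly indicated user preferences) might combine into a single decision about whether and how an item is shown. The data structures, field names, and thresholds are invented for illustration; this is not drawn from any actual platform’s code or policy.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified data structures -- not any real platform's schema.
@dataclass
class Item:
    title: str
    age_rating: int                       # minimum viewer age, set at review time
    blocked_regions: set = field(default_factory=set)
    flagged_suggestive: bool = False

@dataclass
class Viewer:
    age: int
    region: str
    safe_mode: bool = False               # an explicitly indicated user preference

def resolve_visibility(item: Item, viewer: Viewer) -> str:
    """Combine a few of the interventions listed above into a single outcome."""
    # Nation-specific sub-archives: in a blocked region, the item simply isn't there.
    if viewer.region in item.blocked_regions:
        return "hidden"
    # Age barriers: the item exists, but only behind a gate.
    if viewer.age < item.age_rating:
        return "age_gate"
    # Explicit user preference: an opt-in 'safe mode' filters flagged items.
    if viewer.safe_mode and item.flagged_suggestive:
        return "filtered"
    return "visible"

print(resolve_visibility(
    Item("concert clip", age_rating=18, blocked_regions={"XX"}, flagged_suggestive=True),
    Viewer(age=25, region="US", safe_mode=True),
))  # -> "filtered"
```

Even in a toy version like this, every flag, threshold, and default is a choice someone made and can revise, which is precisely what makes the list above worth examining as a whole.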

All of these are complex combinations of technical design and human judgment (whether made in anticipation of problematic content or in the moment of its encounter), and all of them struggle with values: the provider’s economic priorities and legal obligations, their assessment of the wants and hesitations of their community, and the broader cultural norms they believe or claim they are approximating.

These sites, I believe, still have the appearance of neutrality and totality on their side (maybe not Apple). Despite the increasing occurrence of these incidents, most users still experience these sites as open and all-encompassing, and most will not run into an edge in their own surfing, where something is simply disallowed. So the complex curation of these sites, along all of the dimensions I mentioned above, quietly shapes archives that, by and large, still feel unmediated — every video one could imagine, or whatever users want to post. To the degree that this perception persists (and is actively maintained by the providers themselves), it will remain difficult to raise the questions that Zimmer and Ananny and others are trying to raise: questions not just about the fact that these sites are curated, but about how the mechanisms by which they are curated, the subtle forces that shape what is available and how it is found, and the differing justifications for curating at all, together shape the digital cultural landscape, and subtly reinforce what Mary Gray (scroll down in the comments to Ananny’s essay) called the “cultural algorithms,” the associations and silences in our culture around controversial viewpoints, images, and ways of life.

Last week, YouTube announced on its company blog (in an entry titled “A YouTube for All of Us”) that it is tightening its restrictions on sexual content and profanity. Of course, YouTube has always had limits, mostly for pornography, spam, and gratuitous violence, handled primarily through automatic filtering that can spot X-rated scenes, and through the user community itself flagging inappropriate content for review. Now that user community is in an uproar about the recent announcement, because the restrictions will extend to sexually suggestive video and video that uses profanity. It’s not a surprise that sites like YouTube have to strike their own balance, between being an open platform for whatever users choose to post, and building a user community (not to mention a public brand) that’s acceptable to mainstream users and to the sponsors eager to sell to them. Censorship is hardly new to the Internet. What is new is the way YouTube intends to handle inappropriate videos: not only by removing some videos and placing age restrictions on others, but through “demotion.” “Videos that are considered sexually suggestive, or that contain profanity, will be algorithmically demoted on our ‘Most Viewed,’ ‘Top Favorited,’ and other browse pages.” This means that videos with too much profanity or sexually suggestive content will not be removed, but their popularity will be mathematically reduced, so they don’t show up on the lists of what’s most popular – censorship through technical invisibility. And we won’t know which videos, for what reasons. That YouTube can bury the rules, and their judgments, into the mechanisms by which users know what’s available and popular, points to the kinds of free speech dilemmas we’re likely to face in a digital future, and that we’re hardly prepared to think through.
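
We don’t know how YouTube actually computes its browse pages, but the basic idea of “demotion” can be sketched: a video’s raw popularity is discounted by some penalty before the list is assembled, so it stays on the site but slips out of view. In this illustrative Python sketch, the ‘flagged’ field and the penalty factor are my own inventions, standing in for whatever YouTube actually uses:

```python
# A sketch of demotion: flagged videos are not removed, but their score is
# discounted before the 'Most Viewed' list is computed. The 'flagged' field
# and DEMOTION_PENALTY are invented for illustration only.

DEMOTION_PENALTY = 0.1   # assumed discount applied to flagged videos

def most_viewed(videos, n=10):
    """Return the top-n videos ranked by (possibly demoted) view count."""
    def score(video):
        penalty = DEMOTION_PENALTY if video["flagged"] else 1.0
        return video["views"] * penalty
    return sorted(videos, key=score, reverse=True)[:n]

videos = [
    {"title": "a", "views": 900_000, "flagged": True},    # suggestive or profane
    {"title": "b", "views": 500_000, "flagged": False},
    {"title": "c", "views": 200_000, "flagged": False},
]
print([v["title"] for v in most_viewed(videos)])   # -> ['b', 'c', 'a']
```

Notice what the sketch makes visible: the video is never removed and its view count is untouched; only its position on the browse pages changes, which is exactly what makes the intervention so hard to see, let alone contest.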

I’ve been thinking a lot lately about the shape of cultural and political discourse in the contemporary digital environment. And there’s been no better place to consider it than the current U.S. presidential campaign. Sometimes I feel like the campaigns are simply working to fill my lectures – Obama Girl, the CNN/YouTube debates, the Hillary Clinton 1984 parody. The latest volley was the McCain web ad that called Obama the world’s biggest celebrity, with flashes of Paris Hilton and Britney Spears, and then wondered whether he is ready to lead. (Of course, there’s no logical connection between the two claims, and none is actually made in the ad. But whatever.) The video gets all sorts of play, making it to the top of online circulation sites like Google News and getting picked up and replayed by the traditional media. Then Paris Hilton responds on FunnyorDie.com with a surprisingly dry and pointed ad of her own – one that itself makes the rounds, enough that the McCain campaign has to respond.

But this note from Crooks and Liars is even more intriguing. A web ad released by the McCain campaign during the primary, trumpeting McCain as the “true conservative” in the vein of Ronald Reagan, has been removed from their site and from YouTube. John Perr notes that the removal is timely, considering McCain’s recent ads present him in his “maverick” role, a reach for independent voters. Not only is the video gone, but the press releases that originally accompanied the video are gone as well. But the curiosity is that the video is still available, and bloggers noting the removal can still point to it — posted back to YouTube by others, sitting in Google’s cache, or preserved in the Internet Archive.

Political campaigns are turning to online platforms for an array of modes of communicating to their base, to undecideds, to the press, to donors. A video posted to a campaign website and to YouTube can go up quickly, circulate widely, and, with any luck, get repeated on TV newscasts. It can take advantage of the social networks and email mailing lists being cultivated by the campaigns to keep supporters linked in, to whatever degree they’re willing. But there are some points of jeopardy in these online environments. And one is visible here: the way that the record remains, even when a candidate might want to shift the tone of their campaign or the emphasis on certain talking points.

It is not as if YouTube simply retains all submissions. Videos can be removed by their posters, by YouTube itself, or by YouTube on behalf of others (for instance, copyright holders). But, because of the material workings of the web (caching) and the efforts of users (saving the streamed video and reposting it), the record cannot be scrubbed clean. What exactly is kept, and when it will reappear, is unpredictable. But it cannot be erased with certainty. And its return can be fast and vast, if the moment calls for it. What you post can always return to haunt you — whether it’s The Daily Show calling it up to point out hypocrisy, or bloggers digging out a statement once made and since repudiated, or a journalist finding a position statement that preceded financial support from someone who may have benefited from it. The contours of political discourse are only now accommodating this particular feature of online environments, and whole industries (late night comedy, for instance) are emerging in the space provided by this phenomenon: the uncanny return of the once published and never removed.
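
As a small aside on how such traces get found again: the Internet Archive offers an availability endpoint for its Wayback Machine that reports the closest snapshot it holds for a given URL, which is one way bloggers and journalists locate material that has been taken down. A rough sketch in Python (the endpoint and response fields are as I understand them, and the URL queried is just a placeholder, not the removed McCain ad):

```python
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str):
    """Ask the Wayback Machine for its closest archived copy of a URL, if any."""
    query = urllib.parse.urlencode({"url": url})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# Example with a placeholder URL; a real query would use the page that disappeared.
print(closest_snapshot("http://www.example.com/"))
```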

Here’s the video: [embedded video]