

In a recent post over at Microsoft Research’s “Social media Collective” blog, danah boyd posed a query to her readers: she’s in the midst of a project with Henry Jenkins and Mimi Ito, a back-and-forth meant to advance their thinking on the idea of “participatory culture.” They are trying to pose their different ideas on the matter against one another, the goal being “to unpack our differences and agreements and identify some of the challenges that we see going forward.” This is no idle chatter; the dialogue that develops will eventually be published by Polity Press. So she wrote,

For the next three weeks, we’re going to individually reflect before coming back to begin another wave of deep dialoguing in the hopes that the output might be something that others (?you?) might be interested in reading. And here’s where we’re hoping that some of our fans and critics might be willing to provoke us to think more deeply.

  • What questions do you have regarding participatory culture that you would hope that we would address?
  • What criticisms of our work would you like to offer for us to reflect on?
  • What do you think that we fail to address in our work that you wish we would consider?

Given that I am interested in their work, and that I am at this very moment spending the week consulting with the Microsoft Research social media group, I felt a certain responsibility to lend a hand. This also let me push on some issues I have with the scholarly attention to participatory culture, that are bouncing off my own research on platforms and their curation of content. (If you’re unfamiliar with the way “participatory culture” is being used in this context, there’s a brief description in danah’s post, and if you want more you could read Jenkins’ characterization of it here.) For whatever it’s worth, I thought I would share my comments here; they may be of interest to some, and perhaps they’ll spur you to visit danah’s post, to add your own two cents. (If you do add your thoughts there, I’d love to know about them, feel free to add a comment here or just a trackback link.)

~ ~ ~

danah, sorry for the delayed response; I hope you’re already getting some good feedback on this. It’s a hard request that you’ve posed, of course, because it’s a big topic you’re tackling, and it’s hard to guess what you have and have not already put on the agenda for your discussions with Mimi and Henry. But a couple of things come to mind that, while you’re probably already considering them, I don’t mind reinforcing just in case.

* emphasizing the historical dimension

this is probably a no-brainer for you three, but (a) it’s so easy in these discussions for people to slip into a presentism that paints this all as a phenomenon coterminous with the web, that I think it has to be said again and again; but more importantly, (b) I’d like to hear your argument about the shape of that historical trajectory. So it’s one thing to say “zines, cable access, amateur radio, etc…” and show that there are precedents; it’s another to say something about that history. For instance, is Lessig right, that this was a mode of culture that was dominant for many centuries, until it was squashed by the “read-only” model of the major entertainment industries? Or was it alive and well all along, and it’s just that media culture offered something over and above it, i.e., the shared objects that a “mass” form seems uniquely able to offer? Did media culture sit alongside and overshadow participatory culture, or did it eat it, by drawing amateur talent into its routines and institutional obligations, by characterizing its path as the one artists should aspire to, and by building legal structures (i.e. modern copyright) that fundamentally disenfranchised folk forms of culture? Is participatory culture reemerging because of the web, or because of the concomitant shrinking of the media industries, or because of a political shift in Western public culture — or is “reemerging” the wrong word, because it was never gone in the first place; it’s just that it changes and rejuvenates with every available medium?

* clarifying the new power of the platforms that cater to participatory culture

this is my own angle coming through, for sure, but I noticed that the only place where the new media industries appear in your list of topics is in the issue of “privatization of culture.” My own current research is asking about the curatorial role these platforms and providers are playing, making decisions about what counts as “bad content” and developing modes of governance for managing that stuff away. That’s one angle. Another might be the kind of political and moral legitimacy these providers have because they play host to this participatory culture — something you can see in the way that Twitter gets heralded as playing a role in the Arab Spring, sometimes by Twitter PR itself. My main impulse in this work is to pull against the tendency for these stakeholders to disappear, to hide beneath the seemingly frictionless flow of content we’re all making. So discussing the ways in which they oversee participatory culture, the benefits they accrue from doing so, and the financial windfall they build as a result, would I think be an important element of the discussion. This might also require including the ecosystem of other private stakeholders and public standard-setters who are involved in and also benefit from this, from the consumer object makers like Apple to the organizations that set Internet governance policies.

* the materialities and modalities of participatory culture

I’m thinking here of the superb new book by Jonathan Sterne, MP3: The Meaning of a Format. He argues in his introduction for paying greater attention to “formats” — to examine where they came from, what assumptions were built into them along the way, and how those assumptions drive some of the cultural shifts that ride on them. You could expand that to think about all the little technologies, tools, formats, and the like that are now part and parcel of this participatory culture, at least at the moment. In my mind this requires more than simply noting that kids making YouTube videos can do so in part because digital cameras and video editing software have gotten cheaper, easier to use, and more widely available. It should go further, to ask about the design assumptions built into digital cameras or video editing software, to ask how these tools embed assumptions about what amateur production should look like and how it should circulate, how those assumptions are materialized into the artifacts themselves and circulated around them as promotional claims and instructional guidance. A smartphone app that not only lets you easily edit your video but also has a one-click upload to Facebook matters materially for who makes, who sees, and in what ways.

Crossposted from Culture Digitally.

This is about the fourth Olympics that’s been trumpeted as the first one to embrace social media and the Internet — just as, depending on how you figure it, it’s about the fourth U.S. election in a row that’s the first to go digital. It may be in the nature of new technologies that we appear perpetually, or at least for a very long time, to be just on the cusp of something. NBC has proudly trumpeted its online video streaming, its smartphone and tablet apps, and most importantly its partnership with microblogging platform Twitter. NBC regularly displays the #Olympics hashtag on the broadcasts, their coverage includes tweets and twit pics from athletes, and their website has made room for sport-specific Twitter streams.

It feels like an odd corporate pairing, at least from one angle. Twitter users have tweeted about past Olympics, for sure. But from a user’s perspective, it’s not clear what we need or get from a partnership with the broadcast network that’s providing exclusive coverage of the event. Isn’t Twitter supposed to be the place we talk about the things out there, the things we experience or watch or care about? But from another angle, it makes perfect sense. Twitter needs to reinforce the perception that it is the platform where chatter and commentary about what’s important to us should occur, and convince a broader audience to try it; it gets to do so here as “official narrator” of the Games. NBC needs ways to connect its coverage to the realm of social media, but without allowing anything digital to pre-empt its broadcasts. From a corporate perspective, interdependence is a successful economic strategy; from the users’ perspective, we want more independence between the two.

This makes the recent dustup about Twitter’s suspension of the account of Guy Adams, correspondent for The Independent (so perfect!), so troubling to so many. Adams had spent the first days of the Olympics criticizing NBC’s coverage of the games, particularly for time-delaying events to suit the U.S. prime time schedule, trimming the opening ceremony, and for some of the more inane commentary from NBC’s hosts. When Adams suggested that people should complain to Gary Zenkel, executive VP at NBC Sports and director of their Olympics coverage, and included Zenkel’s NBC email address, Twitter suspended his account.

Just to play out the details of the case, from the coverage that has developed thus far, we can say a couple of things. Twitter told Adams that his account had been suspended for “posting an individual’s private information such as private email address, physical address, telephone number, or financial documents.” Twitter asserts that it only considers rule violations if there is a complaint filed about them, suggesting that NBC had complained; in response, NBC says that Twitter brought the tweet (or tweets?) to NBC’s attention, who then submitted a complaint. Twitter has since reinstated Adams’ account, and reaffirmed the care and impartiality it takes in enforcing its rules.

Much of the conversation online, including on Twitter, has focused on two things: expressions of disappointment in Twitter for the perceived crime of shutting down a journalist’s account for criticizing a corporate partner, and a debate about whether Zenkel’s email should be considered public or private, and as such, making Twitter’s decision (despite its motivation) a legitimate or illegitimate interpretation of their own rules. This second question is an interesting one: Twitter’s rules do not clarify the difference between the “private email addresses” they prohibit, and whatever the opposite is. Is Zenkel’s email address public because he’s a professional acting in a professional capacity? Because it has appeared before on the web? Because it can be easily figured out, given the common firstname.lastname structure of NBC’s email addresses? (Alexis Madrigal at The Atlantic has a typically well-informed take on the issue.)

But I think this question of whether Twitter was appropriately acting on its own rules, and even the broader charge of whether its actions were motivated by their economic partnership with NBC, are both founded on a deeper question: what do we expect Twitter to be? This can be posed in naïve terms, as it often is in the heat of debate: are they an honorable supporter of free speech, or are they craven corporate shills? We may know these are exaggerated or untenable positions, both of them, but they’re still so appealing they continue to frame our debates. For example, in a widely circulated critique of Twitter’s decision, Jeff Jarvis proclaims that

For this incident itself is trivial, the fight frivolous. What difference does it make to the world if we complain about NBC’s tape delays and commentators’ ignorance? But Twitter is more than that. It is a platform. It is a platform that has been used by revolutionaries to communicate and coordinate and conspire and change the world. It is a platform that is used by journalists to learn and spread the news. If it is a platform it should be used by anyone for any purpose, none prescribed or prohibited by Twitter. That is the definition of a platform.

Adams himself titled his column for The Independent about the incident, “I thought the internet age had ended this kind of censorship.”

I want Jarvis and Adams to be right, here. But the reality is not so inspiring. We know that Twitter is neither a militant guardian of free speech nor a glorified corporate billboard, that Twitter’s relationship to NBC and other commercial partners matters but does not determine its decisions, that Twitter is attempting to be a space for contentious speech and have rules of conduct that balance many communities, values, and legal obligations. But exactly what we expect of Twitter in real contexts is imprecise, yet it matters for how we use it and how we grapple with a decision like the suspension of Adams’ account for the comments he made. And what these expectations are helps to reveal, may even constitute, our experience of digital culture as a space for public, critical, political speech.

What if we put these possible expectations on a spectrum, if only so we can step away from the extremes on either end:

  • Social media are private services; we sign up for them. Their rules can be arbitrary, capricious, and self-serving if they choose. They can partner with content providers, including privileging that content and protecting them from criticism. Users can take a walk if they don’t like it.
  • Social media are private services; we sign up for them. Their rules can be arbitrary and self-serving, but they should be fairly enforced. They can partner with content providers, including privileging that content and protecting them from criticism, but they should be transparent about that promotion.
  • Social media are private services used by the public; their rules are up to them, but should be justifiable and necessary; they should be fairly enforced, though taking into account the logistical challenges. They can partner with content providers, including privileging that content, but they should demarcate that content from what users produce.
  • Social media are private services used by the public; because of that public trust, those rules should balance honoring the public’s fair use of the network and protecting the service’s ability to function and profit; they should be fairly enforced, despite the logistical challenges. They can partner with content providers, including privileging that content; they should demarcate that content from what users produce.
  • Social media are private services and public platforms; because of that public trust, those rules should impartially honor the public’s fair use of the network; they should be fairly enforced, despite the logistical challenges. They can partner with sponsors that support this public forum through advertising, but they have a journalistic commitment to allow speech, even if it’s critical of their partners or of themselves.
  • Social media are private but have become public platforms; the only rules they can set should be in the service of adhering to the law, and protecting the public forum itself from the harm users can do to it (such as hate speech). They can partner with sponsors that support this public forum through advertising, but they have a journalistic commitment to allow speech, even if it’s critical of their partners or of themselves.
  • Social media are public platforms, and as such must have a deep commitment to free speech. While they can curtail the most egregious content under legal obligations, they should otherwise err on the side of allowing and protecting all speech, even when it is unruly, disrespectful, politically contentious, or critical of the platform itself. Sponsors and other corporate partnerships are nearly anathema to this mission, and should be constrained to only the most cordoned-off forms of advertising.
  • Social media should facilitate all speech and block none, no matter how reprehensible, offensive, dangerous, or illegal. Any commercial partnership is a suspicious distortion of this commitment. Users can take a walk if they don’t like it.

While the possibilities on the extreme ends of this spectrum may sound theoretically defensible to some, they are easily cast aside by test cases. Even the most ardent defender of free speech would pause if a platform allowed or defended the circulation of child pornography. And even the most ardent free market capitalist would recognize that a platform solely and capriciously in the service of its advertisers would undoubtedly fail as a public medium. What we’re left with, then, is the messier negotiations and compromises in the middle. Publicly, Twitter has leaned towards the public half of this spectrum: many celebrated when the company appealed court orders requiring them to reveal the identity of users involved in the Occupy protests, and Twitter has regularly celebrated itself for its role in protests and revolutions around the world. At the same time, they do have an array of rules that govern the use of their platform, rules that forbid inappropriate content, limit harassing or abusive behavior, prohibit technical tricks that can garner more followers, establish best practices for automated responders, and spell out privacy violations. Despite their nominal (and in practice substantive) commitment to protecting speech, they are a private provider that retains the rights and responsibilities to curate their user content according to rules they choose. This is the reality of platforms that we are reluctant to, but in the end must, accept.

What may be most uncharacteristic in the Adams case, and most troubling to Twitter’s critics, is not that Twitter enforced a vague rule, or did so when Adams was criticizing their corporate partner, in a way that, while scurrilous, was not illegal. It was that Twitter proactively identified Adams as a trouble spot for NBC — whether for his specific posting of Zenkel’s email or for the whole stream of criticism — and brought it to NBC’s attention. What Twitter did was to think like a corporate partner, not like a public platform. Of course it was within Twitter’s right to do so, and to suspend Adams’ account in response. And yes, there is some risk of lost good will and public trust. But the suspension is an indication that, while Twitter’s rhetoric leans towards the claim of a public forum, their mindset about who they are and what purpose they serve remains more enmeshed with their private status and their private investments than users might hope.

This is the tension lurking in Twitter’s apology about the incident, where they acknowledge that they had in fact alerted NBC about Adams’ post and encouraged them to complain, then acted on that complaint. “This behavior is not acceptable and undermines the trust our users have in us. We should not and cannot be in the business of proactively monitoring and flagging content, no matter who the user is — whether a business partner, celebrity or friend.” Twitter can do its best to reinstate that sense of quasi-journalistic commitment to the public. But the fact that the alert even happened suggests that this promise of public commitment, and the expectations we have of Twitter to hold to it, may not be a particularly accurate grasp of the way their public commitment is entangled with their private investment.

Cross posted at Culture Digitally.

Last week, Gawker received a curious document. Turned over by an aggrieved worker from the online freelance employment site oDesk, the document iterated, over the course of several pages and in unsettling detail, exactly what kinds of content should be deleted from the social networking site that had outsourced its content moderation to oDesk’s team. The social networking site, as it turned out, was Facebook.

The document, antiseptically titled “Abuse Standards 6.1: Operation Manual for Live Content Moderators” (along with an updated version 6.2 subsequently shared with Gawker, presumably by Facebook) is still available from Gawker. It represents the implementation of Facebook’s Community Standards, which present Facebook’s priorities around acceptable content, but stay miles back from actually spelling them out. In the Community Standards, Facebook reminds users that “We have a strict ‘no nudity or pornography’ policy. Any content that is inappropriately sexual will be removed. Before posting questionable content, be mindful of the consequences for you and your environment.” But, an oDesk freelancer looking at hundreds of pieces of content every hour needs more specific instructions on what exactly is “inappropriately sexual” — such as removing “Any OBVIOUS sexual activity, even if naked parts are hidden from view by hands, clothes or other objects. Cartoons / art included. Foreplay allowed (Kissing, groping, etc.). even for same sex (man-man / woman-woman”. The document offers a tantalizing look into a process that Facebook and other content platforms generally want to keep under wraps, and a mundane look at what actually doing this work must require.

It’s tempting, and a little easy, to focus on the more bizarre edicts that Facebook offers here (“blatant depictions of camel toes” as well as “images of drunk or unconscious people, or sleeping people with things drawn on their faces” must be removed; pictures of marijuana are OK, as long as it’s not being offered for sale). But the absurdity here is really an artifact of having to draw this many lines in this much sand. Any time we play the game of determining what is and is not appropriate for public view, in advance and across an enormous and wide-ranging amount of content, the specifics are always going to sound sillier than the general guidelines. (It was not so long ago that “American Pie’s” filmmakers got their NC-17 rating knocked down to an R after cutting the scene in which the protagonist has sex with a pie from four thrusts to two.)

Lines in the sand are like that. But there are other ways to understand this document: for what it reveals about the kind of content being posted to Facebook, the position in which Facebook and other content platforms find themselves, and the system they’ve put into place for enforcing the content moderation they now promise.

Facebook or otherwise, it’s hard not to be struck by the depravity of some of the stuff that content moderators are reviewing. It’s a bit disingenuous of me to start with camel toes and man-man foreplay, when what most of this document deals with is so, so much more reprehensible: child pornography, rape, bestiality, graphic obscenities, animal torture, racial and ethnic hatred, self-mutilation, suicide. There is something deeply unsettling about this document in the way it must, with all the delicacy of a badly written training manual, explain and sometimes show the kinds of things that fall into these categories. In 2010, the New York Times reported on the psychological toll that content moderators, having to look at this “sewer channel” of content reported to them by users, often experience. It’s a moment when Supreme Court Justice Potter Stewart’s old saw about pornography, “I know it when I see it,” though so problematic as a legal standard, does feel viscerally true. It’s a disheartening glimpse into the darker side of the “participatory web”: no worse or no better than the depths that humankind has always been capable of sinking to, though perhaps boosted by the ability to put these coarse images and violent words in front of the gleeful eyes of co-conspirators, the unsuspecting eyes of others, and sometimes the fearful eyes of victims.

This outpouring of obscenity is by no means caused by Facebook, and it is certainly reasonable for Facebook to take a position on the kinds of content it believes many of its users will find reprehensible. But, that does not let Facebook off the hook for the kind of position it takes: not just where it draws the lines, but the fact that it draws lines at all, the kind of custodial role it takes on for itself, and the manner in which it goes about performing that role. We may not find it difficult to abhor child pornography or ethnic hatred, but we should not let that abhorrence obscure the fact that sites like Facebook are taking on this custodial role — and that while goofy frat pranks and cartoon poop may seem irrelevant, this is still public discourse. Facebook is now in the position of determining, or helping to determine, what is acceptable as public speech — on a site in which 800 million people across the globe talk to each other every day, about all manner of subjects.

This is not a new concern. The most prominent controversy has been about the removal of images of women breastfeeding, which has been a perennial thorn in Facebook’s side; but similar dustups have occurred around artistic nudity on Facebook, political caricature on Apple’s iPhone, gay themed books on Amazon, and fundamentalist Islamic videos on YouTube. The leaked document, while listing all the things that should be removed, is marked with the residue of these past controversies, if you know how to look for them. The document clarifies the breastfeeding rule, a bit, by prohibiting “Breastfeeding photos showing other nudity, or nipple clearly exposed.” Any commentary that denies the existence of the Holocaust must be escalated for further review, not surprising after years of criticism. Concerns for cyber-bullying, which have been taken up so vehemently over the last two years, appear repeatedly in the manual. And under the heading “international compliance” are a number of decidedly specific prohibitions, most involving Turkey’s objection to their Kurdish separatist movement, including prohibitions on maps of Kurdistan, images of the Turkish flag being burned, and any support for PKK (The Kurdistan Workers’ Party) or their imprisoned founder Abdullah Ocalan.

Facebook and its removal policies, and other major content platforms and their policies, are the new terrain for longstanding debates about the content and character of public discourse. That images of women breastfeeding have proven a controversial policy for Facebook should not be surprising, since the issue of women breastfeeding in public remains a contested cultural sore spot. That our dilemmas about terrorism and Islamic fundamentalism, so heightened over the last decade, should erupt here too is also not surprising. The dilemmas these sites face can be seen as a barometer of our society’s pressing concerns about public discourse more broadly: how much is too much; where are the lines drawn and who has the right to draw them; how do we balance freedom of speech with the values of the community, with the safety of individuals, with the aspirations of art and the wants of commerce.

But a barometer simply measures where there is pressure. When Facebook steps into these controversial issues, decides to authorize itself as custodian of content that some of its users find egregious, establishes both general guidelines and precise instructions for removing that content, and then does so, it is not merely responding to cultural pressures, it is intervening in them, reifying the very distinctions it applies. Whether breastfeeding is made more visible or less, whether Holocaust deniers can use this social network to make their case or not, whether sexual fetishes can or cannot be depicted, matters for the acceptability or marginalization of these topics. If, as is the case here, there are “no exceptions for news or awareness-related content” to the rules against graphic imagery and speech, well, that’s a very different decision, with different public ramifications, than if news and public service did enjoy such an exception.

But the most intriguing revelation here may not be the rules, but how the process of moderating content is handled. Sites like Facebook have been relatively circumspect about how they manage this task: they generally do not want to draw attention to the presence of so much obscene content on their sites, or that they regularly engage in “censorship” to deal with it. So the process by which content is assessed and moderated is also opaque. This little document brings into focus a complex chain of people and activities required for Facebook to play custodian.

The moderator using this leaked manual would be looking at content already reported or “flagged” by a Facebook user. The moderator would either “confirm” the report (thereby deleting the content), “unconfirm” it (the content stays) or “escalate” it, which moves it to Facebook for further or heightened review. Facebook has dozens of its own employees playing much the same role; contracting out to oDesk freelancers, and to companies like Caleris and Telecommunications On Demand, serves as merely a first pass. Facebook also acknowledges that it looks proactively at content that has not yet been reported by users (unlike sites like YouTube that claim to wait for their users to flag before they weigh in). Within Facebook, there is not only a layer of employees looking at content much as the oDesk workers do, but also a team charged with discussing truly gray area cases, empowered both to remove content and to revise the rules themselves.
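The three-way decision described above — confirm, unconfirm, or escalate — can be sketched in a few lines of code, purely as an illustration of the workflow’s shape. To be clear, all the names here are hypothetical; this is not Facebook’s or oDesk’s actual system, just a minimal sketch of the first-pass review step:

```python
# Illustrative sketch of the first-pass moderation decision described above.
# All names are hypothetical; this is not Facebook's or oDesk's actual code.

from dataclasses import dataclass


@dataclass
class FlaggedItem:
    content_id: str
    reason: str  # the rule the reporting user cited


def moderate(item: FlaggedItem, violates_rule, is_gray_area) -> str:
    """First-pass review of a user-flagged item.

    Returns one of:
      "escalate"  - passed up for further, in-house review
      "confirm"   - report upheld; the content is deleted
      "unconfirm" - report rejected; the content stays
    """
    if is_gray_area(item):
        return "escalate"
    if violates_rule(item):
        return "confirm"
    return "unconfirm"


# Example: a report that clearly matches a rule in the manual is confirmed.
item = FlaggedItem("photo-123", "graphic violence")
decision = moderate(
    item,
    violates_rule=lambda i: i.reason == "graphic violence",
    is_gray_area=lambda i: False,
)
print(decision)  # prints "confirm"
```

Even in this toy form, the design choice is visible: the freelance clickworker never interprets the guidelines, only applies the predicates handed down to them, while the genuinely hard cases are routed elsewhere.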

At each level, we might want to ask: What kind of content gets reported, confirmed, and escalated? How are the criteria for judging determined? Who is empowered to rethink these criteria? How are general guidelines translated into specific rules, and how well do these rules fit the content being uploaded day in and day out? How do those involved, from the policy setter down to the freelance clickworker, manage the tension between the rules handed to them and their own moral compass? What kind of contextual and background knowledge is necessary to make informed decisions, and how is the context retained or lost as the reported content passes from point to point along the chain? What kind of valuable speech gets caught in this net? What never gets posted at all, that perhaps should?

Keeping our Facebook streets clean is a monumental task, involving multiple teams of people, flipping through countless photos and comments, making quick judgments, based on regularly changing proscriptions translated from vague guidelines, in the face of an ever-changing, global, highly contested, and relentless flood of public expression. And this happens at every site, though implemented in different ways. Content moderation is one of those undertakings that, from one vantage point, we might say it’s amazing that it works at all, and as well as it does. But from another vantage point, we should see that we are playing a dangerous game: the private determination of the appropriate boundaries of public speech. That’s a whole lot of cultural power, in the hands of a select few who have a lot of skin in the game, and it’s being done in an oblique way that makes it difficult for anyone else to inspect or challenge. As users, we certainly cannot allow ourselves to remain naive, believing that the search engine shows all relevant results, the social networking site welcomes all posts, the video platform merely hosts what users generate. Our information landscape is a curated one. What is important, then, is that we understand the ways in which it is curated, by whom and to what ends, and engage in a sober, public conversation about the kind of public discourse we want and need, and how we’re willing to get it.

This article first appeared on Salon.com, and is cross-posted at Culture Digitally and Social Media Collective.

Just a quick follow up on the discussion of SOPA; people keep asking me what kind of legislation would be more appropriate than SOPA and PIPA, and that might have a better chance of gaining the support of the technology industries, users, and Congress. I’m not in the business of writing laws, but as a start, my sense of it is that there are two kinds of infringement: first, there are underground sites and networks dedicated to trading copyrighted music, software, games, and movies; they are determined to elude regulations, they often move offshore or spread their resources across national jurisdictions to make prosecution harder, and they are technologically sophisticated enough to work just with numerical IP addresses, set up mirror sites, and move when one site gets shut down. The second kind of infringement is when some fan, who may not know or appreciate the rules of copyright, uploads a clip to YouTube.

The mistake the entertainment industry continues to make is that they want to stop both kinds of piracy, and they seem unwilling to admit that the two are different and to start dealing with them as separate problems, with different tools, and with a different “threat level” in their rhetoric. SOPA was problematic in so many respects, but in particular because it tried to address both kinds of piracy at once, and failed to handle either appropriately. The kinds of measures it was suggesting for “rogue, foreign websites” (let’s assume they meant the hardcore piracy networks) wouldn’t be enough: if the DoJ got a court order to remove these sites from Google’s search and the major ISPs, you or I might not be able to access these sites. But determined file traders don’t find them through Google. And SOPA got so much blowback because it also tried to include the second kind of piracy at the same time – which, in fact, is handled relatively well by the “notice-and-takedown” rules that already apply to content platforms like YouTube.

It’s not only that these two kinds of piracy are so different that they require distinctly different approaches; it’s that the entertainment industry needs to let go of trying to squelch them both in the same breath. If they could start distinguishing the two, and make clear that they don’t want to catch YouTube and Facebook in their net in the process, I think the technology industries would be more willing to develop and uphold gentle norms and procedures for the kinds of infringement that may happen on their networks.

[Cross posted on Culture Digitally and MSR Social Media Collective]

Since I supported the blacking out of the MSR Social Media Collective blog, to which I sometimes contribute, and of Culture Digitally, which I co-organize, in order to join the SOPA protest led by the “Stop American Censorship” effort, the Electronic Frontier Foundation, Reddit, and Wikipedia, I thought I should weigh in with my own concerns about the proposed legislation.

While it’s reasonable for Congress to look for progressive, legislative ways to enforce copyrights and discourage flagrant piracy, SOPA (the Stop Online Piracy Act) and PIPA (the Protect IP Act), now under consideration, are a fundamentally dangerous way to go about it. Their critics have raised many compelling objections [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. But in my eyes, the bills are most dangerous because of their underlying logic: policing infringement by rendering sites invisible.

Under SOPA and PIPA, if a website is even accused of hosting or enabling infringing materials, the Attorney General can order search engines to delete that site from their listings, require ISPs to block users’ access to it, and demand that payment services (like PayPal) and advertising networks cancel their accounts with it. (This last step can even be taken by copyright holders themselves, with only a good faith assertion that the site in question is infringing.) What a tempting approach to policing the Internet: rather than pursuing and prosecuting this site and that site, in an endless game of whack-a-mole, just turn to the large-scale intermediaries, and use their power to make websites available in order to make them unavailable. It shows all too plainly that the Internet is not some wide open, decentralized, unregulatable space, as some have believed. But it undercuts the longstanding American tradition of how to govern information, which has always erred on the side of letting information, even abhorrent or criminal information, be accessible to citizens, so we can judge for ourselves. Making it illegal to post something is one thing, but wiping the entire site clean off the board as if it never existed is another.

Expunging an infringing site so that it cannot be found is problematic in itself, a clear form of “prior restraint.” But the problem is exacerbated by the fact that whole sites might be rendered invisible on the basis of just bits of infringing content they may host. This is particularly troubling for sites that host user-generated content, where one infringing thread, post, or community might co-exist amidst a trove of other legitimate content. Under SOPA and PIPA, a court order could remove not just the offending thread but the entire site from Google’s search engine, from ISPs, and from ad networks, all in a blink.

These are the same strategies not only that China, Iran, and Vietnam currently use to restrict political speech (as prominent critics have charged), but that were recently used against Wikileaks right here at home. When Amazon kicked Wikileaks off its cloud computing servers, when Wikileaks was de-listed by one DNS operator, when Mastercard and Paypal refused to take donations for the organization, they were attempting to render Wikileaks invisible before a court ever determined, or even alleged, that Wikileaks had broken any laws. So it is no hypothetical that this tactic of rendering invisible will be dangerous not only for commercial speech or the expressive rights of individual users, but for vital, contested, political speech. SOPA and PIPA would simply organize these tactics into a concerted, legally enforced effort to erase, which all search engines and ISPs would be obligated to impose.

A lighthearted aside: In the film Office Space, the soulless software company chose not to fire the hapless Milton. Instead, they took away his precious stapler, moved him to the basement, and simply stopped sending him paychecks. We laughed at the blank-faced cruelty, because we recognized how tempting this solution would be, a deft way to avoid having to fire someone to their face. Congress is considering the same “Bobs” strategy here. But while it may be fine for comedy, this is hardly the way to address complex legal challenges around the distribution of information, challenges that should be dealt with in the clear light of a courtroom. And it risks rendering invisible elements of the web that deserve to remain.

We are at a point of temptation. The Internet is both so powerful and so unruly because anyone can add their site to it (be it noble or criminal, informative or infringing) and it will be found. It depends on, and presumes, a principle of visibility. Post the content, and it is available. Request it, from anywhere in the world, and the DNS servers will find it. Search for it in Google, and it will appear. But, as those who find this network most threatening come calling, with legitimate (at least in the abstract) calls to protect children / revenue / secrets / civility, we will be sorely tempted to address these challenges simply by wiping them clean off the network.

This is why the responses to SOPA and PIPA, most prominently the January 18 blackouts by Reddit, Wikipedia, and countless blogs, are so important. Removing their content, even for a day, is meant to show how dangerous this forced invisibility could be. It should come as no surprise that, while many other Internet companies have voiced their concerns about SOPA, it is Wikipedia and Reddit that have gone the farthest in challenging the law. Not only do they host, i.e. make visible, an enormous amount of user-generated content; they are themselves governed in important ways by their users. Their decisions to support a blackout were themselves networked affairs that benefited from all of their users having an ability to participate — and recognized that commitment to openness as part of their fundamental mission.

Whether you care about the longstanding U.S. legal tradition of information freedoms, or the newly emergent structural logic of the Internet as a robust space of public expression, both require a new and firm commitment in our laws: to ensure that the Internet remains navigable, that sites remain visible, that pointers point and search engines list, regardless of the content. Sites hosting or benefitting from illegal or infringing content should be addressed directly by courts and law enforcement, armed with a legal scalpel that’s delicate enough to avoid carving off huge swaths of legitimate expression. We might be able to build a coalition of content providers and technology companies willing to partner on anti-piracy legislation, if copyright holders could admit that they need to go after the determined, underground piracy networks bent on evading regulation, and not in the same gesture put YouTube at risk for a video of a kid dancing to a Prince tune — there is a whole lot of middle ground there. But a policy premised on rendering parts of the web invisible is not going to accomplish that. And embracing this strategy of forced invisibility is too damaging to what the Internet is and could be as a public resource.

(Cross-posted at Culture Digitally and MSR’s Social Media Collective.)

Moveon.org began circulating this infographic yesterday; The (much more detailed) original is from OWNI.eu. It tells a now-familiar-but-still-important story about the increasing consolidation of commercial media (and by implication, a concern about its impact on public discourse). Despite the times, the attention here is not on online media or new forms of information distribution, though that attention would shift the image only slightly — where might we add Hulu as a “notable property”… under News Corp, GE, and Disney? We might also have to add Google, Apple, and Facebook. But would that change the basic concern? Do they shift the “staggering” percentage listed at the top? And what does “control” mean when we talk about not just content providers but distributors, platforms, and networks as well?

(Cross-posted at Hacktivision and Culture Digitally)

Benjamin Franklin, “Apology for Printers” (1731)

I’m going back to read some scholarship on journalistic objectivity; this quote was cited in Michael Schudson’s essay “The Objectivity Norm in American Journalism.” This is the best articulation I’ve come across of the idea of the “marketplace of ideas” and, with it, the call for editorial neutrality. Unfortunately, I can only agree with the first half of the statement. Still, well said.

I was interviewed by the NPR program To The Best Of Our Knowledge, for a program on trends. It just went up, if you want to take a listen: “What’s Hot and Why Not?” Mine is the first segment. Also pretty cool that they paired me with Grant McCracken, Butch Vig, and Dr Seuss! This was an extension of my Culture Digitally blog post that first addressed Twitter Trends: “Can an algorithm be wrong?”. (It was also cross-posted here and on Microsoft Research’s “Social Media Collective” blog, and was reprinted by Salon.com). I also just finished a piece for Limn that pushes on these concerns a bit more.

NPR just ran their interview with me on “Morning Edition” — you can hear the piece and read the transcript online. The piece makes the nice, general point that algorithms like Twitter Trends make choices about what kinds of topics to highlight and present back to users, though they might seem like neutral calculations. I try to make a more substantive claim, that to even think of it as “bias” is too simple a means for understanding both the politics of the algorithm and the politics of how we represent the public back to itself, in the Culture Digitally blog post that first addressed Twitter Trends: “Can an algorithm be wrong?”. (It was also cross-posted here and on Microsoft Research’s “Social Media Collective” blog, and was reprinted by Salon.com). I’m also working on a piece for Limn that pushes on these two concerns a bit more.

(Cross-posted from Culture Digitally)

This conference call for papers looked particularly interesting.

The Nonhuman Turn in 21st Century Studies
May 3-5, 2012
Center for 21st Century Studies, University of Wisconsin, Milwaukee

abstracts due, Dec 19, 2011 (CFP)

This conference takes up the “nonhuman turn” that has been emerging in the arts, humanities, and social sciences over the past few decades. Intensifying in the 21st century, this nonhuman turn can be traced to a variety of different intellectual and theoretical developments from the last decades of the 20th century:

- actor-network theory, particularly Bruno Latour’s career-long project to articulate technical mediation, nonhuman agency, and the politics of things

- affect theory, both in its philosophical and psychological manifestations and as it has been mobilized by queer theory

- animal studies, as developed in the work of Donna Haraway, projects for animal rights, and a more general critique of speciesism

- the assemblage theory of Gilles Deleuze, Manuel DeLanda, Latour, and others

- new brain sciences like neuroscience, cognitive science, and artificial intelligence

- new media theory, especially as it has paid close attention to technical networks, material interfaces, and computational analysis

- the new materialism in feminism, philosophy, and marxism

- varieties of speculative realism like object-oriented philosophy, vitalism, and panpsychism

- and systems theory in its social, technical, and ecological manifestations

Such varied analytical and theoretical formations obviously diverge and disagree in many of their aims, objects, and methodologies. But they are all of a piece in taking up aspects of the nonhuman as critical to the future of 21st century studies in the arts, humanities, and social sciences.

(This is cross-posted from Culture Digitally.)

I just came across a notice for two new books, both of which seemed relevant to the ideas that get discussed on this blog. Thought I would pass them along.

Art Platforms and Cultural Production on the Internet
Olga Goriunova
http://www.routledge.com/books/details/9780415893107/
Routledge Research in Cultural and Media Studies

In this book, Goriunova offers a critical analysis of the processes that produce digital culture. Digital cultures thrive on creativity, developing new forces of organization to overcome repetition and reach brilliance. In order to understand the processes that produce culture, the author introduces the concept of the art platform, a specific configuration of creative passions, codes, events, individuals and works that are propelled by cultural currents and maintained through digitally native means. Art platforms can occur in numerous contexts bringing about genuinely new cultural production, that, given enough force, come together to sustain an open mechanism while negotiating social, technical and political modes of power.

Software art, digital forms of literature, 8-bit music, 3D art forms, pro-surfers, and networks of geeks are test beds for enquiry into what brings and holds art platforms together. Goriunova provides a new means of understanding the development of cultural forms on the Internet, placing the phenomenon of participatory and social networks in a conceptual and historical perspective, and offering powerful tools for researching cultural phenomena overlooked by other approaches.

Olga Goriunova is Senior Lecturer in Media Practices at London Metropolitan University, curator of the recent show Funware (Arnolfini, Mu, Baltan) and an editor of Computational Culture.

Wirelessness, Radical Empiricism in Network Cultures
Adrian Mackenzie
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=12285
The MIT Press

How has wirelessness—being connected to objects and infrastructures without knowing exactly how or where—become a key form of contemporary experience? Stretching across routers, smart phones, netbooks, cities, towers, Guangzhou workshops, service agreements, toys, and states, wireless technologies have brought with them sensations of change, proximity, movement, and divergence. In Wirelessness, Adrian Mackenzie draws on philosophical techniques from a century ago to make sense of this most contemporary postnetwork condition. The radical empiricism associated with the pragmatist philosopher William James, Mackenzie argues, offers fresh ways for matching the disordered flow of wireless networks, meshes, patches, and connections with felt sensations.

For Mackenzie, entanglements with things, gadgets, infrastructures, and services—tendencies, fleeting nuances, and peripheral shades of often barely registered feeling that cannot be easily codified, symbolized, or quantified—mark the experience of wirelessness, and this links directly to James’s expanded conception of experience. “Wirelessness” designates a tendency to make network connections in different times and places using these devices and services. Equally, it embodies a sensibility attuned to the proliferation of devices and services that carry information through radio signals. Above all, it means heightened awareness of ongoing change and movement associated with networks, infrastructures, location, and information.

The experience of wirelessness spans several strands of media-technological change, and Mackenzie moves from wireless cities through signals, devices, networks, maps, and products, to the global belief in the expansion of wireless worlds.

Adrian Mackenzie is Reader and Codirector at the Centre for Science Studies at Lancaster University, U.K, author of Cutting Code, software and society and Transductions, bodies and machines at speed and an editor of Computational Culture.

(Cross posted from Culture Digitally.)

I keep running into this question, such that it feels like it is making a resurgence: concerns about the sexualization of avatars in comics, graphic novels, and video games. Here is a pointed discussion on Racialicious about a video game design conference where one set of panelists made a little too plain what the criteria are for designing female video game heroes:

“After making a semi-disparaging remark about female characters drawn in a North American style, he concludes “I’d rather have female characters from Final Fantasy or Soul Caliber to sleep with.” This draws chuckles from the crowd. And there it was, the truth about character design that so many players know but most designers wouldn’t usually articulate: most of the egregiously sexist character designs are based on fuckability, rather than playability.

Drawing attractive characters isn’t a crime. But it starts to become grating when characters are not only attractive, but hypersexualized and mostly defined by their appearance. Even when characters aren’t hypersexualized, they can still be boring and flat in execution if there is more attention paid to animating her curves than the character herself.” (excerpt)

This follows on the heels of some discussion I’ve run across about the way female comic book heroes are being re-booted, in a way that over-emphasizes their sexuality. Ire and rebuke have arisen over the reboot of the Starfire character from Teen Titans in DC Comics’ Red Hood and the Outlaws, including a post at Comics Alliance in which the author’s seven-year-old daughter expresses her troubled ambivalence about the highly sexualized version of her once favorite character. There has also been recent discussion of the both sexualized and degraded version of Catwoman, in her graphic novel reboot and in the new Batman video game, Arkham City. Laura Hudson puts it this way:

“In Catwoman, this is what DC Comics tells me a male hero looks like, and what a female hero looks like:

This is not an anomaly. This is the primary message that I hear. And it is one that I only hear about the people who are like me — the women — and not the men…

Female characters are only insatiable, barely-dressed aliens and strippers because someone decided to make them that way. It isn’t a fact. It isn’t an inviolable reality, especially in a comic book universe that has just been rebooted. In the end, what matters is what you choose to show people and how you show them, not the reasons you make up to justify it. Because this is comics, everybody. You can make up anything.”

In a number of these comments, though the criticism is reserved for the publishers, there is often a suggestion that these sexualized portrayals have emerged from fanfiction. Hudson highlights “the aggressively fanfictiony on-panel sex between Batman and Catwoman” as indicative of the problem; Peterson’s quote from the Racialicious post continues, “But the model for art in our fandom communities is often sex appeal first, to the detriment of characters.”

The question of sexualized women’s bodies in media is certainly not a new one. But this reminds me of a conversation we have been having about what questions of “content” look like in new media scholarship: I keep having this itchy feeling that, though we’ve nominally charged ourselves with talking about “digital cultural production,” we’ve had little substantive discussion of digital cultural productions. We seem to have a lot of strength in examining technologies, distribution, producers, and user practices, but the dimension of what in the end gets made seems absent. Perhaps we are too quick to drop persistent questions about the character of the content, and the complexity of how these images emerge both from professional media organizations and from fan communities, and are appealing and troubling in both.

The interesting question is not whether Twitter is censoring its Trends list. The interesting question is, what do we think the Trends list is, what it represents and how it works, that we can presume to hold it accountable when we think it is “wrong?” What are these algorithms, and what do we want them to be?

(Cross posted from Culture Digitally.)

It’s not the first time it has been asked. Gilad Lotan at SocialFlow (and erstwhile Microsoft UX designer), spurred by questions raised by participants and supporters of the Occupy Wall Street protests, asks the question: is Twitter censoring its Trends list to exclude #occupywallstreet and #occupyboston? While the protest movement gains traction and media coverage, and participants, observers and critics turn to Twitter to discuss it, why are these widely-known hashtags not Trending? Why are they not Trending in the very cities where protests have occurred, including New York?

The presumption, though Gilad carefully debunks it, is that Twitter is, for some reason, either removing #occupywallstreet from Trends, or has designed an algorithm to prefer banal topics like Kim Kardashian’s wedding over important, contentious political debates. Similar charges emerged around the absence of #wikileaks from Twitter’s Trends when the trove of diplomatic cables was released in December of last year, as well as around the #demo2010 student protests in the UK, the controversial execution of #TroyDavis in the state of Georgia, the Gaza #flotilla, even the death of #SteveJobs. Why, when these important points of discussion seem to spike, do they not Trend?

Despite an unshakeable undercurrent of paranoid skepticism, in the analyses and especially in the comment threads that trail off from them, most of those who have looked at the issue are reassured that Twitter is not in fact censoring these topics. Their absence on the Trends listings is a product of the particular dynamics of the algorithm that determines Trends, and the misunderstanding most users have about what exactly the Trends algorithm is designed to identify. I do not disagree with this assessment, and have no particular interest in reopening these questions. Along with Gilad’s thorough analysis, Angus Johnston has a series of posts (1, 2, 3, and 4) debunking the charge of censorship around #wikileaks. Trends has been designed (and re-designed) by Twitter not to simply measure popularity, i.e. the sheer quantity of posts using a certain word or hashtag. Instead, Twitter designed the Trends algorithm to capture topics that are enjoying a surge in popularity, rising distinctly above the normal level of chatter. To do this, their algorithm is designed to take into account not just the number of tweets, but factors such as: is the term accelerating in its use? Has it trended before? Is it being used across several networks of people, as opposed to a single, densely-interconnected cluster of users? Are the tweets different, or are they largely re-tweets of the same post? As Twitter representatives have said, they don’t want simply the most tweeted word (in which case the Trend list might read like a grammar assignment about pronouns and indefinite articles) or the topics that are always popular and seem destined to remain so (apparently this means Justin Bieber).
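The general shape of such a surge-detecting measure can be sketched in a few lines of code. To be clear, this is a toy model of my own: the factor names, weights, and formula are invented for illustration, not taken from Twitter, whose actual algorithm is not public. It only shows how acceleration, breadth across users, originality of messages, and prior trending could each be folded into a single score.

```python
from collections import Counter

def trend_score(tweets_now, tweets_before, trended_before):
    """Toy illustration of surge-based trend scoring (hypothetical).

    tweets_now and tweets_before are lists of (user_id, text) pairs for
    the current and previous time windows. Every weight here is an
    invented stand-in for whatever Twitter actually uses.
    """
    volume_now, volume_before = len(tweets_now), len(tweets_before)
    if volume_now == 0:
        return 0.0

    # 1. Acceleration: reward a surge above the recent baseline,
    #    not raw volume, so steady chatter never spikes the score.
    acceleration = volume_now / max(volume_before, 1)

    # 2. Breadth: how many distinct users are tweeting, so a single
    #    densely interconnected cluster counts for less.
    users = {user for user, _ in tweets_now}
    breadth = len(users) / volume_now

    # 3. Originality: penalize windows dominated by identical retweets.
    distinct_texts = Counter(text for _, text in tweets_now)
    originality = len(distinct_texts) / volume_now

    # 4. Novelty: discount terms that have trended before.
    novelty = 0.5 if trended_before else 1.0

    return acceleration * breadth * originality * novelty
```

On this toy measure, the #wikileaks scenario scores low on every factor at once: a discussion that grows slowly (low acceleration), circulates within one cluster (low breadth), and consists largely of retweets (low originality) can have enormous raw volume and still never register as a “trend.”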

The charge of censorship is, on the face of it, counterintuitive. Twitter has, over the last few years, enjoyed and agreed with claims that it has played a catalytic role in recent political and civil unrest, particularly in the Arab world, wearing its political importance as a red badge of courage (see Shepherd and Busch). To censor these hot-button political topics from Trends would work against its current self-proclaimed purposes and, more importantly, its marketing tactics. And, as Johnston noted, the tweets themselves are available, many highly charged - so why, and for what ends, remove #wikileaks or #occupywallstreet from the Trends list, yet let the actual discussion of these topics run free?

On the other hand, the vigor and persistence of the charge of censorship is not surprising at all. Advocates of these political efforts want desperately for their topic to gain visibility. Those involved in the discussion likely have an exaggerated sense of how important and widely-discussed it is. And, especially with #wikileaks and #occupywallstreet, the possibility that Twitter may be censoring their efforts would fit their supporters’ ideological worldview: Twitter might be working against Wikileaks just as Amazon, Paypal, and Mastercard were; or in the case of #occupywallstreet, while the Twitter network supports the voice of the people, Twitter the corporation of course must have allegiances firmly intertwined with the fatcats of Wall Street.

But the debate about tools like Twitter Trends is, I believe, a debate we will be having more and more often. As more and more of our online public discourse takes place on a select set of private content platforms and communication networks, and these providers turn to complex algorithms to manage, curate, and organize these massive collections, there is an important tension emerging between what we expect these algorithms to be, and what they in fact are. Not only must we recognize that these algorithms are not neutral, and that they encode political choices, and that they frame information in a particular way. We must also understand what it means that we are coming to rely on these algorithms, that we want them to be neutral, we want them to be reliable, we want them to be the effective ways in which we come to know what is most important.

Twitter Trends is only the most visible of these tools. The search engine itself, whether Google or the search bar on your favorite content site (often the same engine, under the hood), is an algorithm that promises to provide a logical set of results in response to a query, but is in fact the result of an algorithm designed to take a range of criteria into account so as to serve up results that satisfy, not just the user, but the aims of the provider, their vision of relevance or newsworthiness or public import, and the particular demands of their business model. As James Grimmelmann observed, “Search engines pride themselves on being automated, except when they aren’t.” When Amazon, or YouTube, or Facebook, offer to algorithmically and in real time report on what is “most popular” or “liked” or “most viewed” or “best selling” or “most commented” or “highest rated,” they are curating a list whose legitimacy is based on the presumption that it has not been curated. And we want them to feel that way, even to the point that we are unwilling to ask about the choices and implications of the algorithms we use every day.

Peel back the algorithms, and this becomes quite apparent. Yes, a casual visit to Twitter’s home page may present Trends as an unproblematic list of terms, one that might appear a simple calculation. But take even a cursory look at Twitter’s explanations of how Trends works – in its policies and help pages, in its company blog, in tweets, in response to press queries, even in the comment threads of the censorship discussions – and Twitter lays bare the variety of weighted factors Trends takes into account, and cops to the occasional and unfortunate consequences of these algorithms. Wikileaks may not have trended when people expected it to because it had before; because the discussion of #wikileaks grew too slowly and consistently over time to have spiked enough to draw the algorithm’s attention; because the bulk of messages were retweets; or because the users tweeting about Wikileaks were already densely interconnected. When Twitter changed their algorithm significantly in May 2010 (though, undoubtedly, it has been tweaked in less noticeable ways before and after), they announced the change in their blog, explained why it was made – and even apologized directly to Justin Bieber, whose position in the Trends list would be diminished by the change. In response to charges of censorship, they have explained why they believe Trends should privilege terms that spike, terms that exceed single clusters of interconnected users, new content over retweets, and new terms over already trending ones. Critics gather anecdotal evidence and conduct thorough statistical analyses, using available online tools that track the raw popularity of words in a vastly more exhaustive and catholic way than Twitter does, or at least is willing to make available to its users.
The algorithms that define what is “trending” or what is “hot” or what is “most popular” are not simple measures; they are carefully designed to capture something the site providers want to capture, and to weed out the inevitable “mistakes” a simple calculation would make.

At the same time, Twitter most certainly does curate its Trends lists. It engages in traditional censorship: for example, a Twitter engineer acknowledges here that Trends excludes profanity, something that’s obvious from the relatively circuitous path that prurient attempts to push dirty words onto the Trends list must take. Twitter will remove tweets that constitute specific threats of violence, copyright or trademark violations, impersonation of others, revelations of others’ private information, or spam. (Twitter has even been criticized (1, 2) for not removing some terms from Trends, as in this user’s complaint that #reasonstobeatyourgirlfriend was permitted to appear.) Twitter also engages in softer forms of governance, by designing the algorithm so as to privilege some kinds of content and exclude others, and some users and not others. Twitter offers rules, guidelines, and suggestions for proper tweeting, in the hopes of gently moving users towards the kinds of topics that suit their site and away from the kinds of content that, were it to trend, might reflect badly on the site. For some of their rules for proper profile content, tweet content, and hashtag use, the punishment imposed on violators is that their tweets will not factor into search or Trends - thereby culling the Trends lists by culling what content is even in consideration for it. Twitter includes terms in its Trends from promotional partners, terms that were not spiking in popularity otherwise. This list, automatically calculated on the fly, is yet also the result of careful curation to decide what it should represent, what counts as “trend-ness.”
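This culling, which excludes content from consideration before any popularity arithmetic ever runs, and the injection of promoted terms on top of it, can be pictured as a pre-filtering pass. Again, this is a hypothetical sketch; the rule names and the two-stage structure are my own, loosely modeled on the kinds of exclusions described above rather than on Twitter’s actual code.

```python
def eligible_for_trends(term, metadata):
    """Toy pre-filter: a term failing any rule never even reaches the
    trend-scoring stage. The rules are illustrative inventions."""
    if metadata.get("profanity"):
        return False  # hard censorship: excluded vocabulary
    if metadata.get("flagged_spam"):
        return False  # rule violators' tweets don't factor into Trends
    if metadata.get("violates_guidelines"):
        return False  # softer governance via content guidelines
    return True

def candidate_trends(term_stats):
    """Combine partner-promoted terms, injected regardless of organic
    popularity, with the organically eligible ones."""
    promoted = [term for term, meta in term_stats.items()
                if meta.get("promoted")]
    organic = [term for term, meta in term_stats.items()
               if term not in promoted and eligible_for_trends(term, meta)]
    return promoted + organic
```

The point of the sketch is structural: by the time any “automatic” calculation happens, the pool of candidates has already been curated, both by removal and by insertion.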

Ironically, terms like #wikileaks and #occupywallstreet are exactly the kinds of terms that, from a reasonable perspective, Twitter should want to show up as Trends. If we take the position that Twitter is benefiting from its role in the democratic uprisings of recent years, that it is pitching itself as a vital tool for important political discussion, and that it wants to highlight terms that will support that vision and draw users to topics that strike them as relevant, #occupywallstreet seems to fit the bill. So despite carefully designing their algorithm away from the perennials of Bieber and the weeds of common language, Twitter still cannot always successfully pluck out the vital public discussion it might want. In this, Twitter is in agreement with its critics; perhaps #wikileaks should have trended after the diplomatic cables were released. These algorithms are not perfect; they are still cudgels, where one might want scalpels. The Trends list can often look, in fact, like a study in insignificance. Not only are the interests of a few often precisely irrelevant to the rest of us, but much of what we talk about on Twitter every day is in fact quite everyday, despite our most heroic claims of political import. But many Twitter users take Trends to be not just a measure of visibility but a means of visibility – whether or not the appearance of a term or #hashtag actually increases audience, which is not in fact clear. Trends offers to propel a topic towards greater attention, and offers proof of the attention already being paid. Or seems to.

Of course, Twitter has in its hands the biggest resource by which to improve their tool, a massive and interested user base. One could imagine “crowdsourcing” this problem, asking users to rate the quality of the Trends lists, and assessing these responses over time and a huge number of data points. But they face a dilemma: revealing the workings of their algorithm, even enough to respond to charges of censorship and manipulation, much less to share the task of improving it, risks helping those who would game the system. Everyone from spammers to political activists to 4chan tricksters to narcissists might want to “optimize” their tweets and hashtags so as to show up in the Trends. So the mechanism underneath this tool, which is meant to present a (quasi) democratic assessment of what the public finds important right now, cannot reveal its own “secret sauce.”

Which in some ways leaves us, and Twitter, in an unresolvable quandary. The algorithmic gloss of our aggregate social data practices can always be read/misread as censorship, if the results do not match what someone expects. If #occupywallstreet is not trending, does that mean (a) it is being purposefully censored? (b) it is very popular but consistently so, not a spike? (c) it is actually less popular than one might think? Broad scrapes of huge data, like Twitter Trends, are in some ways meant to show us what we know to be true, and to show us what we are unable to perceive as true because of our limited scope. And we can never really tell which it is showing us, or failing to show us. We remain trapped in an algorithmic regress, and not even Twitter can help, as it can’t risk revealing the criteria it used.
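The ambiguity between readings (b) and (c) is easier to see with a toy model. The sketch below is purely illustrative — Twitter’s actual algorithm is undisclosed, and the function name, thresholds, and numbers here are my own invention — but it captures the basic spike-over-baseline logic the post describes: a term “trends” only when its current rate of mentions far outstrips its historical baseline, so a hugely but consistently popular term never qualifies, no matter its raw volume.

```python
# Toy spike-based trend detection -- an illustration only, NOT
# Twitter's actual (undisclosed) algorithm. A term "trends" when
# its current mention rate far exceeds its historical baseline.

def is_trending(history, current, spike_ratio=3.0, min_mentions=50):
    """history: mentions per hour over a past window;
    current: mentions in the present hour."""
    baseline = sum(history) / len(history)
    # Require both a minimum volume and a sharp spike over baseline.
    return current >= min_mentions and current > spike_ratio * baseline

# A consistently popular term: enormous volume, but no spike.
bieber = [900] * 24
print(is_trending(bieber, 950))   # False: the high baseline swamps it

# A surging term: modest history, sudden burst.
occupy = [10] * 24
print(is_trending(occupy, 400))   # True: forty times its baseline
```

On this logic, a topic like #occupywallstreet could be both genuinely popular and correctly excluded, because its popularity is sustained rather than spiking — which is exactly why exclusion and censorship are indistinguishable from the outside.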

But what is most important here is not the consequences of algorithms, it is our emerging and powerful faith in them. Trends measures “trends,” a phenomenon Twitter gets to define and build into its algorithm. But we are invited to treat Trends as a reasonable measure of popularity and importance, a “trend” in our understanding of the term. And we want it to be so. We want Trends to be an impartial arbiter of what’s relevant… and we want our pet topic, the one it seems certain that “everyone” is (or should be) talking about, to be duly noted by this objective measure specifically designed to do so. We want Twitter to be “right” about what is important… and sometimes we kinda want them to be wrong, deliberately wrong – because that will also fit our worldview: that when the facts are misrepresented, it’s because someone did so deliberately, not because facts are in many ways the product of how they’re manufactured.


We don’t have a sufficient vocabulary for assessing the algorithmic intervention of a tool like Trends. We’re not good at comprehending the complexity required to make a tool like Trends – that seems to effortlessly identify what’s going on, that isn’t swamped by the mundane or the irrelevant. We don’t have a language for the unexpected associations algorithms make, beyond the intention (or even comprehension) of their designers. We don’t have a clear sense of how to talk about the politics of this algorithm. If Trends, as designed, does leave #occupywallstreet off the list, even when its use is surging and even when some people think it should be there: is that the algorithm correctly assessing what is happening? Is it looking for the wrong things? Has it been turned from its proper ends by interested parties? Too often, maybe in nearly every instance in which we use these platforms, we fail to ask these questions. We equate the “hot” list with our understanding of what is popular, the “trends” list with what matters. Most importantly, we may be unwilling or unable to recognize our growing dependence on these algorithmic tools, as our means of navigating the huge corpora of data that we must, because we want so badly for these tools to perform a simple, neutral calculus, without blurry edges, without human intervention, without having to be tweaked to get it “right,” without being shaped by the interests of their providers.

(Cross posted from Culture Digitally.)

By the end of the first workshop, we had turned an enormous range of ideas into five groupings of discussion topics. If we started with a sky full of stars, by the end we had formed them into loose constellations. Now, looking ahead towards the second workshop in April 2012, we want to sharpen this picture. So Hector and I have gone back through the workshop discussion and the more recent online conversations, and identified a “point of light” that stands out in each of those constellations (to overextend the metaphor). Each of these seems, to us, to be some kind of focal point in the conversations we’ve developed, something that seemed to emerge from our discussion. They are:

Affordances, technical agency, and the politics of technologies of cultural production: How do we develop our thinking about the way technologies shape behavior? In an attempt to avoid deterministic claims, have we overcorrected, leaving us unable to make sophisticated claims about technical agency? In what ways do we, quite regularly and deliberately, solicit being “determined” by the media and information technologies we use in producing and consuming culture and knowledge? [technical_agency]

Social and professional imaginaries: who is in a position to construct the visions of technology and digital culture that circulate around us? How do social imaginings of our relationship to digital cultural production become the professional imaginaries of designers, producers, information providers, critics? How do those imaginaries get embedded into the technologies and cultural texts we engage with? [social_imaginaries]

Theorizing practice: making do, in the shadow of ideologies: In trying to understand the micro-practices of cultural production, how do we get at the lived rhythms of people, the way their efforts to produce are embedded in, beholden to, and driven by their need to make do — whether that means economically, politically, culturally, personally, morally? What do the ideologies of “participation,” “engagement,” and “voice” mean in the mundane realities of lived experience? What does making do with digital technology and culture mean from different social positions, e.g. entitled and marginalized? [making_do]

Advancing the user debate: the dynamics of micro-participation: User participation in new media has been an important topic in recent years. Various theoretical perspectives, from “Participatory Culture” to political economy, have been deployed to understand UGC and its culture. A new and interesting phase in this phenomenon is emerging, however: “micro-participation” is increasingly becoming important. In this case it is not the labor-intensive production of novel content, such as machinima or user-produced videos on YouTube and their attendant communities, that is the subject of study. Rather, it is micro-participatory actions such as “likes,” status updates, images in profiles, mobile data, tags, and other quick, direct, or incidental contributions that in the aggregate amount to a wealth of UGC, giving value to social networks and other media businesses and compelling increasing forms of user surveillance and behavior modeling. What are the theoretical perspectives that would help map the power flows within these incidental matrices of participation? What laws, policies, and norms govern user and corporate expectations regarding micro-participatory or incidental data? [micro_practices]

A methodological quandary: our place in the research, as makers/participants/users ourselves: New media researchers are increasingly finding themselves in the position of participant observers.  Often they are also technical experts, users, and participants in the communities/technologies they study.  To what degree can method be augmented by this expertise? What are the boundary/identity problems confronted by researchers in this situation?  Advantages? Can we do research that is interventionist? Action oriented?  Can we devise research agendas that embrace these research identities?  What are the sites, theories, and approaches that would facilitate that agenda? [participatory_research]

Each has been given a tag [in the brackets above] so that future blog posts that resonate with them can be identified as such.

Carry on.

(cross-posted to Culture Digitally)

In my continuing effort to think carefully about the digitization of distribution, here’s a pretty helpful little bit of infographic frippery that documents the growth of digital distribution models in relation to their brick-and-mortar counterparts.

Might also be interesting to think about the explosion of “infographics” as a contemporary form of information presentation. And as much as I’d like to be critical, as is my tendency, I might have to sing their praises from the rooftops first.

(Cross post with Culture Digitally)

Venturing again into an example I do not know enough about, I wanted to recommend this Wired article on how the comics industry has been managing the shift to digital formats and distribution. The title, “The iPad Could Revolutionize the Comic Book Biz — or Destroy It” is deeply misleading, a predictable gloss from a magazine that has long trafficked in technological determinism. But the body of the article quietly goes about understanding the comic book industry in a much more nuanced way, refusing to frame this change as either sermon or eulogy. The key move, crucial to avoiding these traps, and so rare not just in the press but even in talk from information industries themselves, is to recognize that the audience is not homogenous, that they will not all live or die by print or by digital. The move to digital formats and distribution, while not inevitable, has clear economic momentum, but this does not mean that the only choices for “comic book fans” (as a homogenous block) are migration or exodus.

The article notes that, as independent comics publishers (here “independent” means not Marvel or DC) experiment with digital forms, particularly on tablets like the iPad, not only might this convince some collectors to migrate to digital, but more importantly it could reach “lapsed fans,” those who have dabbled in an interest in comics in the past but did not become regular buyers. Digital versions could reach an audience that the print form does not. Pulling these readers back into the fold (no pun intended), industry optimists suggest, might actually expand the readership for comics, smooth the way for digital form comics, while not immediately eating into sales at brick-and-mortar stores. Traditional fans raised on visiting the local store, having the paper copies in hand, and lovingly storing them in slipcovers, will for a while need to continue to have this form, suggesting that the transition to digital need not be total or instant. The article also notes the surprising power of vendors, who can still exact vengeance on a publisher who is too eager to privilege the digital, and the oversized influence of the two giants in the field, Marvel and DC, who can invest in digital formats without worrying about that blowback, and without bringing independents along with them.

Much of this may be wishful thinking, naive predictions designed to believe that the comics industry can prosper even as newspapers and magazines struggle. It could be dead wrong in its prognostications. And the lessons here may not hold for other industries. The reason I highlight this piece is that, too often, discussions of culture industries in transition fail to tell a complex story of what the industry already was. They fail to notice how different parts of the industry, the market, or the form itself will or could respond differently to emerging digital venues. They often fail to understand why the transition to digital is rarely a night-to-day switchover, that digital and print (or digital and analog, or online and material distribution) are likely to co-exist for some time. The wild overstatements around the transition in music, from the record industry, from Napster, from fans, from Apple, and from bands, were all wildly off the mark: music did not shrivel up and die, lawlessness did not prevail, and creativity was not set free. Yet things did change, and in ways that, when we look back in twenty years, may appear almost as momentous as everyone was saying — but not in such simple terms as those claims suggested.

Following Bourdieu, I find it more helpful to think about cultural production as a “field,” with many actors, organizations, and genres nestled together like bubbles in a glass, jostling for space, reaching momentary equilibrium before a new actor or technology or form pushes into the space and everything else has to adjust, re-settle, and sometimes pop. Digital formats and distribution opportunities may be a very large push, or be many little shoves all at once, but it does not wipe the field clean, and the adjustments made in response are myriad, complex, in competing directions, and to some degree unpredictable. It is those adjustments, more than “digital,” that explain what new equilibrium that field of cultural production will find in response.

The only thing that is notably absent in this article is the question of how the format and availability of comics affect current and future creators of comics. I suspect that the comic book store and the yearly Comic-con convention in San Diego are important environments that help kids become comic artists, that not only nurture skills but also offer a sense of community and purpose. Digital distribution lacks the lived locations in which comics circulated. On the other hand, it may offer different kinds of social spaces. When we think about digital distribution we too often tend to think about the business of providing culture and the audiences to which it is provided, but the cultivation of amateurs and professionals in the process of culture’s production and distribution must be another important piece of the puzzle.

With the help of Hector Postigo and a stellar team of scholars, I’ve started a blog called Culture Digitally. Our aspiration is to develop a centerpoint for the scholarly discussion of cultural production in a digital environment. Below is my introductory post. If you’ve read this blog or my work, I’d encourage you to visit the blog.

*

Those of us who study cultural production in the digital age for a living face a number of distinct challenges: how to gain some perspective on the technologies and practices in which we are immersed; how to resist the seductive claims of  revolutions and catastrophes that new media are sure to spark; how to convince funding organizations, tenure committees, and skeptical family members that studying { YouTube // online gaming // Twitter // lolcats } is a worthwhile and scholarly pursuit.

But perhaps the most pressing challenge is simply keeping up. It’s easy, too easy, to say that technology outpaces our ability to make sense of it. This is perhaps true, in a way, though it is also somewhat true of many things, and it is also true that our thinking outpaces our technology’s ability to embody it. Still, the practical and institutional mechanisms for turning academic research and insight into a material form that can be circulated to other scholars, or to interested readers outside of academia, or to policymakers and software designers and venture capitalists and amateur filmmakers, are traditionally not swift. Publication in journals is a shockingly slow process. Trade books and monographs take 2-3 years from concept to bookshelf. White papers and institutional reports are slow-going. Collaboration between scholars adds its own time challenges. Workshops take months to organize, collaborative research stalls at the slightest hitch, conference papers are submitted months before they’ll be presented — and true interdisciplinary engagement, when it happens at all, incurs all of these delays. And all the while, the cultural phenomena, the sociological rhythms, the technological innovations, and the policy debates that we hope to investigate move on, with little concern for our efforts to grab hold of them.

When Hector Postigo and I started thinking about how to bring together the scholars we’d come to recognize as important voices in this area, and how to link them and their home disciplines into a rich conversation, we immediately began struggling with this problem. We considered the traditional approaches: inviting these scholars to contribute to an anthology, forging research collaborations that could seek external funding for innovative projects. These things may all happen. But, particularly in light of the desire to speak in a more timely way on the pressing issues we’re examining, we wanted to develop a more responsive way to share our ideas. This meant not only developing a space in which to speak and interact, but also being willing to offer our work before it’s polished to a publishable sheen. There are some models out there for doing this. Law scholars seem to be ahead of the rest of us in this regard, having moved not only most of their published work to an open, online collection at SSRN, but also embracing group blogs (see Madisonian, Balkinization, Stanford CIS, or Univ. of Chicago Law); among our colleagues we looked to, among others, Crooked Timber, the Networked Publics project, and Terra Nova as exemplars.

With the generous support of the National Science Foundation, who also sponsored the workshop that helped us initiate this collaboration, we have developed Culture Digitally to serve a number of purposes. First and foremost, it is meant to be a gathering point around which scholars who study cultural production and information technologies can think together. We come from a range of fields, including Communication, Sociology, Media Studies, Science & Technology Studies, and Anthropology. But we are connected by our research interests, an emerging area of scholarship that currently lives across, or sometimes falls between, our home disciplines. It’s our sense that this emerging discussion needs more homes, be they virtual or institutional.

On this group blog, we hope to offer thought-provoking scholarly conversations, provocations and starting points for intellectual inquiry in our field, discussions of problems and tensions in the arena of cultural production and in the scholarship addressing it, links to references vital to the scholarship we do. We hope to comment on current events, emerging cultural trends, new laws and policies, and technological innovations. We hope to deepen the discussion of new media and digital culture by bringing to it historical, comparative, and ethnographic perspective. And we hope to enliven the blog modality, by incorporating tools that allow for synchronous collaboration and creative engagements with ideas, objects, and images.

At the start, the authors of the blog will be those who took part in the initial workshop. We hope that, in the comment space beneath posts and in the twitter-verse (our Twitter hashtag is #cultd), this group of participants can expand. In addition, for those who want to contribute to the discussion more substantially, we will be looking to invite guest bloggers to join in. We hope you’ll help us move these ideas forward.

This blog is coming back to life.


This Is Where We Live from 4th Estate on Vimeo.

I’ve found myself not blogging because all I want to talk about is the election, and it seemed somehow not part of this blog. Plus it’s probably preaching to the choir, which seems a waste of energy, especially now. But if it’s going to kill my blog not to, then here we go.

There was a moment that I thought was telling in the event where Obama and McCain spoke with Pastor Rick Warren at Saddleback church last month. Warren asked them each the same question: “Does evil exist, and if it does, do we ignore it, negotiate with it, contain it, or do we defeat it?” Obama answered in a way I more or less liked, that we have to be soldiers in the fight against evil, but with a little humility about it, an awareness that much evil has been perpetrated in the name of good. McCain simply said “defeat it” – and the audience roared. Now, it was his audience more than Obama’s, for sure. But McCain’s is the wrong answer, and it’s the seductive answer. What I wanted Obama to say, and McCain for that matter, is “Are you kidding? You do everything you can. Why would you choose one tool for the greatest challenge in human existence? The reality is, you negotiate with it, you contain it, and you defeat it, and the wise man knows which when.” But right now, we still want the kneejerk reaction that we’re going to go out there and kill all the bad guys. It’s so stupid, so regressive, so naive, so dangerous.

We’re at such a desperate time, economically, internationally, culturally. I think we need someone who really understands the complexity of what the world is right now, and what America needs to be doing. I think Obama has that – it’s not about experience, as in years served; it’s a combination of (a) years served, (b) the world in which those years were served, and (c) insight. I think Obama has emerged in and of a political time in which new lessons are just beginning to be learned. And I think he has the insight to see how things are complicated, to make some important choices. Biden seems to be that as well, despite how much of his career was in an earlier political era.

I thought McCain had it too, once, I really did; but he has spun out in the last two years as a reactionary dressed as a stern realist, with a worldview that has become entirely militarized. He used to be a smart politician, with his focus on making government better, and I admired him for it. But now, and I feel bad saying this, I think the current political climate summons up his POW mindset, where the world seems an essentially dangerous place. (It is, but you can’t let that become fear or hubris or demagoguery.) Palin’s worse. She’s a product of her time, which is even more recent: a panicky, fundamentalist post-9/11 moment that lets her lean on the fear that the terrorist attacks produced and use it to trade complexity for moral certitude, even when the world speaks otherwise. She’s an unprepared, evangelical, anti-science, hyperconservative, deceitful fundamentalist. Really, how dare he — McCain has thirty years in government, plenty of time to really know who among his colleagues would be a great leader — even from his side of the aisle.

I feel like it’s long overdue for the US to take a deep breath, and accept the following facts. (a) It’s a violent world, where our enemies are elusive and dangerous, (b) it’s a complex world, where our actions, however justified, have ripple effects, (c) it’s a messy world, where there simply are no easy solutions, and (d) it’s a world-in-progress, where we can’t just drop everything and go on a revenge crusade, and forget that we’ve got to keep our society running, our economy functioning, our children learning, our society healthy, our knowledge growing, and our eyes open. McCain and Palin are exactly the two wrong answers for this moment: he’s a well-informed but unyielding Cold Warrior who urges us unrealistically to simply extinguish our foes, recently wrapped in the icky neo-con self-assuredness about good and evil; she’s an uninformed zealot who hides her extremism under an aw-shucks small-town America values pitch.
McCain-Palin is a reversal of the last ticket; it’s Cheney-Bush.

(Thanks to Gary Kamiya’s terrific Salon article for noting the Palin = Bush equation so forcefully.)
