How Facebook Is Helping U.S. Courts to Recognize Free Speech in Digital Media
Jennifer Petersen / University of Virginia
On June 16, the Supreme Court agreed to hear a case regarding threats made on Facebook, one that commentators predict will help clarify individuals’ speech rights on the platform as well as the fuzzy legal category of “true threats.”1 The case hinges on whether threats made over social media are as threatening as those made through other media or in person, a question with difficult gendered dimensions, as Amanda Hess outlines so well. It will also hinge on how the Court understands communication in and through the social media platform.
To get an idea of the types of questions the Court will be dancing around in deciding whether Facebook threats are true threats or free speech, it might be helpful to look at how the law has been applied to computer-mediated communication. To date, these decisions are contradictory, betraying a deep uncertainty about what constitutes speech when humans and computers come together.
It would make sense to start with what the courts have had to say about Facebook so far. Most pertinent2 here is the fall 2013 case in which a federal appeals court ruled that a Facebook like was protected speech under the First Amendment. The decision, and the way that it compares to other recent decisions concerning computer-mediated communication, showcases how the law is and is not thinking about such communication.
In Bland v. Roberts (2013), Daniel Carter, an employee of a Virginia sheriff’s office, liked the incumbent sheriff’s opponent during an election and was subsequently fired (along with several others who had done the same). He sued, asserting that liking the opponent was protected by the First Amendment. At issue was whether the simple act of clicking a button, to accept one of a pre-constituted set of choices set by the Facebook interface, actually showed significant thought on the part of the clicker. Could such a simple action be taken as an expression of an original thought? An outward expression of the clicker’s internal mental state?
Because this click of a button “literally cause[d] to be published the statement that the user ‘likes’ something, which is itself a substantial statement,”3 the court said yes. Clicking like on Facebook is speech in the same way that displaying a political yard sign is. The decision argued that it was insignificant whether the user had typed the message himself or clicked the button, causing a pre-determined message (authored by Facebook) to appear. This reasoning differs substantially from that employed in earlier cases and raises a compelling question about whose speech a like truly is: that of Facebook, Inc. or that of the user.
In earlier cases, courts had argued that the click of a button does not amount to substantive speech because it does not require creativity or any original intellectual effort, and therefore does not bear the imprint of individuality implied by speech. In Commodity Futures Trading Commission v. Vartuli (2000), the appeals court argued that a program dispensing instructions to buy or sell currency futures was not advice, or speech, but closer to action, and thus not immune from prosecution for consumer fraud. To make this argument, the judges had to say that the user’s actions in response to the program’s instructions, whether clicking a button or making a call, were merely mechanical. Quite literally, the user was figured as a cog in the machine: “the fact that the system used words as triggers and a human being as a conduit, rather than programming commands as triggers and semiconductors as a conduit, appears to us to be irrelevant for purposes of this analysis.”4 And in Universal Studios v. Corley (2001), the court similarly asserted that a click of the mouse was not sufficient action to denote the operation of a human will, or actual thought.
So why is clicking “like” on Facebook a clear expression of human will and thought when elsewhere in the law clicking a button or key is figured as an extension of the machine (or program)?
A prosaic answer would be that the way legal practitioners render communication is a function of the social relations they are attempting to protect or regulate. The immediate context suggests specific, and generally limited, lines of communication, at least when it comes to computer-mediated communication. In both Corley and Vartuli, the fact that the programs in question ran afoul of the law – specifically, law concerning property (intellectual property and consumer protections, respectively) – kept the legal practitioners from looking at the communicative capacities of the programs themselves, or at how programs might communicate through functionality.5 Attending to those capacities would not necessarily have changed the outcomes of the cases. It would, however, have changed the way the courts defined and reasoned about them. The immediate context directed the legal practitioners to focus on the relations between programmers and the content industry in the first case, and between a software company and its users in the second. In both cases, the relationships imagined are commercial and instrumental more than social or political. The clearly social and communicative use of the like button on Facebook, on the other hand, shows how a rather simple function (clicking a button) can take on expressive contours.
A more conceptual and perhaps tendentious answer is that the law (to use what is here a useful reification) has a hard time figuring human-computer interactions. These interactions often do not fit well within the conception of individuals as autonomous agents at the heart of the U.S. legal system. A click of a key does not register the same trace of the unique individual mind or creative spark as does the written or spoken word. This trace is prized, if not fetishized, in both speech law and intellectual property law. The questions dodged within the Facebook “like” case are in fact significant within the concerns and hierarchies of the law. They are also ones that may expose its limits. What we really ought to be asking about Facebook likes is not whether they are expressions, but whose expressions they are. To what extent is clicking on the like button in an environment owned and designed by Facebook, Inc. the speech of the user, and to what extent is it the speech of the corporate owner? The interface defines a limited set of actions on the platform. Any use of Facebook falls within parameters defined by Facebook, Inc. Is it more appropriate to say likes are the thoughts, values, and ideas of the user, of Facebook, or of some chimera of the two?
If these questions sound more like a parable of agency in late capitalism than like speech doctrine, this is because they get at the difficult questions of freedom and expression that privately owned digital spaces pose. These questions move away from what sort of conduit Facebook is and toward questions of how relationships are governed through it. They move us toward consideration of how these relationships promote, hinder, or bypass altogether the goals of free speech law.
These questions, and the cases that propel them, have a wide resonance. Media studies scholars, though, should pay particular attention to the way these questions are articulated and answered within decisions regarding new communication technologies. Legal decisions on the status of new media in First Amendment law shape how users interact with these media, as well as how the media are developed and deployed. The rationales and definitions employed in these decisions often miss the messy ways that communication happens, and certainly the variation in and among audiences. The definitions employed, no matter how partial, often produce what they purport to describe, as Lee Grieveson shows in his history of how legal definitions of early cinema as entertainment helped to shape the industry in those terms.6 The language of these decisions matters. In it, these technologies are defined and their parameters as media institutionalized.
Watch carefully. It may be an excellent show. Cases involving new technology that complicate ideas of authorship or unitary origination of expression also threaten, or promise, depending on where you stand, to expose the limits of the law. The difficulty of applying existing speech law to new communication technologies may make visible the fault lines between 18th-century conceptions of the autonomous subject and messier 21st-century ideas of the subject as a decentered one whose agency often seems mediated or distributed via technological and economic systems.
Image Credits:
1. Facebook’s “Like” Thumb
2. Political Yard Signs
3. DeCSS Code Sheet
Please feel free to comment.
1. True threats are not protected speech under the First Amendment.
2. Many of the cases that have grabbed headlines so far have been more about labor rights than the First Amendment (e.g., cases limiting the restrictions employers can place on workers’ speech).
3. Bland v. Roberts (2013), http://www.ca4.uscourts.gov/Opinions/Published/121671.P.pdf, p. 39.
4. CFTC v. Vartuli (2000), http://caselaw.findlaw.com/us-2nd-circuit/1204265.html.
5. Jennifer Petersen, “Is Code Speech? Law and the Expressivity of New Media,” New Media & Society, September 25, 2013, DOI: 10.1177/1461444813504276. See also Matt Ratto, “Embedded Technical Expression: Code and the Leveraging of Functionality,” The Information Society 21, no. 3 (2005): 205-213.
6. Lee Grieveson, Policing Cinema: Movies and Censorship in Early-Twentieth-Century America (Berkeley: University of California Press, 2004).