Censorship and Content Regulation of the Internet


INTRODUCTION
HISTORY OF REGULATION
PROBLEMS OF INTERNET CENSORSHIP
METHODS OF CENSORSHIP
TRENDS
CONCLUSION

Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.

John Perry Barlow, “A Cyberspace Independence Declaration,” 1996, Electronic Frontier Foundation “Publications—John Perry Barlow” Archive.

INTRODUCTION

All countries have some form of content regulation of their traditional media. Typically, these regulations have to do with truth, honesty and taste. Less typical, but still fairly common, is content regulation on the grounds of national security and racial and religious harmony. Even in the US with its First Amendment, there are written laws and unwritten social norms to keep the media in line with public opinion and taste.1 In fact, public opinion in the US seems more conscious of these unwritten norms than elsewhere. For example, when the report on the Monica Lewinsky affair was published in the newspapers, a number of American papers carried warning labels that the report was graphic. Anecdotal evidence suggests that newspaper editors outside the US are not as sensitive to such concerns.

The internet, however, raises many thorny issues regarding these laws and norms of taste. Even if a content provider were painstakingly careful to keep its website's content truthful, honest and tasteful, that content may be illegal or blasphemous when it is accessed from another country. Perhaps the closest thing to an agreed list of information that may be restricted was drawn up in a report to the European Parliament in 1996. There, the committee accepted restrictions on access to (or censorship of) information that is “potentially harmful or illegal or can be misused as a vehicle for criminal activities” on the following grounds:

  • national security (for example, instructions on bomb-making, illegal drug production, terrorist activities)
  • protection of minors (for example, abusive forms of marketing, violence, pornography)
  • protection of human dignity (for example, incitement to racial hatred or racial discrimination)
  • economic security (for example, fraud, instructions on pirating credit cards)
  • information security (for example, malicious hacking)

  • protection of privacy (for example, unauthorized communication of personal data, electronic harassment)
  • protection of reputation (for example, libel, unlawful comparative advertising)
  • intellectual property (unauthorized distribution of copyrighted works, e.g., software or music)2

1 It should be noted that legal doctrine in the US draws a distinction between private sector censorship and government censorship. The First Amendment prohibits the latter but not the former. For the end user, however, such a distinction is artificial at best.

That list includes restricting information that violates privacy, reputation and intellectual property rights. As practiced in traditional print and broadcast media, censorship removes or deletes forbidden material before it reaches the user. With the internet, however, it is often not possible to remove material. Instead, the most frequently used methods are to deny, block or filter access. Such attempts, however, attract outcries from the internet community. Herein lies another divide, between what regulators and the internet community desire.

HISTORY OF REGULATION

Early Days3

Three successive and parallel “waves” of attempts to regulate internet content may be discerned. The first wave came in the early days of the internet, soon after it was introduced to the public. The approach was to treat the internet in the same way as the traditional mass media. In Asia, particularly where the media are more regulated, a number of governments considered the likely impact of the internet before making it publicly available. Singapore, for example, was not the first country in Southeast Asia to allow public internet access; that distinction went to its less technologically savvy neighbor, Malaysia. Singapore officials were proceeding cautiously with this new information channel. China too appears to have proceeded cautiously, making the internet available only in universities before allowing its diffusion to the wider public, at a time when many countries were already allowing public access. Vietnam declared in April 1996 that it would not allow the internet into the country, but the government has since appeared to make a U-turn in policy.4

2 “Illegal and Harmful Content on the Internet: Communication to the European Parliament, the Council, the Economic and Social Committee and the Committee of the Regions,” 1996. http://www2.echo.lu/legal/en/internet/content/communic.htm.

3 The ideas in this section have been featured in Peng Hwa Ang, “Censorship and the Internet,” Encyclopaedia of Library and Information Science, ed. Allen Kent, 475-83 (New York: Marcel Dekker, 2003).

Lest these countries appear merely single-mindedly censorial, it should be noted that some of the most creative, even aggressive, promotion of an e-lifestyle has been in Singapore and Vietnam. In the case of Vietnam, seven months after saying no to the internet, the government organized an Internet Day.5 And while Singapore was the first country in the world to develop a code of practice for website owners in 1996, it was also among the first, if not the first, to have an official government website (www.sg).

Meanwhile, in the West, governments were also attempting to apply offline ideas and rules to the internet. In Canada, there was some discussion about adapting the quota system that had successfully increased Canadian-made television content to the internet.6 A key constituency was French-speaking Canadians. But the attempt went nowhere. It is one thing to encourage content; it is quite another to mandate it.

In those early days, there were often stories about widespread pornography and credit card fraud on the net. It was as if Prometheus had brought back fire, and all that the media covered were people getting burned by it. One incident that marks this fever pitch was Time magazine's cover story of July 3, 1995, in which the publication reported that more than eighty percent of the images posted on Usenet newsgroups were pornographic.7 This claim was later discredited.8

4 Peng Hwa Ang, “How Countries are Regulating Internet Content” (lecture, Internet Society Annual Conference, Kuala Lumpur, Malaysia, June 1997). http://www.isoc.org/isoc/whatis/conferences/inet/97/proceedings/B1/B1_3.HTM (accessed September 1, 2001).

5 Ibid.

6 “Canada Eyes Internet Regulation,” Ottawa Citizen, November 15, 1996, D15.

However, the extensive media coverage, especially of pornography, created pressure on politicians to do something. As a result, laws were passed hastily. In 1996, the French Constitutional Council struck down provisions of a new Telecommunications Law that empowered the Conseil Supérieur de la Télématique to make recommendations on the types of content that were permissible.9 In the US, the Supreme Court in 1997 struck down some provisions of the Communications Decency Act,10 which had been passed more out of apprehension and confusion than any real prospect of prevention.11

In short, between 1995 and 1996, it looked as if the internet would never be regulated. Countries that were trying to do so were either failing before the courts or failing in practice. Singapore was giving it a shot but in reality, it was not really doing much. The US government was moving toward a more hands-off approach to regulation. In the conceptual space, scholars such as Post and Johnson suggested that the internet needed a new legal regime.12 Today, Post and Johnson are referred to euphemistically as “first-generation scholars.”13

7 Philip Elmer-De Witt, “On a Screen Near You: Cyberporn,” Time, July 3, 1995, 38.

8 Andrew L. Shapiro, The Control Revolution: How the Internet is Putting Individuals in Charge and Changing the World We Know, 35-38 (New York: Public Affairs/The Century Foundation, 1999).

9 Ang, “How Countries are Regulating Internet Content.”

10 Attorney General of the United States of America v. American Civil Liberties Union, et al. 117 S. Ct. 2329 138. L. Ed. 2d 874 (1997). http://www.aclu.org/court/renovacludec.html (accessed October 2, 2001).

11 Shapiro, The Control Revolution, 70.

12 David R. Johnson and David Post, “Law and Borders: The Rise of Law in Cyberspace,” Stanford Law Review 48 (1996): 1367, 1378.

13 Christoph Engel, “Organizing Co-Existence in Cyberspace: Content Regulation and Privacy Compared,” 2002, Max-Planck Project Group. Reprint No. 2002/12. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=325360.

Probably the high-water mark was John Perry Barlow's famous evangelical statement at Davos, Switzerland, at the World Economic Forum, before an audience of international and business leaders, in which he declared cyberspace to be independent of governments.14

Latter Days

While Barlow's declaration is bold, strident and visionary, it is unrealistic. If the internet is indeed a powerful medium of communication and information, then it would be in the interest of governments to step in. Surely, if the internet is indeed a medium that can do good, it can be twisted to do bad. In any event, as long as there is human interaction on the internet, some regulations will be needed.

Law enforcement, for example, has never assumed that it was impossible to regulate the internet. In July 1995, police in the UK mounted Operation Starburst, the first international internet sweep, to sift out child pornography; it led to the arrest of thirty-seven persons in several countries.15 The New York Attorney General's Office mounted a similar sweep, Operation Ripcord, in March 1996, which led to seventy-five search warrants in the state of New York alone and more than 220 referrals internationally.16

International sweeps for child pornography have since been organized. The last significant holdout was Japan, which had to pass child porn laws in May 1999 to be in line with international norms.

14 John Perry Barlow, “A Cyberspace Independence Declaration,” Davos, Switzerland, February 8, 1996. Electronic Frontier Foundation “Publications—John Perry Barlow” Archive. http://www.eff.org/Publications/John_Perry_Barlow/barlow_0296.declaration (accessed November 1, 2002). Also in WIRED 4.06, available at http://hotwired.lycos.com/wired_online/4.06/declaration (accessed April 24, 2003).

15 Y. Akdeniz, “The Regulation of Pornography and Child Pornography on the Internet,” The Journal of Information, Law and Technology 1 (1997). http://elj.warwick.ac.uk/jilt/internet/97_1akdz (accessed November 20, 2002).

16 Michael McCartney, “Computer Crime Issues,” New York State Attorney General's Office, n.d. http://www.oft.state.ny.us/security/electronic%20presentations/ag%20new.pdf (accessed December 1, 2002).

Until then, Japanese law drew no distinction between child and adult pornography.17 In a sweep called Operation Cathedral, police in fourteen OECD countries cooperated to crack several child-porn rings.18 Today, child pornography, both on- and offline, is an area that police in several countries patrol.

Australia conducted the first international internet sweep for consumer fraud in 1996. No new internet laws were passed for the sweep. Instead, the then Australian Fair Trading Office relied on existing laws. Since then, a number of countries, including the US, have joined Australia in conducting such annual sweeps. As of 2002, some thirty countries, mostly from the West, take part in such sweeps annually.19

Even the courts have taken the position that it is possible to enforce legal sanctions on the internet. In May 1998, a German court convicted the managing director of the German division of the online service company CompuServe for allowing its servers to carry porn.20 This is a regrettable example of online law enforcement. The judge apparently felt that it was technically feasible to block access by German users to some 200-odd pornographic sites, even though that action affected the workings of the entire CompuServe network outside Germany. Nevertheless, from the judge's point of view, the desired outcome of blocking access was possible. The case was later overturned on appeal, in part through reliance on a new German multimedia law passed after the conviction.21

17 “Japan: The Darker Side of Cuteness,” The Economist, May 8, 1999, 32.

18 “How Police Smashed Child Porn Club,” CNN Worldwide, February 13, 2001. http://www3.cnn.com/2001/WORLD/europe/UK/02/13/paedophile.police (accessed September 30, 2001).

19 International Consumer Protection and Enforcement Network, “International Internet Sweep Days,” 2004. http://www.icpen.org/imsn/activities.htm (accessed December 2004).

20 Alan Cowell, “Ex-CompuServe Head Sentenced in Germany,” New York Times, May 29, 1998. An unofficial translation of the case is available at http://www.cyber-rights.org/isps/somm-dec.htm (accessed September 1, 2001).

21 Edmund Andrews, “German Court Overturns Pornography Ruling Against Compuserve,” New York Times, November 18, 1999.

A French court in 2000 ordered the internet search engine Yahoo! to block French users from accessing a section of the site that auctioned Nazi memorabilia.22 Although Yahoo initially objected, it later banned the sale of Nazi and hate-related material on its site.23 To be sure, there were peculiarities in the case. First, Yahoo had used a French domain name, yahoo.fr. That gave the French court a toehold to assert its jurisdiction over Yahoo. Second, Yahoo was targeting French users through its advertising. On this point, the judge therefore found it unconvincing when Yahoo argued that it had no way to block users: if French users could be directed to the French site of Yahoo, surely Yahoo could also identify those users and deny them access to Nazi memorabilia. The judge based his findings on information from a panel of experts that included internet pioneer Vinton Cerf.24

The experts, however, did not agree with the conclusion of the judge, on ideological (the internet should be free) and pragmatic (the filtering was only about seventy percent effective anyway) grounds. The ideological ground is an article of faith that will take some persuasion to convert to the contrary. But the pragmatic ground reflects a deficient understanding of the purpose of censorship; it is not meant to be one hundred percent effective. This is a form of what economists call the “Nirvana fallacy,” i.e., the notion that something is not worth doing if it does not achieve perfect results. An argument often made against censorship is that on the internet, a user can always make an international phone call to dial up an overseas provider anyway. The argument misses the point. It has never been possible to reliably block everything, even at customs. The persistent will always find a way past the censors. The censor's goal is achieved if it is difficult for most users most of the time to access most of the material.

22 “Yahoo! Ordered to Bar the French from Nazi Items,” Wall Street Journal, November 21, 2000.

23 “Yahoo! Will Ban Hate Material and Charge Fees on Auction Sites,” Wall Street Journal, January 3, 2001.

24 Mark Ward, “Experts Question Yahoo Auction Ruling,” BBC, November 29, 2000. http://news.bbc.co.uk/1/hi/sci/tech/1046548.stm (accessed August 24, 2004).

This difference in perspective is what differentiates civil libertarians from regulators. The civil libertarians' initial premise is: Can users evade government rules? The answer, of course, is that it is possible. From this perspective, the internet seems to weaken rules or make them difficult, if not impossible, to enforce. In contrast, regulators ask: Can most people, apart from the determined, be deterred most of the time? If so, the law can be passed. This difference in attitude explains why some liberal activists feel that the internet cannot be regulated, while enforcement agencies brush such concerns aside and regulate the internet anyway.

From 2000

Around the turn of the millennium, two trends towards regulation may be discerned: self-regulation and direct regulation. While the internet industry is of the opinion that self-regulation is the best mode of regulation, the evidence has shown otherwise. The US Federal Trade Commission, after giving the industry several one-year deadlines to regulate the privacy of children using the net, began enforcing the Children's Online Privacy Protection Act in 2000.25 In Australia, the Internet Industry Association, after many attempts, emerged with a code of practice only after the federal government had passed a law that, among other things, compelled the development of the code.26 The federal law has forced some sites to move their operations out of Australia.27

Europe has shown greater interest in self-regulation. ISP associations have often developed codes to minimize their own liability as well as to self-regulate some aspects of online business. This is in contrast with the US, which is suspicious of codes because they tend to harden into law, and Asia, which lacks cohesive industry collaboration. The movement towards self-regulation in Europe has been aided by the work of the Bertelsmann Foundation (the “Foundation”).28

25 Federal Trade Commission, “New Rule will Protect Privacy of Children Online,” October 20, 1999. http://www.ftc.gov/opa/1999/9910/childfinal.htm (accessed September 30, 2001).

26 Australian Broadcasting Authority, “Online Services Content Regulation,” n.d. http://www.aba.gov.au/what/online/register_codes.htm (accessed October 8, 2001).

27 Stewart Taggart, “Down Under Smut Goes Up Over,” Wired News, February 2, 2000. http://www.wired.com/news/politics/0,1283,34043,00.html (accessed October 8, 2001).

In 1999, the Foundation, started by one of the world's largest book publishing houses, brought together a group of experts to address the issue of content filtering. The result was the launch in 2000 of the Internet Content Rating Association (ICRA) filtering platform, based on the Platform for Internet Content Selection (PICS).

ICRA, an effort with which the author has been involved since 1999, aims to empower parents with simple tools to filter the content of websites while addressing concerns about censorship.29 American civil liberty interest groups such as the American Civil Liberties Union (ACLU) and the Center for Democracy and Technology (CDT) were consulted during the development of the platform. However, more than a year after the official launch of the complete filtering tool in early 2002, fewer than 250,000 sites had labeled themselves. One major factor in the low take-up rate was the decision by the browser makers not to incorporate more sophisticated filtering tools. At first, both Microsoft and Netscape added features while engaging in the browser war to outdo each other. Among the features they added was a filter based on a simple, four-checkbox system created by the US-based Recreational Software Advisory Council for the Internet (RSACi) that filtered sites for language, nudity, sex and violence. ICRA's filter was a more sophisticated system with some sixty boxes for the webmaster to check, with shortcuts to reduce the number of clicks needed. By the time ICRA finally emerged with its filter software, the browser war was over. ICRA had to develop its own filtering plug-in, but that meant it was not as easy to use, and so the full power of the sophisticated ICRA system could not be realized.

28 Jens Waltermann and Marcel Machill, eds., Protecting Our Children on the Internet: Towards a New Culture of Responsibility (Gütersloh: Bertelsmann Foundation Publishers, 2000).

29 Civil liberty groups apparently embraced the notion of parental empowerment with a different intent. In his book Cyber Rights: Defending Free Speech in the Digital Age (New York: Times Books, 1998), 233, Mike Godwin, legal counsel for the Electronic Frontier Foundation, candidly admits that the term “parental empowerment” was intended to be a “positive frame” to counter forces pushing for legislation against pornography.

From the perspective of the author, it appears that regulators and enforcers are winning since new laws are being passed and enforced all over the world to regulate the internet. The French approach may be used in other countries. Korea has reportedly ordered its ISPs to block access to Korean-language pornographic sites hosted overseas that do not restrict access with an age-verification process.30

PROBLEMS OF INTERNET CENSORSHIP

Even with traditional media, censorship has never been one hundred percent effective one hundred percent of the time. With the internet, it is difficult even to begin to censor.

The root of the problem for censors is that the internet combines the characteristics of various forms of media (print, radio, television) and telecommunications (telephone and computers). Each of these inventions alone has contributed to an increase in the transmission and reception of information. Combined, they offer a surfeit of information for censors.

Second, the reach of the internet ranges from one-to-one (email) to one-to-many (websites) and many-to-many (Usenet groups). No other technology has had such a reach. Email is generally regarded in most countries as private communication, an “e” version of regular mail, and therefore subject to less censorship than other forms of mass communication. But email can also be used for mass mailings, which is how some discussion groups are conducted. In short, on the internet, a private communication medium has the potential to become a mass medium.

30 Adam Creed, “Korean Government Promises Action Against Porn Sites,” Newsbytes, April 11, 2001. http://www.newsbytes.com/news/01/164417.html (accessed September 1, 2001).

Theoretically, there is no reason why email cannot be read and censored; it just takes a lot of time and effort. Opening and reading email, however, may well be the most effective means of turning users away and killing the growth of the internet. In 1994, through a misunderstanding of a high-level official request, internet accounts of an access provider in Singapore were scanned for .gif files. Of 80,000 files scanned, five were found pornographic by Singapore standards and the users were warned. Although no non-.gif files were opened, users were nevertheless irate. Many expressed grave reservations about security and privacy on the internet. In the end, the access provider had to assure its users that no such scanning would occur in the future.31

Third, there is the problem of the regulatory paradigm. By combining the traits of traditional communication media and the blurring of their boundaries, the internet poses the question of who is to regulate the content and by what domestic and/or international standards.

Should the internet be treated as a postal service because it has email? Or would the capabilities of internet relay chat and voice telephony make it a telecommunications service? Are electronic newspapers considered in the same way as their print versions? Or should the availability of radio and television make the internet a broadcast medium? Because the internet is accessed via a computer, should the computer model of regulation apply?

31 Peng Hwa Ang and B. Nadarajan, “Censorship and the Internet: A Singapore Perspective,” Communications of the ACM 39(6) (1996): 72-78.

In practice, regulators bypass the question of paradigm fit by treating the internet based on its functions. That is, a web-based newspaper is likely to be treated as a newspaper, an online radio station as a radio station. It means that there is no single regulatory paradigm. But such an approach is a short-term fix because it magnifies the regulatory problem of convergence where different rules and standards apply to convergent products and services. For example, is a news website that combines a newspaper with television news to be treated as a newspaper or a television station?

The fourth problem of internet censorship is that the computer culture celebrates maximum (and sometimes anarchistic) freedom, not censorship. This culture may be traced back to the origins of the internet, which was designed to function as a communications channel even after a nuclear attack.32 The very architecture of the internet militates against censorship, which the network reads as “damage” and does what it can to correct. Dynamic re-routing ensures that if one communication link is broken, traffic can be redirected through other existing links.

As an “organization,” the internet has no central controlling body, only a voluntary council that sets technical standards. There is no one to whom, for example, a complaint about objectionable material may be made. It is, therefore, inherently resistant to censorship, in both its operating philosophy and its technical set-up.

32 S. Carr, S. Crocker and Vinton Cerf, “Host-host Communication Protocol in the ARPA Network,” in AFIPS Conference Proceedings, Spring Joint Computer Conference, 1970, 36, 589-97; Vinton Cerf and Robert E. Kahn, “A Protocol for Packet Network Intercommunication,” IEEE Transactions on Communication 22(5) (1974): 637-48. An alternative suggestion of the original intention of the internet is offered by Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (New York: Simon & Schuster, 1996).

In the eyes of its visionary pioneers, the censor-resistant internet constituted a new global community that embraced a libertarian culture of self-help and non-reliance on government. The early internet seemed to operate by a loose, informal consensus. It arose from universally accepted technical protocols for carrying electronic conversations across remote locations and times, and it gave birth to a common language, culture and norms.

No matter how system administrators at individual sites may restrict access to objectionable material, savvy users will find ways to overcome the hurdles. Sites that are banned in one country are quickly relocated to another. Sites that are blocked are bypassed through anonymous proxies. In fact, the US is developing a software program that would allow users in China to bypass the government's block.33

Fifth, the internet highlights a major legal issue of global interconnectivity: which censorship standard should be applied? This issue extends beyond the classic “What is pornography?” debate, as was first highlighted in a US case in which a bulletin board system (BBS) operator in California was convicted of delivering pornography to a resident of Tennessee.34

The internet offers a myriad of material on subjects such as drug culture, bomb-making, murder and anti-Semitism. Material that is illegal in one country and punishable with a heavy sentence may be wholly legal in another. Germany's experience with Neo-Nazis frames the problem best. In January 1996, the German phone company Deutsche Telekom blocked users of its computer network from accessing the website of Ernst Zündel, a German-born activist living in Toronto, Canada, suspected of distributing Neo-Nazi and anti-Semitic material over the internet.35 Given the history of Germany, such a response is perhaps understandable. However, in the US, several prestigious universities offered to mirror Zündel's site.36

33 “U.S. may Help Chinese Evade Internet Censorship,” New York Times, August 30, 2001.

34 USA v. Robert Alan Thomas and Carleen Thomas (1996) FED App. 0032P (6th Cir.).

Similarly, when Germany tried to block access to a magazine called Radikal, forty-seven other sites all over the world mirrored the Radikal site.37 Enforcement of this German law is very difficult, if not impossible. Any attempt at censoring the internet, therefore, also has to consider its international dimension.

Finally, the process of regulation tends to be piecemeal and almost always lags behind changes in technology. Censorship of any new medium today is most likely to come into play after the objectionable material has been disseminated. Until then, censors or regulators would not be aware of the possibility of circumvention or violation.

METHODS OF CENSORSHIP

The methods described here illustrate how it is possible to censor the internet. The basic method is first to outlaw illegal content. For example, many countries outlaw child pornography. The mere declaration of law does not bar the persistent from accessing such content. But it does mean that, should the police crack down, those who produce or possess such material will be prosecuted. This happened with Operation Avalanche in the US in 2001. The FBI passed a list of credit card numbers used to access pornographic material to the UK police, who in turn began cracking down on their offenders.38 Such a method depends on the vigilance of law enforcement. Unlike, say, a murder, where the evidence of a crime is often visible, internet offenses do not lend themselves to such ready evidence.

35 “Censuring the Censors: Azeem Azhar Looks at Hardening Attempts to Control the Internet,” Guardian, February 8, 1996.

36 Ibid.

37 Jim McClellan, “Germany Calling,” Guardian, September 25, 1996.

The second commonly used method of censoring the internet is to have the ISP block or filter content. Depending on the configuration, such a block may be one hundred percent effective against blacklisted sites. For example, if a blacklisted site is blocked at the server, users of that server will be denied access. There is no workaround, unless the content is emailed to the user.
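
The sketch below is a minimal illustration, in Python, of the kind of blacklist check a blocking proxy or ISP server might perform before forwarding a request. It is not any ISP's actual system; the hostnames, the normalization rules and the responses are illustrative assumptions only.

```python
# A minimal sketch (not any ISP's actual system) of how a blocking server
# might consult a blacklist before forwarding a request on behalf of a user.

from urllib.parse import urlparse

# Hypothetical blacklist of hostnames a regulator has ordered blocked.
BLACKLIST = {"banned.example.com", "another-banned.example.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, is blacklisted."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Check the host and every parent domain, so that sub.banned.example.com
    # is caught by an entry for banned.example.com.
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & BLACKLIST)

def handle_request(url: str) -> str:
    """Decide, as a proxy would, whether to fetch or refuse a request."""
    if is_blocked(url):
        return "403 Forbidden: access to this site is not permitted"
    return f"fetching {url} on behalf of the user"  # forward as usual

if __name__ == "__main__":
    print(handle_request("http://banned.example.com/page"))
    print(handle_request("http://news.example.org/"))
```

Because every subscriber request passes through such a server, a match on the list denies access for all of that server's users, which is the point made above.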

Other ways to regulate users include licensing the equipment, as in Myanmar, or licensing the users themselves, as in China.39 Keeping in mind that the ISP is key, one way that governments try to control access is to control the entry of ISPs into the market.

Access Control

Negroponte has said that bits and bytes do not stop at borders.40 Authoritarian regimes, however, can stop bits and bytes crossing into and out of their borders. A well-executed and widely read 2001 study of the internet in China and Cuba illustrated this point: the internet does not automatically disempower authoritarian governments.41 In Myanmar, modems, even those built into laptops, have to be licensed. Those who use unlicensed modems have been jailed.42

38 “Operation Avalanche: Tracking Child Porn,” BBC News, 2002. http://news.bbc.co.uk/1/hi/uk/2445065.stm (accessed August 24, 2004).

39 Ang, “How Countries are Regulating Internet Content.”

40 Nicholas Negroponte, Being Digital (Hodder & Stoughton, 1995).

41 Shanthi Kalathil and Taylor C. Boas, “The Internet and State Control in Authoritarian Regimes: China, Cuba, and the Counterrevolution,” Carnegie Endowment for International Peace 2001. http://www.ceip.org/files/Publications/wp21.asp (accessed October 8, 2001).

42 Ted Bardacke, “High Price to Pay for Internet Use in Burma,” Financial Times, October 5, 1996.

Only slightly less restrictive is the use of centralized access, typically through some form of state control. A number of states in the Middle East, such as Syria, Iran and Iraq, use this approach. The fear of being caught accessing unauthorized material is the deterrent. However, the limited means of access discourage competition, which slows the spread of internet usage.43

China regulates access by compelling internet subscribers to register with the police.44 Those who access the net at cybercafés have to show some identification before being able to do so. Lest the approach appear too wild, a similar notion of registering users has been proposed by Robert Cailliau,45 who co-developed the World Wide Web with Tim Berners-Lee.

Blocking using a proxy server, as in Singapore and the UAE, is less effective. There are workarounds: locate a public proxy server, substitute it for the local server, and full access is restored. In Singapore's case, the regulators say they block only one hundred “high-traffic” pornographic sites for home subscription plans.46 The process of having to look up the blacklist does slow down access a little, if imperceptibly.47 Such centralized controls have limited capabilities; at most, several hundred pornographic sites can be blocked in the face of hundreds of thousands of others.

In the quest to block undesirable sites, companies have sprung up to offer regularly updated blacklists of pornographic sites. The named sites are then blocked from user access. Such family-friendly internet access plans are increasingly available throughout the world. However, because of the costs of installing and maintaining the list, subscribers have to pay for such filtering plans.

43 Leonard R. Sussman, Censor Dot Gov: The Internet and Press Freedom 2000 (Freedom House, 2000). http://www.freedomhouse.org/pfs2000/sussman.html (accessed October 8, 2001).

44 Ang, “How Countries are Regulating Internet Content.”

45 Reuters, “Web Co-inventor Backs Licensing,” ZDNet, November 29, 1999. http://news.zdnet.co.uk/story/0,,s2075495,00.html (accessed October 8, 2001).

46 Singapore Broadcasting Authority, “SBA's Approach to the Internet,” June 2001. http://www.sba.gov.sg/work/sba/internet.nsf/ourapproach/1 (accessed October 8, 2001).

47 Ming Chien Tong, “Device to Block Out Blacklisted Web Sites,” The Straits Times, July 20, 1996.

Using Technology

Technology that has enabled the spread of information has also been used to censor it. However, the results have been mixed at best.

Probably the first program to delete content from the internet was developed by medical researcher Richard DePew in 1993. Annoyed by anonymous messages on Usenet groups, he developed a program he called Automated Retroactive Minimal Moderation (ARMM) to delete them. The program initially failed. Several versions later, when it succeeded, it disrupted the workings of other connected computers.48

Many Usenet readers disagreed with DePew's deletions because they were deprived of the messages, and for that reason some called it censorship. Since then, DePew has stopped using ARMM. Instead, he developed a program called a bincancelbot that removes inappropriately placed binary files from Usenet groups. Such binary files tend to be large, usually containing programs or images, and are often off-topic. This time, the reaction was more muted.49

A similar cancelbot was first used against a law firm, Canter and Siegel, who in 1994 sent off-topic advertisements hawking their legal services to more than 1,000 Usenet groups. A 25-year-old Norwegian programmer, Arnt Gulbrandsen, developed a cancelbot that hunted down and deleted messages that were sent by the firm.50

48 David L. Wilson, “A Computer Program that Can Censor Electronic Messages Sets Off a Furore,” Chronicle of Higher Education, May 12, 1993, A21.

49 M. Frauenfelder, “Usenet's Etiquette-Enforcement Agency,” Wired News, 1997. http://www.wired.com/news/topstories/0,1287,5262,00.html (accessed October 8, 2001).

50 Peter Lewis, “Censors Become a Force on Cyberspace Frontier,” New York Times, June 29, 1994, A1.

While cancelbots are effective, they have an uncomfortable feature: No matter how good the intentions, cancelbots are canceling someone else's posts. Cancel messages enter the Usenet through security holes in poorly-managed news servers. There is always room for abuse.

In September 1996, a computer user in the US sent out a cancelbot that removed 25,000 messages from the Usenet.51 In 2002, the Church of Scientology used cancel messages to silence its critics, especially its former members, by deleting dissenting posts in the Usenet newsgroup alt.religion.scientology. Instead of canceling messages that threatened to destroy the Usenet as a discussion medium, such as spam, the Church's supporters, whom critics call “clams,” sent out forged cancel messages to erase posts that criticized the Church. These forged cancels, unlike those of the Cancelmoose (an anonymous canceler of spam largely tolerated by the Usenet community), were issued without the consent of the Usenet sites from which they appeared to be posted; the forgers snuck through site security to post them.
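
A rough sketch may help show why such forgeries were feasible. Under the classic Usenet convention (RFC 1036), a cancel is simply another article whose Control: header names the message-ID to be deleted, and many servers either honored it outright or applied only a string comparison of sender headers, which a forger can copy. The Python sketch below is a simplified assumption of that weak check, not the behavior of any particular news server; the newsgroup, addresses and message-IDs are hypothetical.

```python
# Simplified illustration of a forged Usenet cancel. Nothing in the classic
# convention cryptographically ties the cancel to the original poster.

def make_cancel(target_message_id: str, claimed_sender: str) -> dict:
    """Build the headers of a cancel article aimed at an existing post."""
    return {
        "From": claimed_sender,                      # nothing verifies this
        "Newsgroups": "alt.religion.scientology",
        "Subject": f"cmsg cancel {target_message_id}",
        "Control": f"cancel {target_message_id}",    # asks servers to delete
        "Message-ID": "<cancel.0001@example.invalid>",
    }

def naive_server_accepts(cancel: dict, original_from: str) -> bool:
    """The weak check some servers applied: sender strings must match."""
    return cancel["From"] == original_from

if __name__ == "__main__":
    original_from = "critic@example.org"
    # A forger simply claims to be the original poster.
    forged = make_cancel("<post.123@example.org>", claimed_sender=original_from)
    print("server deletes the post:", naive_server_accepts(forged, original_from))
```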

Tracking down the source of these forged cancels has involved Usenet defenders in some serious detective work. As recently as the first week of July 2002, an MIT professor traced the source of some Scientology forged cancels to University College Dublin in Ireland.52

Filtering

Of all the technological means to censor the internet, the most widespread and least objectionable is filtering by the end-user. The goal of filtering is to empower parents to block undesirable content from children, the only group for which censorship is internationally accepted. Filtering software programs have been endorsed, at one time or another, by industry and governments in Europe, Australia and the US.

51 “Cancelbot Attacks Usenet,” Wall Street Journal, September 27, 1996, A13A.

52 Charles A. Gimon, “The Battle for Usenet.” http://www.skypoint.com/~gimonca/usewar.html (accessed October 3, 2002).

Such programs have improved in sophistication. Initially, some of the software blocked sites merely for having words such as “breast” on their pages. Civil libertarians were concerned that this meant blocking educational or medical sites. Today's filtering programs are more context-sensitive. Nevertheless, they still suffer from two inherent limitations. First, such filtering software has to balance accuracy against comprehensiveness. The more comprehensive the filter (the more offensive words it recognizes), the less accurate it will be in parsing context; conversely, filtering software that is accurate in distinguishing context will be less comprehensive. The second limitation is that the filter software needs to be updated frequently.
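
A toy example may make this trade-off concrete. The Python sketch below contrasts a purely keyword-based filter with one that carves out context exceptions; the word lists are illustrative assumptions, not any vendor's actual lists.

```python
# A toy illustration of the accuracy/comprehensiveness trade-off: a keyword
# filter is comprehensive but blocks innocent medical text, while context
# exceptions improve accuracy only for the phrases it happens to recognize.

BLOCKED_WORDS = {"breast", "nude"}                      # crude keyword list
MEDICAL_CONTEXTS = {"breast cancer", "breast feeding"}  # context exceptions

def naive_filter(text: str) -> bool:
    """Block if any keyword appears anywhere, regardless of context."""
    words = text.lower().split()
    return any(w.strip(".,") in BLOCKED_WORDS for w in words)

def context_filter(text: str) -> bool:
    """Block on keywords, but let recognized medical phrases through."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in MEDICAL_CONTEXTS):
        return False            # accurate here, but only for listed phrases
    return naive_filter(lowered)

if __name__ == "__main__":
    page = "Information on breast cancer screening for patients."
    print("naive filter blocks page:  ", naive_filter(page))    # True
    print("context filter blocks page:", context_filter(page))  # False
```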

At the time of writing, a group called the Internet Content Rating Association is promoting a self-labeling filtering system originally based on the Platform for Internet Content Selection (PICS) standard.53 ICRA is taking over the functions of the defunct RSACi, the Recreational Software Advisory Council's self-rating content-labeling advisory system for the internet launched in 1996. RSACi folded in 1999 and the intellectual property rights were bought by ICRA.

The new labeling system, developed with funding from the EU and the Bertelsmann Foundation, improves on RSACi by being more international in scope and by allowing for context. In RSACi, for example, content was labeled by the provider on four categories: violence, nudity, sex and language. Each category had five possible scores, from 0 to 4, with each score explicitly defined.54 It was then up to the receiver of the material to set the levels at which material would be received.

53 The author is a member of the ICRA Board and was part of the group that developed it from 1999. In 2004, ICRA moved to adopt the RDF (Resource Description Framework)/XML (Extensible Markup Language) standard for its label. See Phil Archer, “Labelling Work Group Final Report,” Internet Content Rating Association, December 10, 2004. http://www.icra.org/archive/labellingWG/finalreport/ (accessed December 30, 2004).

54 Recreational Software Advisory Council, 1996. http://www.rsac.org (accessed 1997; site has since changed).
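
The labeling scheme just described can be illustrated with a short sketch: the provider attaches a score from 0 to 4 in each category, and the receiving software compares each score against thresholds the parent has set. The Python sketch below is a simplified assumption of that comparison, not the actual PICS, RSACi or ICRA implementation; the particular scores and thresholds are invented for illustration.

```python
# Simplified sketch of provider labels checked against parental thresholds.

SITE_LABEL = {"violence": 0, "nudity": 2, "sex": 1, "language": 3}

# Thresholds set by the parent: the highest score still allowed per category.
PARENTAL_LIMITS = {"violence": 1, "nudity": 1, "sex": 0, "language": 2}

def is_allowed(label: dict, limits: dict) -> bool:
    """Allow the page only if every labeled score is within its limit."""
    return all(label.get(category, 0) <= limit
               for category, limit in limits.items())

if __name__ == "__main__":
    if is_allowed(SITE_LABEL, PARENTAL_LIMITS):
        print("page displayed")
    else:
        print("page blocked by the filter settings")   # this example blocks
```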

Earlier, the author had predicted that RSACi would not be widely adopted because “although RSACi viewed itself as value-neutral, its values are based on those in the US. Anti-Semitic, race-hate and religiously offensive speech, for example, are not rated.”55 As it turned out, the author was right (RSACi was not widely adopted) but for the wrong reasons.

Initially, the uptake of RSACi was strong, helped by positive press coverage. The drivers behind ICRA had invited civil libertarians for consultation and hired Professor Jack Balkin to recommend the architecture behind the filtering. Balkin, a law professor at Yale, had taught some of the legal counsel representing the civil liberty organizations.56 Then the ACLU, in its report “Is Cyberspace Burning?”, said that such efforts could be used by governments for censorship. In fairness, the report itself concluded that filtering and rating were not torching free speech in cyberspace.57 Nevertheless, the damage had been done, and press coverage of filtering and rating turned negative after that. The rate at which sites rated themselves declined, and RSACi never recovered.

55 Peng Hwa Ang, “Censorship and the Internet,” in Encyclopaedia of Library and Information Science, ed., Allen Kent, Vol. 65 Supplement 28, 12-22. (New York: Marcel Dekker, Inc., 1999).

56 Lawrence Lessig, “Tyranny in the Infrastructure,” Wired 5.07, July 1997, 96; Simson Garfinkel, “Good Clean PICS: The Most Effective Censorship Technology the Net has Ever Seen May Already be Installed on Your Desktop,” HotWired, February 5, 1997, http://www.hotwired.com/packet/garfinkel/97/05/index2a.html. See also Lawrence Lessig, “What Things Regulate Speech: CDA 2.0 vs. Filtering,” Jurimetrics Journal 38 (1998): 629; Joshua Micah Marshall, “Will Free Speech Get Tangled in the Net?” American Prospect 36 (Jan./Feb. 1998): 46, contending that content filtering makes the majority's bullying of dissenters into silence even more feasible than in the past.

57 American Civil Liberties Union, “Fahrenheit 451.2: Is Cyberspace Burning? How Rating and Blocking Proposals May Torch Free Speech on the Internet,” http://www.aclu.org/issues/cyber/burning.html (accessed October 8, 2001).

Civil liberty groups have made an impact on policy decision-making with regard to filtering. For example, the blacklists created by filtering companies are generally regarded as confidential trade secrets; a company that had a blacklist of pornographic sites could go into the filtering business. However, in the US, the Digital Millennium Copyright Act specifically allows the reverse engineering of filtering software to retrieve the blacklist. In effect, this has discouraged the development of such blacklists.

Passing Laws

That laws in the US and France have been struck down because they conflict with freedom of expression does not mean that it is not legally possible to censor the internet. What it means is that legislators cannot simply rush laws through for political expedience. Initially, perhaps because the medium was fairly new, these laws appear not to have been well thought out.

How successfully existing laws can be applied to the internet depends on how well an analogy can be drawn to the internet. In Singapore, for example, websites of religious and political organizations, as well as online newspapers produced locally, have to be registered.58 Although the law appears novel, it is essentially an adaptation of existing media laws in Singapore.

In the EU, soon after the publication of the EU paper “Illegal and Harmful Content on the Internet,”59 ISPs were, for a time, fearful of possible legislation that would make them liable for content that they had not created. Since then, internet-specific immunity provisions have been written in a number of countries (Germany, the US, Singapore, India, Bermuda, France, Australia) so that intermediaries who do not originate content are not held liable.

58 Broadcasting (Class Licence) Notification under the Broadcasting Act (Cap. 28, 1996 Ed.), section 9. http://www.mda.gov.sg/medium/internet/i_classlicence.html (accessed December 30, 2004).

59 “Illegal and Harmful Content on the Internet,” Communication to the European Parliament, the Council, the Economic and Social Committee and the Committee of the Regions, 1996. http://www2.echo.lu/legal/en/internet/content/communic.htm.

Self-regulation by Industry

Seeing how some governments are attempting to regulate the internet, the industry in some places has developed codes of conduct to address objectionable content. Such codes have typically been developed because of pending legislation. In Europe, a slew of such codes has been sparked off by the possibility of EU-wide internet content laws.

In November 1999, the Association of Internet Hotline Providers for Europe (INHOPE) was created to facilitate cooperation among the various European hotlines. In countries where INHOPE has links, any member of the public can phone a hotline number and report illegal content. The hotline will then inform its counterpart in the country where the illegal content is hosted to take action. INHOPE in effect is acting as a monitoring mechanism for internet regulations in the participating European countries.

Codes require the force of law to be effective. It is instructive that the European INHOPE is targeted at illegal content: it uses the force of law, and existing law at that, to remove objectionable content.

TRENDS

Overall, the internet will make countries freer.60 Singapore, which had the dubious honor of an internationally publicized code on content, in practice places minimal restrictions on internet access. In China, the education network has less censorship. In some respects, it is even freer than some US universities, as the Chinese academic network does not even block Napster.61

60 Peng Hwa Ang, “Why the Internet Will Make Asia Freer,” Harvard Asia Quarterly V(3) (2001): 48. http://www.fas.harvard.edu/~asiactr/haq/index.htm (accessed September 1, 2001).

Rational governments are keenly aware of the cost of censorship. Take the example of Singapore, which bills itself as a wired island. Over the fifteen-year period from 1978 to 1993, before the advent of the internet, the amount of material the censors had to vet increased fivefold, from about 5,500 in 1978 to more than 25,000 in 1993.62 Theoretically, it is possible to hire more censors. But for developing countries, hiring more censors for the internet is unwise, as this diverts valuable computer-literate resources to a non-productive job.

Some form of censorship, perhaps better phrased as “content regulation,” of the internet is here to stay. What is more salient is the degree and form of censorship.

Why governments would want to exercise censorship is a matter of their country's history and culture. There are the classic cases of Germany's memory-searing record of anti-Semitism, Singapore's history of race riots in the mid-1960s and South Korea's fear of a North Korean invasion. After terrorists attacked the World Trade Center in New York and the Pentagon near Washington, DC, Frenchman Smain Bedrouni set up an internet site (stcom.net) applauding the attacks and urging Muslims to fight a holy war. He was arrested in France, where endorsing suicide attacks intended to kill others is an offense. Interestingly enough, the news report added that since the September 11 attacks, ISPs had been on the lookout for similar sites that incited violence.63 In short, acts of censorship that may seem overzealous cannot and should not be blithely dismissed as mere violations of free speech.

61 Ibid.

62 Peng Hwa Ang and Berlinda Nadarajan, “Censorship and the Internet: A Singapore Perspective,” CACM 39(6) (1996): 72-78.

63 Reuters English News Service, “Frenchman Probed for Website Applauding US Attacks,” September 20, 2001.

Censorship tends to be more tolerated in times of crisis and war. There is evidence of greater censorship and other retrenchments of civil liberties in response to September 11. The freedom and anarchy that the internet vanguard had hoped for are likely to be curtailed (temporarily at least) in the West.

In some ways, censorship of the internet is more severe than censorship of traditional media. Often, censorship is through a blocking mechanism, which is prior restraint. Legal doctrine views this as a more severe form of censorship than restriction after publication or broadcast.

Censorship affects all internet users. Because the internet is a global, interconnected communication and information system, restrictions on that communication anywhere in the world will hurt others in the network. Any internet user can point out the harm and limitations of censorship. But it should be done while respecting the history and culture of each country and node.

A developing area of censorship is copyright.64 In fact, copyright has already been used as an excuse for censorship in earlier efforts to restrict information. There is very little debate on this issue even as rights holders extend their rights at the expense of society at large.

CONCLUSION

The future of freedom on the net is bright. Freedom House's study of net freedom concludes that the large number of users alone will make it hard for governments to censor, and that this in turn creates more space for public debate.65 Free speech has either won or will win any battle censors may put up on the internet.

64 Lawrence Lessig, “Copyright Law in Age of Digital Networks.” http://technetcast.ddj.com/tnc_play_stream.html?stream_id=517 (accessed September 30, 2001).

The one area that virtually all countries agree to censor is material to which the young are exposed. The US has gone so far as to impose laws, such as the Children's Online Privacy Protection Act, that have closed down websites.

Other areas that developed countries are prepared to see regulated are child pornography and misrepresentations to consumers that amount to consumer fraud. Beyond these areas, censorship laws become murky and harder to defend. It is generally accepted that countries have a right to their own cultural identity, and certainly history plays a part. But not everyone agrees on how that culture and history should influence issues pertaining to the internet. Hence, the debate over censorship will continue, especially on the internet.

65 Leonard R. Sussman, Censor Dot Gov.
