At least 80 people died and more than 500 went missing in the massive forest fire that swept through California in November 2018. Even though local police and authorities said the fire was caused by damaged power lines, a conspiracy theory that contradicted the official findings began to spread quickly. Conspiracy theorists claimed that a laser beam fired from a laser-equipped airplane had started the fire, and YouTube videos promoting the theory used manipulated photos and images from unrelated events as supposed evidence. Despite the serious lack of evidence, videos tagged with keywords such as “Laser Beam” and “The conspiracy of 2018” heated up YouTube in late 2018. How can we regulate such videos on YouTube?
YouTube is the world’s largest video-sharing service, operated by Google, that allows users to upload, watch, and share videos. According to MerchDope, eight out of ten people aged 18 to 49 watch YouTube in an average month, so the influence of YouTube videos is growing ever more powerful throughout the world. While many YouTube videos have a positive influence on people, there are also myriads of provocative videos created simply to attract more viewers, and these often contain false information and hate speech that lead to hostility, controversy, and even violent demonstrations.
Not long ago, people in South Korea began using the words “hannamchung” (an expression demeaning South Korean men) and “kkolpemi” (an expression demeaning feminists) to express hatred for the opposite side on YouTube and many other social media platforms. This problem is not confined to Korea; it appears in many other countries with large numbers of YouTube and social media users. In the West, hate speech and conspiracy theories spread through YouTube videos are pushing too many people into right-wing groups or further encouraging violent demonstrations. According to the BBC, some analysts said that the spread of fake news surrounding Democratic presidential candidate Hillary Clinton in the 2016 U.S. presidential election even affected the result of the election. As problems arise in many countries, there is increasing pressure on YouTube to regulate video clips. However, many analysts argue that regulating not only falsehoods but also hate speech will not be easy because of the possibility of infringing on freedom of expression.
According to Google’s 2018 Transparency Report, YouTube deleted 7.84 million harmful videos and blocked 1.66 million channel accounts between July and September 2018. Of the deleted videos, 72.2 percent were spam, videos designed to induce clicks. Child abuse videos were the second largest category at 10.2 percent, followed by pornography at 9.9 percent. However, the breakdown of reported videos differs from that of deleted videos. Among all reported videos, 27.7 percent were spam, followed by pornography at 25.2 percent. The third most reported type was “extreme videos” (17.7 percent), which contained hate speech. Yet of the 7,432,000 reports for extreme videos, only about 16,000 videos were actually deleted, roughly 0.2 percent. Citing these results, ABC News pointed out that YouTube’s blocking and deletion algorithms cannot digest the large volume of videos that users create. In fact, since around 2010, when YouTube started to spread in earnest, the company has imposed rules on pornographic and violent video clips, but it has failed to respond to newer problems such as hate speech and conspiracy theories. Experts also point out that YouTube’s recommendation algorithm, which recommends videos based on a user’s activity, has contributed to its exploitation by ultra-rightist organizations and the distribution of false information. The Washington Post reported that extremists are exploiting the recommendation algorithm to plant their own ideas in YouTube users’ minds. It is clear that YouTube should take action to regulate harmful videos in order to prevent further issues.
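To make the gap concrete, here is a minimal arithmetic sketch using only the figures cited above (the variable names are mine, not the report’s):

```python
# Figures cited above from Google's 2018 Transparency Report (Jul.-Sep. 2018).
reported_extreme = 7_432_000   # user reports flagging "extreme" videos
deleted_extreme = 16_000       # extreme videos actually removed

removal_rate = deleted_extreme / reported_extreme
print(f"Removal rate for reported extreme videos: {removal_rate:.2%}")
# Roughly 0.2%: the overwhelming majority of flagged extreme videos stay up.
```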
Britain, France, and Germany also recognize hate speech and conspiracy theories on YouTube as problems and are pursuing countermeasures. For instance, a U.K. YouTube channel stirred controversy by merging an image of Theresa May, then consumed by the Brexit negotiations, with the creature Gollum from the movie ‘The Lord of the Rings.’ In France, yellow-vest protesters opposed to the recent fuel tax increase criticized President Emmanuel Macron by publishing videos mocking him. This is why European governments are pushing to regulate YouTube videos that express excessive hatred or slander particular people.
However, regulating harmful videos is not as easy as people think. Despite the large number of harmful videos circulating in Europe, European countries are hesitant to push ahead with regulation because they do not want to suppress freedom of expression. In Germany, for instance, the law on social networking services applies to platforms with more than 2 million users, including YouTube, Twitter, and Facebook, and focuses on preventing hate speech. After the law was proposed, however, the German public opposed it, fearing that it would infringe upon freedom of expression. Kathryn Archer, a social media expert at Murdoch University, said that the videos blocked so far are just the tip of the iceberg and that it is time to admit the problem. Overall, Archer noted, the problem with YouTube begins with the fact that non-specialists can post videos. A further technical difficulty is that while relatively fast monitoring is possible on text-oriented communication platforms such as Facebook and Twitter, the imagery and symbolism of videos on YouTube are much harder to monitor. That is why YouTube has limits in finding hate speech quickly.
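To see why text is easier to police than video, consider a toy sketch (purely illustrative; no platform actually filters this simplistically):

```python
import re

# Hypothetical blocklist containing the slurs mentioned earlier in this essay.
BLOCKLIST = {"hannamchung", "kkolpemi"}

def flag_text(comment: str) -> bool:
    """Flag a comment if it contains any blocklisted word."""
    words = set(re.findall(r"\w+", comment.lower()))
    return bool(words & BLOCKLIST)

print(flag_text("typical kkolpemi nonsense"))  # True: a plain string scan suffices

# A video offers no such string to scan: the same slur can appear as speech,
# on-screen text, or a visual symbol, each demanding separate and slower analysis.
```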
Regulating YouTube videos may not only suppress freedom of expression but also lower the quality of the service that platforms offer. While each country has its own legal and social framework for freedom of expression, privacy, and copyright, IT companies providing services across borders are reluctant to tailor their services to different national standards, and not because of technical difficulty. Rather, accommodating requests for censorship or data submission from one country would invite endless requests from other countries, which would compromise the service and pose a serious threat to the company’s credibility. Although it is difficult to apply national regulations to global internet services, and difficult for global companies to take joint action against countries like China, the environment surrounding internet regulation is changing significantly these days. Companies that provide internet services to the world, such as Google, Apple, Facebook, and Twitter, pursue freedom of expression and the use of diverse information for more business opportunities, but they continue to meet new regulatory issues in the process of expanding their services. The balance between regulation and freedom on the internet is also becoming more complex and layered.
At its core, YouTube is a search service: its objective is to surface the most relevant video information in response to what a user enters. It is also built to keep users on YouTube longer, and the recommendation algorithm behind ‘Up next’ plays that role. The algorithm takes viewing history into account and is designed to model the relationships between users and content, and between users themselves. In particular, it weighs total viewing time to understand the relationship between a user and the content: if users stay and watch a video for a long time, that video is recommended more prominently. That is why YouTube constantly emphasizes viewing time. Yet here lies a trap: not all videos with long viewing times have reliable content, and that may be why YouTube’s algorithms are so vulnerable to spreading false information. If government regulation of such videos is impossible, a comprehensive, long-term plan is needed to solve these problems.
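As a rough illustration of that trap, here is a hypothetical watch-time-weighted ranker (a deliberate simplification under assumed inputs; YouTube’s real system is far more elaborate and not public):

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    avg_watch_minutes: float  # how long viewers typically stay
    relevance: float          # assumed match to the user's query/history, 0..1
    reliable: bool            # fact-checked or not; the ranker never sees this

def score(v: Video) -> float:
    # Relevance multiplied by expected watch time; reliability plays no part.
    return v.relevance * v.avg_watch_minutes

candidates = [
    Video("Official fire investigation briefing", 4.0, 0.9, True),
    Video("LASER BEAM started the fire?!", 18.0, 0.7, False),
]

for v in sorted(candidates, key=score, reverse=True):
    print(f"{score(v):5.1f}  {v.title}")
# The sensational video wins on watch time alone, showing how a
# watch-time-driven ranker can favor gripping but unreliable content.
```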
Conspiracy theories and fake news are complicated issues, entangled with social news, people’s endless gossip, and the views and opinions of the users who accept them. Most people recognize this complexity, but because YouTube is the number one video provider in the world, we expect it to take primary responsibility and a clear stance on solving the problem. “A comprehensive, long-term plan” is easy to say, but hard to put together, and even a good plan is hard to carry out without limiting freedom of expression. Still, if there is no way to completely filter out problematic content with current technology, it would be better to discuss all the issues transparently and take the long view.
From the perspective of operators dealing with information, quick resolution is important, as is not losing users’ trust. However, if a quick fix does not lead to a fundamental solution, the problem can recur over and over again. At the same time, governments need measures that publicize the broader problems of conspiracy theories and fake news rather than simply filtering them, and that encourage members of society to understand the issues and participate in addressing them. There is no perfect solution to the problems caused by entering an information society. Given its nature, in which the technologies that create problems compete with those that prevent them, it is difficult to expect a completely problem-free environment. The YouTube controversy and the responses to it are just one example of this kind of information society: a pattern of problems that are always changing and always difficult to solve.
In the information society, information problems occur constantly, and technical measures alone are not enough to resolve them; believing they are is nothing but an old idea. The fourth industrial revolution is arriving on the basis of artificial intelligence, the Internet of Things, and big data, yet we may enter this new era without having solved the problem of human-made misinformation. No one can guarantee that an era in which machines built on ICT flood us with fake news will never come, yet we are turning a blind eye to the problems that era will bring. It is time to admit the problems we have in the information society and work them out together for a better future.