Social media has had vast positive impacts on people’s lives globally. For instance, people can share ideas, photos, videos, and experiences, and market their products. Similarly, social media has on numerous occasions been used to expose ills in society, such as the excessive and inappropriate use of force by the police and ethnic cleansing in remote parts of the world, leading to mass uproar and, in some cases, a turn of events for the better. Even so, social media has also been misused as a medium for promoting hate or even murder. Facebook, one of the giants in the social media arena, has had its share of these negative incidents, especially following the recent live streaming of the murder of an elderly man (Newcomb, 2017). As such, many people believe that social media platforms, including Facebook, have a role in helping prevent social media crime. In this respect, this paper critically analyzes whether Facebook has a role in rescuing crime victims and the ways in which it can actively help address this ever-growing concern.
While Facebook (FB) has a moral and ethical responsibility to prevent criminal activities or ideologies from being propagated on its platform, it may not have any legal duty to rescue crime victims. This is because, while some crimes, such as the murder of Robert Godwin Sr. by Steve Stephens (Newcomb, 2017), are aired live, the majority are not. As such, by the time FB detects a crime, it could be too late to act in a manner that would rescue the victim. Even so, the platform should have a legal and ethical responsibility to swiftly pull down photos or clips of such atrocities, out of respect for the victims and to protect their dignity. The platform is, therefore, morally and ethically obliged to have the right tools and resources to ensure that adverse events being broadcast on it are blocked instantly to prevent further harm.
There are myriad ways in which social media platforms can be more proactive and thorough in reviewing the types of content appearing on their sites. The first and simplest is their current approach of pulling down images or videos of atrocities whenever they are reported (Isaac & Mele, 2017). Notably, this strategy has already been tested and found workable; it only requires improvement to reduce the time taken to act. Secondly, the platforms must ensure that they have adequate personnel or moderators to review most of the material being broadcast on their sites even before the audience can report it. That way, offending material would be eliminated as early as possible, before the public can start sharing it on other social media platforms, which makes its mass propagation even harder to prevent (Aiken, 2017). Thirdly, the different social media platforms should devise a way of collaborating to shut down any criminal material being broadcast on their sites. Such collaboration would make it easy to swiftly eliminate the content before it reaches the majority of people, thus safeguarding the dignity of the victims and maintaining public calm.
Research indicates that there is currently no content-analysis technology capable of effectively filtering live-streamed clips in real time (Aiken, 2017). As such, this role has been left to human content moderators, who are expected to remove content that violates the various social media companies’ policies. Even so, there have been efforts to develop software that would help prevent images and videos of violent crimes, assaults, and other atrocities from being broadcast on these sites. In an interview with USA Today, for instance, FB CEO Mark Zuckerberg noted that the company was in the process of developing artificial intelligence software that would help manage the concern by detecting prohibited content (Guynn, 2017). The companies must, therefore, ensure these tools are developed and put into use as soon as possible. In the meantime, the companies must be thorough in ensuring that there are enough human filters in place to prevent anti-social content from being broadcast. This is especially important given research reported by Time indicating that many human moderators have in some cases been emotionally overwhelmed by their work, which could negatively affect their effectiveness in removing cyber atrocities (Aiken, 2017).
Drawing from a myriad of news sources, it is evident that FB has neither an Ethics Officer nor an oversight committee. Both are important for promoting ethics in social media companies, as they enable sound and ethical decisions on the diverse issues affecting them. For instance, while companies such as FB, Snapchat, and Twitter have been shown to use computers and algorithms in an attempt to keep censored material from reaching the public, these systems have nonetheless proven ineffective at making sound ethical decisions (Heider, 2017). It is therefore important to have chief ethicists or related officers to help executives make critical decisions regarding the various violations. Such officers could, for instance, create ethical guidelines for the organization as well as help provide company-wide training on ethical decision-making. Similarly, an oversight committee would help resolve the majority of the recurring issues at FB, including the transmission of violent content, by providing the necessary additional guidance (Carlo, Jaeger, & Grimes, 2012). Moreover, the committee would be responsible for helping the social media giant develop strategic visions regarding its role in society as far as managing anti-social content is concerned.
Apart from increasing the number of its content moderators and developing artificial intelligence software to help manage anti-social content on its sites, FB must adopt a new way of communicating its user policy to the public and be stricter with users who violate it. This includes providing clear terms and conditions of use, including the content that should (or should not) be posted or shared on its sites. Broadcasting or sharing prohibited content should consequently attract strong disciplinary action against the users involved, including the banning of their accounts. Such an approach could be key to bringing sanity among users and making them more responsible in the way they use the platform. Also, instead of ‘covering’ or hiding graphic content on its platform behind a user-discretion warning, FB should devise a way of deciding which content to fully eliminate from the site rather than merely cautioning users before they view it. This would ensure that violating media is not only invisible to the public but also non-sharable.
Although social media has brought enormous positive impacts to society, it has also been associated with numerous undesirable effects. For instance, acts of violence, assaults such as rape, and even murder have been posted as photos or streamed live as videos on these sites. FB, being one of the social media giants, has been a victim of this anti-social content. As a result, many people have argued that the company has a role to play in rescuing the victims of such atrocities. This paper has shown that it is difficult for FB to prevent crimes from occurring, as the events will usually have happened before the issue is detected. However, the platform has a legal and ethical responsibility to ensure the content is swiftly removed, thus safeguarding the dignity of the victims.