UOSM2008 · Topic 4

Ethics and Social Media

In my posts for Topic 2 and Topic 3, I considered an aspect of the Justine Sacco incident from 2013 and the risks posed by maintaining a personal presence on social media. In this post I intend to delve further into the ethical aspects of social media, both for individual professionals seeking to use it for business purposes and from the perspective of social media firms.

Firstly, I would like to consider another incident of public shaming which led to job losses. This particular case occurred at a PyCon event in 2013. Adria Richards sent a tweet accusing two male attendees of inappropriate behaviour; as a result, both men were fired, and she herself later lost her job over the incident (Ronson, 2015).

Adria

(Image screen-cap from Twitter, details blurred by me, 2013)

According to Antone Johnson, writing for Forbes, this tweet raises several legal problems: because the conference took place on private property, an individual's right to photograph another person and publicise the photo online without any consent is questionable (Johnson, 2013). While the two men involved may have been acting inappropriately, the response taken was also inappropriate, and it led to public pressure against the firms involved. While you would like to think these matters would be dealt with the same way in or out of the public eye, it is clear that public pressure leads to different results; one consequence here was threatening and hateful messages directed at both the firm and Adria. This raises another question: should Twitter be in some way responsible for events like these on its platform?

Ethics and social media

(Image made by me, twitter logo sourced from https://brand.twitter.com/en.html)

It could be argued that Twitter should have to do something to prevent these online ‘lynch mobs’ and the hateful or even threatening messages posted on its platform, and while Twitter does have a system for reporting abuse, it could be argued that this is too little, too late. Currently Twitter is protected by laws relating to common carriers established in the US Communications Decency Act of 1996, which amended the US Code to state: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (Cornell University Law School, 1998). In effect, this means that a provider of an online platform cannot be held responsible for the actions of others using that platform. I would argue that this is an essential aspect of the online world, as otherwise running a social platform would be unsustainable, given all the checking firms would have to do before allowing anything to be published at all.

(416 words)

References:

Cornell University Law School. 1998 “47 U.S. Code § 230 – Protection for private blocking and screening of offensive material” (Accessed 26 March 2017)

Johnson, A. 2013 “Was It Appropriate For Adria Richards To Tweet A Photo Of Two Men At PyCon And Accuse Them Of Being Sexist?” (Accessed 26 March 2017)

Ronson, J. 2015 “How one stupid tweet blew up Justine Sacco’s life” (Accessed 26 March 2017)

Featured image made by me, twitter logo sourced from https://brand.twitter.com/en.html


8 thoughts on “Ethics and Social Media”

  1. Hi Philip,

    Thanks for a great post on this example Tweet. I have, of course, heard what the service providers say about the issue, but I’ve not heard what the Communications Decency Act actually states before and your description of it was helpful.

    I have a couple of questions around these legal issues. Firstly, it seems that the legislation is quite outdated; 1996 means that the Act is as old as me! The use of online services has moved on significantly in that time, and I’d argue that we need a new legal framework that better encompasses the Web than we have now. Germany yesterday decided that it has had enough of Facebook and its stance on ‘fake news’: https://www.theregister.co.uk/2017/03/14/germany_proposes_50m_fake_news_fine/ Is this the right approach? Legally this is a tricky issue for all involved.

    Also, this Act covers just the United States, so what about the rest of the world? A significant number of these online platforms (including Twitter, Facebook etc.) are American companies. Should, therefore, the US Congress decide how these are run around the world? The US already has a significant say in the technical aspects of the Internet. What do you think should be the solution to these legal problems?

    Finally, a moral (or ethical) question on naming and shaming. As your post highlights, these events can lead to serious consequences (such as people losing their jobs). However, people say and do stupid stuff all the time (see David Moyes’ comments at Sunderland recently: http://www.bbc.co.uk/sport/football/39478693). Quite often, not many people hear it and it is forgotten. However, in both these cases, the comment was broadcast far and wide. Now, Moyes was heard because he is a figure in the public eye and is rightly held accountable for what he says. In the case of the conference, however, the men were heard from only because of Twitter and the fact that someone broadcast their comments. Do you think this is right? Should we all be more careful of what we say because of new technology such as Twitter? Is that a good thing? I’m not sure of the answer myself.

    Thanks again for a great post, I look forward to hearing anything you might have to say.

    Mark.


    1. Hi Mark,

      My apologies for the delay in this reply, I have been writing a dissertation for the past several weeks so have had to delay writing on this blog for a while.

      Thank you for your comment, it makes for very interesting reading. I’d like to start by considering the move to fine social media sites for fake news. I am concerned that this raises serious problems for the platforms themselves. As previously stated, the sheer volume of stories being posted means it would cost a fortune to vet every story for accuracy, particularly given the existence of satire sites such as The Onion: social media platforms would have to be able to draw a clear line between what is satirical and what is fake.

      However, you could also argue that a form of vetting is clearly already going on with these platforms, as they use algorithms to promote certain types of content and receive money to prioritise certain interactions over others in the form of push adverts. So surely it is possible to build an algorithm to detect stories whose pattern of reaction is likely to indicate they are fake, which can then be sent to a smaller team for vetting? Or alternatively, they could run an algorithm over stories which are getting lots of clicks and double-check those?
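      The "flag high-engagement stories for a smaller human team" idea above can be sketched very roughly as follows. This is purely illustrative: the field names, weights, and threshold are all invented for this example, not taken from any real platform.

      ```python
      # Naive engagement-based "flag for review" filter (illustrative only).

      def engagement_score(story):
          # Weight shares more heavily than clicks, since rapid resharing
          # is the reaction pattern described above.
          return story["clicks"] + 3 * story["shares"]

      def flag_for_review(stories, threshold=1000):
          # Return the stories whose engagement exceeds the threshold,
          # to be passed on to a (much smaller) human vetting team.
          return [s for s in stories if engagement_score(s) > threshold]

      stories = [
          {"title": "Quiet local news item", "clicks": 120, "shares": 4},
          {"title": "Viral unverified claim", "clicks": 900, "shares": 500},
      ]

      flagged = flag_for_review(stories)
      print([s["title"] for s in flagged])  # only the viral story is flagged
      ```

      Even this toy version shows the trade-off discussed below: the threshold decides how much lands on the human team, and anything below it is never checked at all.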

      However, they then run into the problem of what justifies taking a story down: how can they check the facts sufficiently and quickly enough to remove a piece when hundreds are coming in from various places each hour? It also adds the risk of removing a true story (or at least one which cannot be verified as false), which could lead to a massive backlash and a large portion of their audience moving elsewhere out of a feeling that their content is being filtered. How would Facebook cope with accusations of bias or propagandising?

      Consider it like this: Facebook and Twitter are in effect common carriers. Both are places for one individual or firm to post something for other individuals or firms to receive. This is similar to the postal service, in that an individual can send an item to another individual. The postal service is not held responsible for slander sent through a letter, or for any stories printed in the magazines it delivers; the cost to the postal service of vetting every single letter and parcel passing through its system would be enough to bankrupt it. So is it fair to allow certain common carriers the freedom of not worrying about the content passing through their hands, and not others? I would argue the answer to that question is no.

      The problem with legal rights on the internet is that it is international, and real control over it lies mainly with the country the company is hosted in, and with the countries that most heavily consume the information. Internet policy in the United States greatly affects policy elsewhere, as it can strengthen or weaken certain web businesses. Consider the net neutrality argument: if net neutrality had not been upheld in the United States, many smaller businesses might have collapsed, leading to no access to their services elsewhere in the world, all because their data was being slowed in the USA. Watch this CGP Grey video for an explanation of the net neutrality problem: https://www.youtube.com/watch?v=wtt2aSV8wdw

      As regards shaming, I personally believe that what you say to someone at a private event should remain private. Individuals should not be able to expose other individuals like this, as it is in effect condemning someone without trial. All parties involved in the tweet in my original post lost their jobs over the incident due to public pressure, with no chance to give their side of the story or to verify the facts. This sets a terrifying precedent, as it means a media storm could tear someone's life apart without them having done anything at all. While I am not saying that is what happened in this case, I am saying it is appalling to think that someone could, if they wanted to, lie on social media and have someone else's life torn apart.

      I believe this to be different in the case of public figures speaking to the press, as with David Moyes: while there is deeper context to that incident, Moyes was speaking in a public forum and knew his comments were going to be scrutinised. I would argue that the acceptability of shaming depends on context. If you are not a public figure and your comments were not made in public, you should not be scrutinised on social media for what you have said; those problems should be for courts and internal firm procedures to rule on. When public figures are involved, however, that is when it is fair to scrutinise online.

      I hope I have answered all of your questions sufficiently.

      Phil.


      1. Hi Phil,

        Many thanks for your detailed reply. You raise some interesting points.

        Firstly, on the subject of fake news on social networking sites, you’re correct, it’s a tricky area. The key issue, as far as I can tell, is the extent to which these sites count as ‘publishers’ (see http://www.telegraph.co.uk/news/2017/03/09/investigate-facebook-google-murky-fake-news-publishers-demand/ and https://www.ft.com/content/da427af2-2670-11e7-8691-d5f7e0cd0a16). Are these sites responsible for their content, or are they merely a ‘pipe’ for users’ content? This is not easy for us to decide. Rarely before in history have people had the opportunity to publish their thoughts and ideas so widely without having to go through something like a publisher. I agree that comprehensively screening these messages is not at all feasible.

        An automated approach is likely to make mistakes, which are unlikely to be acceptable (people see algorithms as perfect, and any inevitable mistake will cause trust in them to be lost), and humans are simply too slow. As you mention, some kind of machine learning is possible. However, this is certainly not infallible, as such systems are based on chance and statistics. Furthermore, imagine a genuine news story being classified as ‘fake news’; that harms the outlet’s reputation. Why should Facebook et al., private companies run for profit, be the arbiters of what is true and what is false news? This gives them immense power. Therefore, considering all this, a new approach is needed, with algorithms forming only part of the solution.

        I’d argue the key to this is user education. Offline, people are generally quite good at telling good news from bad (based on its source, the context etc.). Therefore, better education (and maybe technology tools can help with this) on how to tell fake and real news apart is the answer. Technology may be changing all the time, but the basic ideas of fact checking don’t. However, this takes time and, as you say, new information is being created online all the time.

        I agree that these services are different from the postal service. Letters sent via post are private and aren’t shared with everyone in the same way as posts online. However, I disagree that such costs would ‘bankrupt the service’. It would simply increase the costs of the firm, and therefore the costs to the consumer (i.e. stamps etc.). In essence, having mail vetted simply becomes part of the cost of sending mail. In the case of the post, this is not seen as necessary, so it doesn’t happen. Online, however, some are calling for content to be vetted, so is this cost worth it? Furthermore, as discussed above, this kind of vetting may not actually be feasible, no matter the resources. Also, Facebook et al. don’t raise revenue from the end user, but from advertising, so this may lead to them collecting more valuable data in order to pay for the screening service. Could another social network compete on the basis that it scans its posts and ensures they are true? Would consumers value this enough? These are all questions raised by this analogy.

        Legally, the Internet and the Web are incredibly complex areas. As they cross jurisdictional boundaries, this leads to all sorts of complications. As you say, lots of internet companies are American, meaning that US laws do have weight online. Also, a lot of the Internet infrastructure legally still has its basis in the USA (central to the ‘net neutrality’ debate you mentioned). Furthermore, the USA has many states, each with their own laws. Add to this the EU (and even the problems around Brexit) and it’s clear that there is probably never going to be a coherent legal framework for the Web recognised everywhere.

        Finally, I agree with your points around public/private offline and sharing online. This is carrying on offline precedents in an online context. However, the Web gives everyone a much louder voice, increasing the severity of incidents, such as those described in your post.

        I hope those points are helpful. I’d be happy to hear any further thoughts you may have.

        Cheers,
        Mark.


  2. Hi Phil,

    An interesting read as always. I agree that social media sites, which give online users a voice, should be somewhat responsible for the actions taken by their users. Although not directly responsible, social media platforms need to have systems in place to deal with online interactions like the cases outlined in your post, and beyond.

    What do you think could be done to better monitor the content which gets posted online? There has been some debate about this on Twitter, since the platform enables users to contact others more directly, and incidents can in some instances gain momentum through other users jumping on the ‘online bandwagon’.

    Additionally, do you think that this kind of ‘lynch mob’ mentality will ever pass? Or is it something that users, firms and groups will always have to be wary of?

    Ollie

    Word Count: 135


    1. Hi Ollie,

      Thank you for your comment. Having done some more reading on the topic I am not sure it is possible to hold social media sites accountable for the content of their users. I believe that, much like the postal service, they should be regarded as a common carrier of messages – however with the additional benefit of being able to take things down once they have been flagged up as inappropriate, which they should be held responsible for.

      I believe that the best way to tackle this problem is to find ways of making individuals aware of the consequences of what they do online and make it so there are legal ramifications for certain types of material surfacing online. A combination of better educating people about the dangers of their actions on the internet and of a cost to irresponsible usage could greatly reduce the “lynch mob” mentality and increase the responsibility of use of the internet – however this raises other problems over the issue of anonymity and of freedom of speech on the internet.

      The best way to monitor content currently appears to be self-taught algorithms along with people flagging up problems which then retrospectively are checked by those who work on the platform. This however is not perfect at filtering out problems and comes at a great cost.

      Overall I think lynch mobs will pass, but the framework of internet usage may need to change in order for real change to occur.

      Phil.

