The Social Dilemma: Some proposals

As social media companies have grown throughout the world, there have been growing concerns that they are increasing social conflict by individually "feeding" information to "users" that moves their interest more and more toward extreme content. The most egregious examples are the murders and suicides streamed live to large audiences.

The social media companies initially claimed that they were "neutral platforms" that hosted information but were not responsible for it. Increasingly, they have accepted responsibility for "curating" extreme content, employing large numbers of people and building AI to act as censors. This response has failed to prevent abuses in many instances and has also been criticised for the growing political bias included in the censors' brief.

The one consensus that is emerging globally is the need for regulation. The social media companies are attempting to shape this regulation around the idea of government assistance with (and responsibility for) censorship. They want to prevent the regulation from "killing the goose that laid the golden egg".

Censorship would be both dangerous and ineffective because the business model of the companies is inherently exploitative and corrupting.  Helping the model to survive and grow unchanged will inevitably lead to more dangerous social and cultural outcomes.

There are a number of possibilities that could deflect the business model without breaking it completely:
  • Dilemma: “News” is no longer a shared experience, as social media feeds display different news to different people based on their profiles. The goal of the AI is to maximise engagement, and it has demonstrated that the most effective strategy for maximising engagement is to continually display more and more extreme “news”. This leads to the automated “viral” spread of extreme “news” regardless of credibility. Social media companies try to censor what their AI promotes only when they are alerted to it, and the AI and human censorship systems they have established are both failing.
    • Response: Regulate social media companies to prohibit the use of profiles in the targeted dissemination of any “unpaid” information.
    • Rationale: No insertion of information into feeds from sources other than those specifically requested by the user. All users subscribing to a feed are shown the same information from that feed.
    • This applies to all content feeds, not just traditional “news”. Social media companies are prohibited from inserting content from any other user into your feed unless you have specifically requested that feed.
    • If you have requested the feed, they are prohibited from filtering it, i.e. all users must get the same content from that feed. (A minimal sketch of this rule in code follows this list.)

  • Dilemma: The effect of paid “disinformation” from unidentified sources is increasing social conflict.
    • Response: Require all advertisers (people who pay to display content) to be registered, with the registration displayed on all ads. (Advertisements remain unregulated but are clearly sourced.)
    • Rationale: The source of any attempt at starting a viral “disinformation” meme is identifiable.

  • Dilemma: The ability of a public post to “go viral” has caused repeated and widespread disinformation, and has become a goal for “bad actors”.
    • Response: Make social media companies legally responsible for all public posts (i.e. content other than paid advertising). Legal responsibility for private posts (i.e. posts visible only among groups or “friends”) remains with the author.
    • Rationale: Social media companies have acted as publishers for more than a decade now in terms of “curating” content. They need to be responsible for all information that is published to the public. People joining private groups are legally treated as engaging in a private conversation, and are responsible for what they say.

  • Dilemma: Media company policies are filtering and biasing social and political dialogue.
    • Response: Require all “platforms” to be non-discriminatory in accepting all paid content.
    • Rationale: The distinction between political and non-political content is impossible to tease out. Even after prohibiting the dissemination of content that people have not requested, there is still the reality that companies will try to bias the platform against those whose views their customers do not like. This is an unacceptable restraint on free speech.
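
To make the first proposal concrete, here is a minimal sketch (in Python) of what "no targeted insertion and no filtering" could mean in practice. The FeedService class and its methods are illustrative assumptions only, not any platform's actual system: the only input to a user's timeline is the set of feeds that user has explicitly requested, so two subscribers to the same feed always see identical content.

    # Illustrative sketch only: FeedService and its methods are hypothetical,
    # not any platform's real API.
    class FeedService:
        def __init__(self):
            self.feeds = {}           # feed name -> list of posts, shared by all subscribers
            self.subscriptions = {}   # user id -> feed names the user explicitly requested

        def publish(self, feed_name, post):
            self.feeds.setdefault(feed_name, []).append(post)

        def subscribe(self, user_id, feed_name):
            # Content may only reach a user through an explicit request for the feed.
            self.subscriptions.setdefault(user_id, set()).add(feed_name)

        def timeline(self, user_id):
            # No profile-based ranking, filtering, or insertion of unrequested content:
            # every subscriber to a feed receives the identical posts, in the same order.
            items = []
            for feed_name in sorted(self.subscriptions.get(user_id, set())):
                items.extend(self.feeds.get(feed_name, []))
            return items

    service = FeedService()
    service.publish("city-news", "Council meeting tonight")
    service.subscribe("alice", "city-news")
    service.subscribe("bob", "city-news")
    assert service.timeline("alice") == service.timeline("bob")   # same feed, same content

The point of the sketch is simply that timeline() takes no profile, engagement signal, or advertiser input, so the same subscription always yields the same content for everyone.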
