Where’s The Algorithm To Save Lives?

No more violent content on social media

Where is the algorithm that could save lives?

Imagine you are in a restaurant having dinner when suddenly some lunatic stands up and starts screaming and ranting about shooting and killing people. The lunatic then sits down, finishes his meal, pays the bill, and leaves. Everyone is relieved that nothing happened, and life goes on.

A few months later, the same lunatic is on the news because he was involved in a mass shooting. Everyone in the restaurant feels guilty. The owner is questioned by police and asked why no one called them that day. The media hounds the owner and staff, blaming them in part because they could have prevented it. Soon, the restaurant goes out of business.

Why do we give social media companies a free pass for doing nothing?

It’s time we also talk about the responsibility of the social media companies themselves. From terrorists to school shooters, we seem to hear about the warning signs on their social media accounts only after it’s too late. At the very least, these companies have a moral responsibility to do everything they can to help.

We all have a responsibility to monitor our kids and the people we are connected to on social media, but we should also be asking Facebook/Instagram, Twitter, Snapchat, Google/YouTube, and the rest to do their part and use their technology to help save lives.

Social media companies employ some of the smartest people in the world, who are building AI and other predictive technologies that track what we post, which sites we visit, and who we communicate with, and that can identify nearly everything and everyone in a photograph or video. Today, most of that capability is used to collect data in order to serve us very specific advertising.

If they wanted to make ‘saving the world’ a priority, they could make a difference. The same technology could identify and flag accounts showing violent patterns, shut them down at the very least, and, if the account holder is under 18, alert the adults connected to them.
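To make that concrete, here is a minimal, hypothetical sketch of what such flagging logic might look like. Everything in it is invented for illustration: the keyword list, the threshold, and the Account structure are placeholders, and a real platform would rely on trained machine-learning classifiers rather than hand-written keyword matching.

```python
from dataclasses import dataclass, field

# Hypothetical threat phrases; a real system would use a trained
# content classifier, not a hand-written keyword list.
VIOLENT_PATTERNS = ["shoot up", "kill them all", "make them pay in blood"]
FLAG_THRESHOLD = 2  # flag after this many violent posts (illustrative)


@dataclass
class Account:
    user_id: str
    age: int
    posts: list[str] = field(default_factory=list)
    connected_adults: list[str] = field(default_factory=list)


def is_violent(post: str) -> bool:
    """Crude pattern match standing in for a real content classifier."""
    text = post.lower()
    return any(pattern in text for pattern in VIOLENT_PATTERNS)


def review_account(account: Account) -> list[str]:
    """Return the actions a platform might take for this account."""
    violent_posts = [p for p in account.posts if is_violent(p)]
    actions = []
    if len(violent_posts) >= FLAG_THRESHOLD:
        actions.append("flag_for_human_review")
        actions.append("suspend_account")
        if account.age < 18:
            # Alert the adults connected to the minor's account.
            for adult in account.connected_adults:
                actions.append(f"notify:{adult}")
    return actions


if __name__ == "__main__":
    suspect = Account(
        user_id="user123",
        age=16,
        posts=["nice day today",
               "i want to shoot up the school",
               "gonna kill them all"],
        connected_adults=["parent456"],
    )
    print(review_account(suspect))
    # ['flag_for_human_review', 'suspend_account', 'notify:parent456']
```

Even a toy version like this shows the shape of the problem: detection is the easy part, and the real engineering challenge is keeping false positives low enough that human reviewers and notified adults are not buried in noise.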

This is what they tell us in the fine print of their community guidelines and rules:

Instagram’s ‘Community Guidelines’ include: “Instagram is not a place to support or praise terrorism, organized crime, or hate groups.”

Facebook’s ‘Community Standards’ include: “We remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety.”

Snapchat’s ‘Community Guidelines’ include: “Never threaten to harm a person, group of people, or property. Don’t post Snaps of gratuitous violence.”

Twitter’s ‘Rules’ include: “Twitter allows some forms of graphic violence and/or adult content in Tweets marked as containing sensitive media. However, you may not use such content in your profile or header images. Additionally, Twitter may sometimes require you to remove excessively graphic violence out of respect for the deceased and their families if we receive a request from their family or an authorized representative.”

YouTube’s ‘Community Guidelines’ include: “It’s not okay to post violent or gory content that’s primarily intended to be shocking, sensational, or gratuitous. If posting graphic content in a news or documentary context, please be mindful to provide enough information to help people understand what’s going on in the video. Don’t encourage others to commit specific acts of violence.”

It is disturbing how soft some of these platforms are about policing the distribution of ultra-violent material. Although their ‘community guidelines’ sound good on paper, I can search for two minutes and still find disturbing content on every one of these platforms. It’s time for them to do more.

Social media companies are behemoths that have created great tools for the world to communicate, but they also have a responsibility to play a bigger part in the solution. We should demand that they use their brainpower and technology to make saving lives a priority.
