“You’re not allowed to have a fake account on Facebook.”
That’s what Mark Zuckerberg told Congress in October 2018, after the company’s role as a purveyor of political lies, propaganda and misinformation came to light (see Cambridge Analytica).
It wasn’t true then, and it’s not true now.
Facebook and Twitter* give cesspools a bad name.
- Using internal Facebook documents from 2021, “researchers estimated that the company was removing less than 5 percent of all hate speech on Facebook.” [September 2023, Carnegie Endowment for International Peace]
- “The daily average overall usage of hate keywords on Twitter nearly doubled after Musk bought Twitter.” [April 2023, ScienceX]
- “The BBC analysed over 1,100 previously banned Twitter accounts that were reinstated under Mr Musk. A third appeared to violate Twitter’s own guidelines. Some of the most extreme depicted rape and drawings showing child sexual abuse.” [April 2023, BBC]
- Twitter failed to respond to 99 of 100 reports of “tweets containing racist, homophobic, neo-Nazi, antisemitic or conspiracy content” that originated with paid accounts. [May 2023, Center for Countering Digital Hate]
- “When ads calling for genocide in Ethiopia repeatedly get through Facebook’s net — even after the issue is flagged with Facebook — there’s only one possible conclusion: there’s nobody home,” said Rosa Curling, director of Foxglove, a London-based legal nonprofit that partnered with Global Witness in its investigation. [June 2022, PBS News Hour]
- “Sheryl Sandberg silenced and censored a Kurdish militia group that ‘the Turkish government had targeted’ in order to safeguard their revenue from Turkey.” [June 2021, AI Ethics]
- In 2015, legitimate media accounted for six of the 10 Myanmar websites with the most engagement on Facebook. By 2018, they accounted for none. “A United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a ‘determining role’ in the atrocities.” [March 2021, MIT]
Let’s “think different” about account verification.
I’ve been on both sides of the “people should be verified” debate.
I’ve come down squarely on the side of verification, with pseudonyms allowed.
Rather than charge people to be verified, charge them to be unverified and brand those accounts as such.***
Make sure it’s really a person setting up the account, though. And limit daily posts and comments.
Let verified users pay a reasonable fee (under $20/year, payable monthly) to see zero comments from unverified accounts in their timeline. Heavy users will be happy.
Both platforms rely on advertising to pay their bills. With verification, they can raise ad rates because advertisers will know people are real.
A profile exception: any legacy account (deceased) should be allowed to continue, as is.
Treat pages the same way as profiles: pay to remain anonymous, accept restricted privileges, and carry a clear label identifying the account as unverified.
Yes, this will take a lot of time. Phase it in. Facebook should (eventually) save on the need for human moderators. Twitter? Shrug.
In the interim, use AI tools to make it harder to set up a new account. Be more aggressive in deleting the existing ones.
If I can confirm an account is fake in less than five minutes, how long should it take an algorithm?
There has been little traction against trolls in a “man’s world” where “size” is equated with “worth.” Bigger = better (and all of that crap).
If women ran these companies, there would be action against trolling. These two (relatively tame) examples came from a post I made mid-afternoon 07 Sept 2023.
I’ve not accepted one of these friend requests (perhaps I should), so I have no idea of the goal. Gullible older women who will part with cash? 🤷♀️
Are those real people? Sure!
Do the real people operate these accounts? Doubtful!
1. Profiles do not have customized names.
   - “Richard” profile link: facebook.com/profile.php?id=100093537326428
   - “Eric” profile link: facebook.com/profile.php?id=100082563303254
2. Personal details are incorrect.****
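The first tell is mechanical enough to automate. Here is a minimal sketch (my own illustration, not any platform’s actual tooling) that flags profiles still using Facebook’s numeric fallback URL instead of a custom username; the URL pattern is taken from the links above, and the function name is hypothetical:

```python
import re

# Accounts that never set a custom username keep Facebook's fallback URL,
# facebook.com/profile.php?id=<numeric id>; a customized profile looks like
# facebook.com/some.chosen.name instead.
FALLBACK_PATTERN = re.compile(r"facebook\.com/profile\.php\?id=\d+$")

def lacks_custom_name(profile_url: str) -> bool:
    """Heuristic: True if the profile still uses the numeric fallback URL."""
    return bool(FALLBACK_PATTERN.search(profile_url.strip()))

# Both profiles cited above trip the heuristic:
print(lacks_custom_name("facebook.com/profile.php?id=100093537326428"))  # True
print(lacks_custom_name("facebook.com/profile.php?id=100082563303254"))  # True
print(lacks_custom_name("facebook.com/zuck"))                            # False
```

On its own this proves nothing (plenty of real people never customize their URL), but combined with the other signals it is exactly the kind of cheap check an algorithm could run in milliseconds.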
I can hear you thinking … what’s the big deal? No harm, no foul.
Every public deception (drip, drip, drip) affects our mental state, which in turn affects our willingness to trust one another and increases the likelihood of fear of “the other.”
When fake account holders deliberately spread or amplify propaganda, lies and disinformation, they set in motion a chain of events that can change someone’s brain.
One of the biggest barriers to correcting misinformation is the fact that hearing the truth doesn’t delete a falsehood from our memory.
Instead, the falsehood and its correction coexist and compete to be remembered. Brain imaging studies conducted by Lewandowsky and his colleagues found evidence that our brains store both the original piece of misinformation as well as its correction.
Clearly, account verification is a necessary but insufficient condition to stem our tsunami of misinformation: see Donald J. Trump.
One step at a time.
~~~
Featured image: ID 222453175 | © Mikhail Rudenko | Dreamstime.com
* I will not call it “X.”
** Admittedly, I have a US-centric mindset, and we need better systems for verifying that a person exists and has only one account. You can pay for a second one.
*** Musk has made the “Blue check mark” a laughing stock.
**** I was able to report the Richard D. Clarke account because there is a Lt. Gen. Richard Clarke account. I doubt it’s official: no followers and no custom account name (id=61550663153057). I was not able to report the Eric T. Hill account as fake.
UPDATE: Another one tonight.