As somebody who was poking around unserious Twitter accounts when Twitter rolled out Community Notes, I've gained some insight into the practical value of the program. The result is a mixed bag: some things work well, some started off well but degenerated into partisan fights, and some never really worked from the get-go. To be fair, that last category seems pretty minimal.
The general theory builds on YouTube's success adding context to conspiracy videos: a quick blurb, usually from a well-vetted Wikipedia article, gives a newcomer to social media a chance to learn what is actually going on with "chemtrails" or "Freemasonry" or "COVID vaccines". In the same way, Twitter can and does get Notes put up, hopefully before something ridiculous gets traction. Take, for example, the claim that a freshwater hydra made it into a person's eyeball by way of a COVID vaccine. A quick explanation of why this is impossible, given the size of a hydra relative to a needle, is a good start. The problem is that the writers of the Notes are fallible, and the fact checkers need to fact check themselves constantly. Some people, intentionally or not, don't write good context notes. Continuing with the hydra example, the proposed Note may be, "This is a digitally altered image of a hydra in an eyeball." Well, you had better add a good source to that Note. Even better, if you feel the need to comment on the fake image, combine it with the scientific explanation.
Herein lies one of the problems with conspiracy misinformation content. Explaining the sizes of microorganisms in nanometers reads as gobbledygook next to simply saying it's a fake image. The other side combats the correct information by insisting it's a real image, and the gobbledygook gets ignored.
The Community Notes program is run outside the paid staff of Twitter. The claim is that a Note won't go up officially unless it is rated as good by enough people. They've even thrown in a specific requirement that it be rated by people from "different points of view". How they determine what your "point of view" is remains a mystery to me.
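Twitter's real ranking code is an open-sourced matrix-factorization model, but the basic intuition behind the "different points of view" requirement can be sketched as a toy rule: a note only goes public if raters from every assumed viewpoint cluster rate it helpful. This is a hypothetical sketch, not Twitter's actual algorithm, and the cluster labels and thresholds here are made up:

```python
# Toy sketch (NOT Twitter's real code): show a note only if it gets
# enough "helpful" ratings from every assumed viewpoint cluster.
from collections import defaultdict

def note_status(ratings, rater_cluster, min_per_cluster=2):
    """ratings: list of (rater_id, is_helpful) pairs.
    rater_cluster: hypothetical mapping of rater_id -> cluster label."""
    helpful_by_cluster = defaultdict(int)
    for rater, is_helpful in ratings:
        if is_helpful:
            helpful_by_cluster[rater_cluster[rater]] += 1
    clusters = set(rater_cluster.values())
    # The note is shown only when every cluster supplies enough helpful votes.
    if all(helpful_by_cluster[c] >= min_per_cluster for c in clusters):
        return "show"
    return "needs more ratings"
```

The point of a bridging rule like this is that a note loved by only one side never clears the bar, which is exactly what makes the downvote-brigading described below effective at suppressing Notes.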
Political grifting, long a sizable presence on Twitter, capitalizes on pseudoscientific conspiracy content. Even with the upgrade of having completely bogus scientific content accurately Noted, the same downsides that existed before the program started persist.
1) The grifters have infiltrated the program. One of their main tactics is not just to downvote accurate Notes but also to add a Note saying no context is needed. Sometimes these are fair, pointing out that the debate belongs in the Twitter thread rather than in the Notes. Far too often, though, they degenerate into a polarized left-versus-right game of suppressing the Note. Grifters who value engagement over substance have the edge when the Twitter thread is the only place to point out what's wrong. Closed-minded people remain oblivious to the "different point of view" because they are simply on the wrong side of the tracks.
2) The lag time. Again, edge to the grifters. The nearest analogy I can think of is something YouTube does a better job with now than it did five years ago. It used to be that merely watching a video about a topic (e.g., a mass shooting) would generate lists of related videos that grifters had gamed into enough popularity that you would get a "what the fuck did I just watch?" recommendation. Now you're more likely to get a recommendation from a legitimate news source, plus a context blurb if your conspiracy video mentions the Illuminati. A well-sourced, well-written Note has to wait until the "different point of view" has had its go at muddying the waters. Getting the Note up days later doesn't help much when the Tweet has already received the majority of the views it will ever get.
On a personal note, before I forget: nothing drives me nuts about the program more than a Note using a source behind a paywall. Some disinformation researchers put up gift articles from reputable outlets precisely because people don't want to pay for their news. People are so averse to paying for news that they do the last thing they should be doing: using social media as their news source. I count a Note citing a paywalled Wall Street Journal article, whatever the accuracy of its economic reporting, as an unreliable source.
That's a Tweet from a deeply unserious account posting about chemtrails, gematria, and whatever BS topic is the soup of the day. Using the logical fallacy of Just Asking Questions (JAQing off), it is really promoting the conspiracy theory that pedo vampires are drinking adrenochrome. You know, because that makes all kinds of sense. The Note is accurate, but its context misses the key point: this is a deeply unserious account doing nothing but its job of being wrong about everything for engagement. A Note would never see the light of day if it pointed that out. Congratulations to the Note writer for at least flagging it as conspiracy-related. Take what we can get, I suppose. No congratulations for spelling adrenochrome wrong.
If only Twitter, and all social media, had some computer programmers on staff. Somebody who could write some kind of algorithm, one that could recognize deeply unserious accounts, since they refuse to hire people who care about what's going on in the world. Then the context I'd like to see would get added:
Although this account may not be a spambot, it is posting baseless conspiracy content. We like free speech, but not all free speech is created equal. Grifters need a job, too, so we want to make things as easy for them as possible.
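For what it's worth, the kind of heuristic being wished for here could be sketched in an entirely hypothetical form as a keyword-frequency flagger. The term list, threshold, and function name are all made up for illustration; a real system would need far more than keyword matching:

```python
# Hypothetical sketch: flag an account as "deeply unserious" if most of
# its recent posts lean on known conspiracy keywords. Term list and
# threshold are invented for illustration only.
CONSPIRACY_TERMS = {"chemtrails", "gematria", "adrenochrome", "illuminati"}

def flag_unserious(recent_posts, threshold=0.5):
    """Return True if more than `threshold` of recent posts mention a term."""
    if not recent_posts:
        return False
    hits = sum(
        any(term in post.lower() for term in CONSPIRACY_TERMS)
        for post in recent_posts
    )
    return hits / len(recent_posts) > threshold
```

Even a crude filter like this would let the platform attach the blanket context blurb above automatically, instead of waiting days for a hand-written Note to survive the rating gauntlet.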