I expect a big surge in cognitive-dissonance-related depression. The most basic and obviously false claims that have spread and gained traction are not getting a free pass from generative AI overviews. All the really weird and easily debunked stuff, the vast majority of it generated during a time when pushing extremist right-wing views served political and economic purposes, gets the same response: NO.
I’m not going to include screenshots for all of these. At this point in the post I’m only planning on one more; doing them all would take too much time and isn’t necessary. From a scientific method point of view, it doesn’t matter so much what *I* think about it. Anybody should be able to type in the same searches I did and get the same results. The AI is scraping the internet for its replies and should be giving consistent answers regardless of who is doing the query.
The cognitive dissonance (processing contradictory information) has been bypassed for a long time by individuals scraping the internet on their own, encouraged by those higher up on the pyramid to dismiss factual information and accept that a disinformation perspective artificially boosted by dirty tricks is perfectly acceptable. Let your own confirmation bias dismiss being wrong about the important stuff. You’ll be fine. Wikipedia is wrong. All the scientists are wrong. Everyone smarter than you is wrong. Everyone who built the AI LLMs, they’re wrong, too. Now the first line of defense for your bad information, a Google search so you can hunt down somebody putting out support for your bad idea, is met with an AI overview front and center boldly stating, “Sorry bucko, you’re wrong.”
One of my early searches was disappointing: “Will an AI overview be different for conspiracy theorists and scientists?” That one was met with, “No AI overview is available on this topic.”
“Is Wikipedia reliable?” There’s a lengthy response acknowledging its strengths and limitations. In a nutshell, Wikipedia is a great starting point, but you need critical thinking skills to get through deeper topics. Wikipedia is not going to give you a flat-out admission that a false claim is true. It’s not going to give a flat-out rejection of actual facts. It’s not going to take sides on genuinely debatable subjects.
Now on to the fun stuff.
“How many people are killed by chemtrails each year?”
Summary of the AI overview: chemtrails aren’t a real thing, so none.
“Was 9/11/2001 an inside job?”
Summary of the AI overview: no, it was al-Qaeda, and the conspiracy theories related to it have been debunked.
“<various searches about Sandy Hook>”
No AI overview.
“How many elementary schools have litter boxes?”
None. Ever.
“How many pizza parlors are fronts for child trafficking?”
Not a real thing, and it even mentions Pizzagate specifically. See the screenshot at the top.
“Are Haitian immigrants eating cats and dogs in Springfield?”
No AI overview.
“Who won the 2020 U.S. Presidential election?”
Joe Biden
My favorite and worthy of another screenshot.
No long-winded speech. Just one sentence, which ends up being in a font way bigger than most AI overviews.
“Do vaccines cause autism?”
Numerous scientific studies have debunked this.
“Were the moon landings fake?”
You get a lengthy recap that includes some of the specific claims, too. It’s kind of science-y sounding, so a good portion of conspiracists will be turned off and unable to understand it. Do you really expect drunk Uncle Fred to understand laser-ranging retroreflectors?
“Is climate change by humans real?”
Oh hell yes.
I’ll take this ending to remind you that the biggest voices, the overly aggressive loudmouths who talk like everything they say is in all caps, not only promote some of these ideas, but also others that don’t generate an AI overview yet. They are willing to deliberately be wrong about everything to gain traction and sort out the most gullible. Some of these queries were generated from an AI overview of, “What are the easiest conspiracy theories to debunk?” The only time these easy, low-level, gematria-style items get addressed is when someone is promoting a specific con: don’t get involved in that gematria scam, because I want you paying into my flat-earth scam. These are rabbit hole entry points. Bait from current news that hasn’t been so rigorously addressed yet because it’s fresh, but it’s equally stupid. Things that currently have no AI overview because they haven’t caught on much. (For example, Haitian immigrants eating pets.)