Friday, September 29, 2023

A Tale Of Two AIs

Story number one comes from the world of computer chess.

https://m.youtube.com/watch?v=VUC1K3UA-3Y

A recap for those who haven't got the time, or who find Levy's overabundance of personality difficult to deal with for the length of a long video:

Stockfish, the best chess-playing computer program ever, lost as black for the first time in two years.  The winner was Leela Chess Zero, the same program that beat Stockfish two years ago.  They are arguably the two best chess engines (technobabble for chess computers) ever.

And to recap the advancement of chess engines in far less space than the topic deserves -

It is pointless for humans to play chess computers now, at least if they expect to win.  The engines simply do not make mistakes that a human brain can exploit.  The only way they lose is when matched against other chess engines, and that is rare for these top two.  As with human grandmaster classical time control chess, there are a lot of draws.  These are certainly boring for the public that doesn't enjoy chess, and even regarded as boring by plenty of people who do play chess.  They want to see Levy's overabundance of personality recap a stunning queen sacrifice.

Part of the chess engine development process is the age-old question: starting from move 1 and with best play from both sides, is the game a win for white or a forced draw?  Chess is mathematically complex enough that humans have zero chance of breaking out equations to figure that out.  But where we are at is near 100% certainty that it is not a forced win for black.  There is an advantage to moving first.  This game between two titans of chess enginuity, and the previous loss that followed essentially the same script (Stockfish allowing a piece to get trapped and out of play), bear out that perfection as white is still not here.  These engines are constantly being upgraded, and the use of a neural net to mimic human thinking is advancing to the point of being applied to non-chess topics, medical diagnosis for example.  The implication of solving medical problems more accurately is typical of the AI debate.  Yes, it's cool, and some people will be spared suffering.  Yes, it's not cool, because some capitalist will use the tech to make money, since that's cheaper than hiring a human doctor.

The next story is something that sounds like a potential topic for a conspiracy grifter.  Fairy circles.

https://amp.cnn.com/cnn/2023/09/28/world/fairy-circles-new-sites-scn/index.html

These mysterious circles have been known about for some time, and being mysterious is perfect for a gematria story about how the Jesuits are using them for…something.  Maybe hockey face-off circles, who knows.  But that's a digression from the main point.  Using a neural net isn't simply a matter of throwing data at the net, pressing a button, and out pops the result.  Chess neural nets are trained by having them play against themselves for a butt ton of games, analyzing what works and what doesn't in order to fine-tune and eliminate what doesn't work.  And it's important to emphasize that playing against themselves means literally playing against themselves.  It's Leela vs. Leela, with a starting point of the basic rules for the game, which aren't that difficult to program in.
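For the curious, here is roughly what that looks like as code.  This is a toy sketch in Python, not Leela's actual training: the tiny game, the stand-in network, and the scoring are all made up just to show the shape of the loop.

```python
# A toy self-play loop: the program only ever learns from games against itself.
# "TinyGame" and "Net" are hypothetical stand-ins for real chess rules and
# Leela's neural network, which are enormously more involved.
import random

class TinyGame:
    """Stand-in for the rules of the game: legal moves, making a move, the result."""
    def __init__(self):
        self.moves_played = 0

    def legal_moves(self):
        return ["a", "b", "c"]           # placeholder move list

    def play(self, move):
        self.moves_played += 1

    def result(self):
        if self.moves_played < 10:
            return None                  # game not over yet
        return random.choice([1, -1, 0]) # +1 side one wins, -1 side two wins, 0 draw

class Net:
    """Stand-in for the neural net: it scores moves, then learns from outcomes."""
    def __init__(self):
        self.preferences = {}

    def pick_move(self, game):
        moves = game.legal_moves()
        weights = [self.preferences.get(m, 1.0) for m in moves]
        return random.choices(moves, weights=weights)[0]

    def learn(self, history, outcome):
        # Reinforce moves that led to a win, discourage moves that led to a loss.
        for side, move in history:
            signed = outcome if side == 0 else -outcome
            self.preferences[move] = max(0.1, self.preferences.get(move, 1.0) + 0.1 * signed)

net = Net()
for game_number in range(1000):          # "a butt ton of games," scaled way down
    game, history, side = TinyGame(), [], 0
    while (outcome := game.result()) is None:
        move = net.pick_move(game)       # the same net plays both sides
        history.append((side, move))
        game.play(move)
        side = 1 - side
    net.learn(history, outcome)          # keep what worked, drop what didn't
```

The only things the humans supply are the rules and the loop; everything the net "knows" about good and bad moves comes from those games against itself.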

Like gematria, there are no rules in place for what exactly is a fairy circle and what is something that just looks like one but isn't.  It could be a dark elf circle that's completely evil.  It could be a fairy oblate spheroid.  It could be fairy Pluto, which should have been demoted from being called a planet a long time ago.  The humans involved need to force-feed the initial data into the net to jump-start the process.  Here, net, these are actual fairy circles.  Here you go, net, these look like fairy circles but they aren't really fairy circles.  Then: here you go, net, here are satellite images.  Tell me what you think.
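That workflow also has a rough shape in code.  Again, a toy sketch: the features, the handful of labeled examples, and the nearest-neighbour stand-in for the neural net are all invented for illustration, not taken from the actual study.

```python
# Toy version of the fairy circle setup: humans supply labeled examples first,
# then the model is asked to judge new satellite image patches. The numeric
# "features" and the nearest-neighbour classifier are stand-ins for real
# imagery and a real neural net.
import math

# Step 1: humans force-feed the labeled starting data.
# Each example is (measured features of a patch of ground, human-supplied label).
labeled_examples = [
    ((0.90, 0.80), "fairy circle"),          # circular, bare center, right vegetation
    ((0.85, 0.75), "fairy circle"),
    ((0.20, 0.90), "not a fairy circle"),    # looks circular, wrong everything else
    ((0.10, 0.10), "not a fairy circle"),
]

def classify(features):
    """Label a new patch by its closest human-labeled example."""
    closest = min(labeled_examples, key=lambda ex: math.dist(ex[0], features))
    return closest[1]

# Step 2: "here you go, net, here are satellite images, tell me what you think."
new_patches = [(0.88, 0.79), (0.15, 0.20)]
for patch in new_patches:
    print(patch, "->", classify(patch))

# Step 3, per the news story: humans review the output and correct mistakes,
# i.e. fold the corrected examples back into labeled_examples and go again.
```

The real project uses actual satellite imagery and a proper neural net, but the order of operations is the point: label, train, predict, then have humans correct.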

I suppose the goal is something noble, like analyzing vegetation patterns for food production, though that didn't seem clear.  Or it could be a repeat of using neural net projects to advance the study of neural nets, and they had to pick something to get the ball rolling.  And I suppose it could be, "I can get funding for this?  I can eat for another year?  Cooooooooolll!"

There is no clearly defined win or loss in the fairy circle analysis.  There is in the chess: the game for each side will end in a win, a loss, or a draw.  And note that the fairy circle news story shows that humans intervened to correct the AI mistakes that were made.

Every new tech advance makes it easier for the internet conspiracy grifter.  Here's a photo.  Figured out how to edit that.  Here's digital video editing.  Figured out how to make convincing deep fakes.  Here's ChatGPT and neural net AI.  How about the day that comes along where we have AI that's trained on:

Here is a bunch of stupid shit people believe in the short term.  Show me the top hits for what gets the most engagement today so I can grift off of it.  AIlex Jones.  There is quite a lot of misinformation stockpiled on the Internet.  And there's quite a lot of history of social media not using human intervention to correct mistakes.  "Thanks for reporting this to YouTube" does not count as actual human intervention.
