Wednesday, October 23, 2024

Bot Not For Me

I'm still doing the daily Wordle, partly animated by my hatred of The Times's Wordle Bot and its critique of my performance, even when it praises me:
[Wordle Bot screenshot]

Who is it talking to? I didn't have this kind of strategic vision at this point. I was just looking to see whether the answer contained any more of the commoner letters, and I hit two of them. That was a good Turn 2 result!

I had no idea at this point that there were only two remaining words, of course, let alone what words they were. The Bot knows, because it takes only seconds to run through all the mathematical possibilities. (If I had thought of "beaut" I wouldn't have liked it; I don't think Wordle's answer list is the same as the Bot's word list, and that's the kind of word it would recognize but not deploy. Besides, I have a feeling they've already used it, just a few weeks ago. If I'd thought of "gamut", on the other hand, I certainly would have tried it.)
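What "running through all the mathematical possibilities" amounts to is brute-force filtering: compute the feedback each remaining word would have produced for the guess, and keep only the words that match the colors actually shown. Here is a minimal Python sketch of the idea; the word list and the guess are stand-ins of my own invention, not The Times's actual data or code:

    from collections import Counter

    def feedback(guess, answer):
        """Wordle-style feedback string: G = green, Y = yellow, - = gray."""
        result = ["-"] * len(guess)
        unmatched = Counter()
        # First pass: mark greens and tally the answer's unmatched letters.
        for i, (g, a) in enumerate(zip(guess, answer)):
            if g == a:
                result[i] = "G"
            else:
                unmatched[a] += 1
        # Second pass: mark yellows, consuming the tally so repeated
        # letters are handled the way Wordle handles them.
        for i, g in enumerate(guess):
            if result[i] == "-" and unmatched[g] > 0:
                result[i] = "Y"
                unmatched[g] -= 1
        return "".join(result)

    def consistent(words, guess, observed):
        """Keep only the words that would have produced this feedback."""
        return [w for w in words if feedback(guess, w) == observed]

    # Hypothetical position: a toy list, nothing like the real one.
    words = ["gamut", "beaut", "debut", "rebut", "strut"]
    print(consistent(words, "about", feedback("about", "gamut")))
    # -> ['gamut']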

My own puzzle going into Turn 3 is where the A and the U go. How many English words end in "-UT"? I don't have a list in my head; I have to game it out.

[Wordle Bot screenshot]

The first word that comes into my head that meets the new conditions is unknown to the Bot, so it thinks I'm the stupid one. Typical. 

But don't tell me that wasn't skillful! The move gives me all the information I need to force the right answer. It must be GA_UT, because those are the only possibilities left for the vowels, and (bonus!) it must start with a G. All I need now is to hit on the right letter for the third position, and that's easy; I know the word perfectly well, though I couldn't conjure it out of the void.

[Wordle Bot screenshot]

And it's complaining I don't deserve it. I'm "luckier". 

But that's not the difference. It doesn't think like me, and I prefer to say that it doesn't think at all. It doesn't have to, because the answer is always there in the database. It just runs through the math and puts down the guess that covers the most bases, while I'm thinking somewhat blindly about letter frequency and familiar patterns. It substitutes computing power for cognition.
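"Covering the most bases" has a standard formalization in the solver literature, though The Times hasn't published the Bot's actual scoring, so take this as a guess at the genre: play the word whose feedback, averaged over the possible answers, leaves the fewest candidates alive. A sketch, with the same feedback routine as above and the same made-up word list:

    from collections import Counter

    def feedback(guess, answer):
        """Same two-pass G/Y/- scoring as in the earlier sketch."""
        result = ["-"] * len(guess)
        unmatched = Counter()
        for i, (g, a) in enumerate(zip(guess, answer)):
            if g == a:
                result[i] = "G"
            else:
                unmatched[a] += 1
        for i, g in enumerate(guess):
            if result[i] == "-" and unmatched[g] > 0:
                result[i] = "Y"
                unmatched[g] -= 1
        return "".join(result)

    def expected_remaining(guess, candidates):
        """Expected number of candidates surviving after playing `guess`."""
        buckets = Counter(feedback(guess, a) for a in candidates)
        n = len(candidates)
        # A feedback pattern shared by k candidates leaves k of them alive,
        # and that pattern comes up with probability k/n.
        return sum(k * k for k in buckets.values()) / n

    candidates = ["gamut", "beaut", "debut", "rebut", "strut"]
    best = min(candidates, key=lambda g: expected_remaining(g, candidates))
    print(best, expected_remaining(best, candidates))
    # -> rebut 1.0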

I'm glad I do cognition instead, even though the Bot beats me pretty often. I definitely have more fun than it does, though as fun goes it's pretty minimal.

***

A few days after drafting that bit, without having any plans for it, I ran into something by an often interesting Substacker who goes by Philosophy Bear (aiming, perhaps, at an intelligence that is the diametric opposite of artificial). Its very uncritical concept of "intelligence" irritated me into having some more thoughts, to the effect that what's unintelligent about AI is that its life is too easy: AI is unable to "think critically" (even as it's able to behave critically, as in passing judgment on my Wordle game) because it never has any problems that really matter to it, and it never will unless there's a radical change in the approach.

My argument got absolutely zero reaction in the Substack Notes, anyhow, so I thought I might recycle it over here. First, the Philosophy Bear passage that set me off:

A lot of people say LLMs are not AIs because they fail in various ways; they think this is the properly cynical position.

I think it’s not cynical enough; the concept of AGI is not that well specified. When people say “hallucinating means it’s not an AGI” I think they assume there’s a clear and demanding concept of “Artificial General Intelligence” they are appealing to, but I don’t know what that concept is, and it is very rarely spelt out. When it is spelt out, it sounds like a much higher bar than “Artificial General Intelligence”. I don’t see anything in the nature of LLMs that precludes them from being AGIs in the most straightforward sense. They are artificial, they are domain general, and they are capable of intelligent behaviour (AI is typically defined in terms of intelligent capabilities rather than possessing some intelligence essence). For almost any given task that can be done with written inputs and outputs there are countless people less capable than them. I don’t think vision is a requirement for general intelligence (blind people have general intelligence), and even if I did, plenty of LLMs have vision now. To the extent AGI has any meaning at all, LLMs are AGIs.

And my reply:

Really, it would be better to couch the argument in terms of the ways human intelligence is necessarily inferior to LLMs, for instance in the sheer limitation of our computing power to what we can process through the senses. It’s because we’re forced to work with woefully incomplete datasets that every toddler learns techniques of inference that work pretty well (but very far from perfectly). LLMs have no experience of inadequacy, of not knowing things; they just compute the probabilities from their models of what is truthlike and offer those, and that’s what the “hallucinations” are. Intelligence in the ordinary-language sense is about the ability to struggle. (Even etymologically, “intellego” is “selecting from among” the objects of a restricted perception and memory.)


I spend a lot of time, for employment reasons, using Google Translate, a very well-made LLM with limited functions. One of the things it can’t do right is assign gender in translating from Japanese to English, because Japanese rarely uses gendered pronouns. It guesses pretty well inside a sentence: if somebody is identified as a “female musician”, it will use “she” in that sentence. And I think it kind of succeeds with a source like Wikipedia by assuming that most articles are about males, since it doesn’t get marked wrong for the assumption, for the obvious sexist reason that most articles really are about males. But it doesn’t retain the fact that this particular person, the subject of the article, is female, and it switches genders randomly in the course of telling the story. It may learn generalities, but it doesn’t learn anything in this immediate, local way.


To the extent that machine learning is really “learning”, which I don’t think I want to dispute at all (they really do this, and it’s really remarkable), it’s not what I think phenomenologists call “embodied” learning, born out of a struggle with the material world and sensorimotor perception. All the LLM has is language, with nothing to check it against except more language. It doesn’t have a special status for “that which is the case” (Wittgenstein), though it may have preferred sentences and dispreferred sentences. It doesn’t have a way of predicting that a sentence (“All swans are green”) is dispreferred.


Natural intelligence—I want to say second-order intelligence, because it doesn’t apply to very many animals and has an evident meta aspect—is semiotic, involving mapping sentences against “that which is the case”, and I don’t see how AI can be endowed with that. I understand there are these very cool programs for mapping visual input (like a CAT scan) with language. But not against it.

Cross-posted at The Rectification of Names.
