What to make of these efforts that edge up to passing the Turing Test?
Here’s GPT-3 from OpenAI.
That got Facebook AI’s Jerome Pesenti all a-twitter over the need to button down such efforts in order to button up GPT-3’s lip.
“This is a bizarre anthropomorphic view that makes little sense. AIs are not people but algorithms created by humans making deliberate design choices (e.g., model, objective, training data). When AIs make sexist or racist statements, these humans should be responsible for it.”

He was replying to Paul Graham (@paulg), who had tweeted:

“People get mad when AIs do or say politically incorrect things. What if it’s hard to prevent them from drawing such conclusions, and the easiest way to fix this is to teach them to hide what they think? That seems a scary skill to start teaching AIs.”
He should talk! By the looks of his photo he is himself a virtual creature. Does this guy look real or is he himself a deep fake?
Only an autist would buy the notion of the Turing Test in the first instance. The ability of a machine to carry on a conversation is not the same thing as having consciousness, being alive, and having agency. I am not saying a machine can never have human qualities, or be conscious in some fashion, or transcend humans in new ways only indirectly related to concepts like “agency” and “consciousness.” But a simulated conversation with a human doth not a human make.
A lot of the fear of “racist AI” does not relate to AI’s increasing ability to think, or appear to think, in fuzzy human ways. The panic is over AI’s traditional role as an objective number cruncher and pattern recognizer. There is no room in the current orthodoxy for inconvenient truths, so the bulk of the pushback from that crowd is effectively anti-science.
But the problem Pesenti is touching on is a deeper one. Actual people are not machines. They think in fuzzy ways. Memes spread among them in an almost organic fashion. Our Betters can do a good job of trying to herd opinions and manufacture consent, but at base human intelligence is protean. So Pesenti & co. have a problem on their hands. Right now it is all well and good to call for algorithms that force AI conversation into predictable channels. That is a machine analogy to thought control, and I suppose if elites use thought control on people they will try it on machines as well. But if their true aim is the development of something deeply resembling human intelligence, they won’t be able to stop AI from noticing things, and from talking about them.