Last night I started re-reading Heinlein’s The Moon Is a Harsh Mistress. It’s an old book, written in the mid-1960s, so there are certainly some dated ideas about computers and computing in it. There is some hand-waving about how AI works and how the computer, Mike (Mycroft Holmes), becomes self-aware. It is fiction, after all. The computer is trying to understand humor, and a human friend is trying to help. One way he helps is by getting lists of jokes and reporting back on which ones are funny, which are funny once, and which are always funny. That sounds a bit like the machine learning we see today, I think.
In any case, even if the computer does understand humor at some level and is able to create jokes that people find funny, does that mean it has a real sense of humor? Will it be able to laugh at jokes that are not in its system? I wonder.
This makes me wonder about other things. We know that AI has been able to write music that people enjoy and create art that looks like it was done by master artists. But is creating art or music the same as enjoying art and music? Maybe not. Human artists “hear” music in their heads before they write it down or play it. Beethoven wrote music while he was deaf and so could not hear it being played.
I’ve been to a number of wine tastings. I don’t like wine. No matter how many times I taste it, I just don’t enjoy it. Listening to wine experts talk about wine and tasting it myself, I think (I could be wrong) I could learn to identify the wines that wine lovers like. I don’t see myself enjoying the process very much, though. Understanding is not the same as enjoying. Is it the same for computers and AI? I think so. Recognizing beauty or humor or music is not the same as enjoying it.
The difference between humans and AI is that humans enjoy their creations. And they enjoy the creations of others. If we think of creating beauty as enough to make something human-like, I think we have a narrow view of humans. What do you think?
Hi Alfred, fun blog as always! Perhaps this question transcends CS and should be dealt with by philosophers :)
Reading this did bring to mind this article from MIT Tech Review from a few years ago - https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/. A bit more unsettling. We are comfortable treating other people's minds as black boxes since we assume they are similar to our own in some shape or fashion. But not knowing anything about how a system is reaching its conclusions gives me a queasy knot in my gut...
Is Heinlein the first sci-fi author to use AI as a main theme in a novel? This was 1966, when the computer was a pretty ill-defined device. He was quite the oracle. When AI gets to the point where it can enjoy "life," that is the point where humans have to worry.
Philosophers make great programmers. They're all about logic. I sometimes wonder if philosophy should be required for software people. Not just ethics, which obviously should be required, but also deeper thinking about what we are doing and our place in the world.
Garth, Asimov wrote his Three Laws of Robotics in 1942. But Heinlein was earlier than most.
That is right! I had forgotten how early Asimov's writings were. The older I get, the less I think things are old. And I just re-read Foundation this summer.