
Very Artificial Intelligence

Mark Mzyk | March 11, 2008

Thanks to the blog Futurismic, I came across this article about a bot that has been programmed to pass the false belief test.  Humans typically pass the test at about the age of four or five.  It essentially consists of figuring out that other people see the world differently than you do.

In the test as set up in Second Life, the bot is shown a gun in a red suitcase.  There are two observers with the bot.  One observer leaves, and the gun is transferred to a green suitcase.  The observer who left is then called back.  Upon the observer's return, the bot is asked which suitcase that observer will look for the gun in.  The bot correctly answers the red suitcase, since that is the last suitcase the observer saw the gun in, even though the bot knows the gun is now in the green suitcase.

To achieve this reasoning, the bot has been seeded with a logic rule: if an observer sees something, they know it; if they don't see it, they don't know it.

Apparently some people are calling this a great AI achievement.  Am I missing something here?  I don't understand what is so great about it.  It seems to me the same behavior could be accomplished with a state machine.  Just what are research dollars being used for these days?
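To make the state-machine point concrete, here is a minimal sketch of that bookkeeping in Python.  The class and method names are my own invention, not anything from the actual project; it simply encodes the seeded rule that an observer's belief only updates while they are present to see the change:

```python
class BeliefTracker:
    """Tracks where each observer last saw an object."""

    def __init__(self, observers, location):
        self.actual = location
        self.present = set(observers)  # observers currently in the room
        # Everyone starts out having seen the object in its initial spot.
        self.belief = {o: location for o in observers}

    def leave(self, observer):
        self.present.discard(observer)

    def enter(self, observer):
        self.present.add(observer)

    def move_object(self, new_location):
        self.actual = new_location
        # Only observers who are present see the move, so only their
        # beliefs update -- absent observers keep their stale belief.
        for o in self.present:
            self.belief[o] = new_location

    def where_would_look(self, observer):
        return self.belief[observer]


# Replaying the Second Life scenario from the article:
t = BeliefTracker(["bot", "observer"], "red suitcase")
t.leave("observer")
t.move_object("green suitcase")
t.enter("observer")
print(t.where_would_look("observer"))  # red suitcase
print(t.where_would_look("bot"))       # green suitcase
```

A few dozen lines of dictionary updates reproduce the reported behavior, which is exactly why I'm skeptical that passing the test, by itself, is a milestone.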

Please enlighten me if I'm missing something.  Or, if you know technical details that might fill in the gaps in my knowledge and make this more impressive, let me know.

Filed in: Technology.

One Comment

  1. Comment by Tom R:

    I would tend to agree that the AI for this is not complicated to write. As some of the comments to that article mention, creating an AI which could progress on its own from failing to passing this test – i.e. learning – would be a true milestone.

    All that glitters is not gold…

    March 11, 2008 @ 23:09