
The Walker may be able to navigate a wumpus world in principle, but does that make it intelligent? We don't think so. Since its task is relatively simple, our first instinct is to reject any idea that it is intelligent or has a mind. After all, translating our ideas into a working algorithm could be quite frustrating, rather like explaining them to a particularly stupid baby. Moreover, the robot does only what we tell it to do. The Chinese room argument comes to mind: because the Walker's program consists of a list of very simple instructions, it could easily be executed by an unintelligent human. We, not the robot, solved the wumpus world problem.
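To illustrate how mechanical such a program can be, here is a hypothetical sketch of the kind of fixed rule table that "a list of very simple instructions" amounts to. The percept names and actions below are our own assumptions for illustration, not the Walker's actual code:

```python
def next_action(percept):
    """Map a set of percepts to an action using fixed, mindless rules.

    This is an illustrative sketch only: the percept and action names
    are invented, and the real Walker program may differ entirely.
    """
    if "stench" in percept or "breeze" in percept:
        return "turn-around"   # possible wumpus or pit ahead: retreat
    if "glitter" in percept:
        return "grab"          # gold in this square: pick it up
    if "bump" in percept:
        return "turn-left"     # walked into a wall: change direction
    return "forward"           # nothing notable: keep walking
```

A person with no understanding of wumpus worlds could follow these rules by hand, square by square, and produce the same behavior as the robot, which is exactly the point of the Chinese room comparison.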


But what separates the Walker from a more intelligent AI? First, AI programs are far more complex than our algorithm, so perhaps a meaningful distinction can be drawn there. They also employ machine learning and artificial neural networks, which means the program figures out things that were never directly coded into it and grows beyond its original capabilities. The Walker does none of this, so we think it is more like a calculator than a brain and is not intelligent. For the same reasons, and because the problem is simple compared to what current programs can do, the project did not give us any new insight into the feasibility of AI.


But maybe the robot can still have a mind without being intelligent. Certainly, on the two-dimensional scale of experience and agency given in The Mind Club, robots are at least perceived to have some mind. Once again, however, in this case it makes more sense to view the Walker as a lifeless automaton that works exactly as we set it up to work. There is no mystery about what it does (except when there is a bug we can't seem to locate), and no uncertainty about what it will do next. A more complex AI might have a better case, but we think the Walker can be ruled out as having a mind.
