
keep moving

December 19, 2005

I had the opportunity to see an interesting talk by Dario Floreano, the director of the Laboratory of Intelligent Systems in the School of Engineering at EPFL (L’Ecole Polytechnique Fédérale de Lausanne). He is interested in what makes biological and artificial systems intelligent, and has done some really interesting work in the field of computer vision.

Vision, it turns out, is one thing computers still can't do very well. Certain techniques can handle facial recognition or distinguish simple shapes, but they fall badly short on other tasks, such as counting the number of people in a room (especially when several are partly occluded by obstacles).

The main problem seems to be the complexity of the visual scene. The old saying "a picture is worth a thousand words" is an enormous understatement: there is tremendous diversity in color and shape. On top of that come the complexities of scene dynamics (things moving around or flying through the air) that we recognize instantly. Finally, add in the fact that our own visual system fuses two simultaneous "vision feeds" to infer object positions in three dimensions. It's pretty amazing that our brain handles all of this nearly instantaneously, isn't it?

Dario seems to have hit on a crucial behavior of biological life forms that helps the "training" of the vision centers in brains. He's built a number of robots that learn to find patterns by moving towards them in a room. The neural nets that form the basis of the robots' decision-making are trained to keep moving forward; if a wall is in their path, they try to steer around it while preserving as much forward momentum as possible. Their vision processors are very simple, and the features of the room are not programmed in (i.e., the robots are not "taught" what a wall looks like). The cameras they see with are mounted on tiny servomotors that can point the lens in nearly any direction. The robots eventually train themselves to circle the periphery of the room without touching the walls, positioning the camera at the optimal view for discerning a wall boundary (the darkened corner where the wall meets the floor).
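To make the idea concrete, here is a deliberately toy sketch of the training loop described above. It is not Floreano's actual setup: the "vision" is reduced to three hypothetical distance readings, the "brain" is one linear unit per wheel, and the training is simple hill-climbing that rewards forward progress in a square room and penalizes touching a wall.

```python
import random

ROOM = 10.0  # side length of a square room (arbitrary units)

def sense(x, y):
    # Three crude sensor readings: distances to the two side walls
    # and to the wall ahead (the robot nominally faces +x here,
    # a deliberate simplification of real vision).
    return [y, ROOM - x, ROOM - y]

def drive(weights, inputs):
    # One linear unit per wheel; no hidden layer, by design.
    left = sum(w * s for w, s in zip(weights[0], inputs))
    right = sum(w * s for w, s in zip(weights[1], inputs))
    return left, right

def fitness(weights, steps=50):
    x, y = 1.0, ROOM / 2
    travelled = 0.0
    for _ in range(steps):
        l, r = drive(weights, sense(x, y))
        dx = max(min((l + r) / 2, 0.5), -0.5)  # clamp speed
        dy = max(min((r - l) / 2, 0.5), -0.5)  # crude differential steering
        x, y = x + dx, y + dy
        if not (0 < x < ROOM and 0 < y < ROOM):
            return travelled - 10.0            # wall contact: big penalty
        travelled += dx                        # reward forward motion only
    return travelled

random.seed(0)
best = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
init_fit = best_fit = fitness(best)
for _ in range(200):
    # Mutate the weights; keep the mutant only if it scores better.
    cand = [[w + random.gauss(0, 0.2) for w in row] for row in best]
    f = fitness(cand)
    if f > best_fit:
        best, best_fit = cand, f

print(init_fit, best_fit)
```

The point of the sketch is the shape of the objective, not the numbers: nothing tells the controller what a wall "is"; it only ever sees raw sensor values and a fitness score that favors sustained forward motion.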

The main point of his experiments wasn't to have robots avoid walls; the Roomba already does that fairly well using other forms of sensing. Dario is trying to show how vision evolves in biological life forms, and the key to the emergence of an efficient, effective visual processing system is movement. In fact, I would be very interested to hear of any life form with a sophisticated vision system that doesn't move. (I checked out starfish, but they don't count: they have no brain, and their "eyes" are just light-sensitive patches.)

In the context of vision, the key feature of movement is that it magnifies the patterns that prove useful to the entity. Basically, if you see something interesting, move towards it; it's probably better than all the boring nothingness around you. For simple life forms, this is apparently a solid gamble.

The notion extends beyond literal vision: it applies to any situation in which an entity is trying to make sense of its environment, or, more generally, of anything it finds interesting.

Dario also studied what happened when similar robotic vision units were immobile. The results were nowhere near as good as with the mobile robots. Furthermore, and here's the kicker, the immobile robots' performance improved once some of their weaker neural-net connections were severed. Because these units couldn't move, they were forced to deal with more peripheral noise, and those weak connections often drowned out what would have been a better decision.

The talk made me think of a couple of my own projects in different ways. And I’m thinking about how to emulate “movement” in other information retrieval domains, but I’m not having much luck.

So instead, here’s a haunting moral for the day, as taught to me by one of these intelligent robots:

Figure out what's interesting to you, and head straight for it. And every now and then, sever some of your weaker neural connections (your lower-priority concerns).

From → HCI
