Walking, talking androids have been a sci-fi staple for decades, but as John Pavlus reports, building one in reality is still a matter of getting the right parts and smarts.
– We made you ’cause we could.
– Can you imagine how disappointing it would be for you to hear the same thing from your creator?
In Prometheus, Ridley Scott’s film about a space expedition searching for the origins of human life, the elegant, Lawrence of Arabia-loving android David learns from a crew member the possible motive behind his own creation – and understandably finds it less than inspiring.
But the idea of creating intelligent
robots has fired human imagination for decades. These robots have taken
many forms in speculative fiction, from the seductive charms of Futura
in Fritz Lang’s masterpiece Metropolis to the urbane, existential angst
of David in Prometheus. In reality, though, how far have we progressed
towards being able to create an intelligent robot just “’cause we
could”?
To understand where we are now, we
have to go back about twenty years, to a time when artificial
intelligence research was in crisis. Rodney Brooks, then a professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, wrote a landmark paper in 1990
stating: “Artificial Intelligence research has foundered in a sea of
incrementalism… small AI companies are folding, and attendance is well
down at national and international Artificial Intelligence conferences…
What has gone wrong?”
The problem, as Brooks saw it, was that the type of research inspired by Alan Turing’s famous artificial intelligence test
had hit a dead end. The Turing test directed decades of AI efforts
towards devising computer systems that “thought” by solving logic
problems – focusing on the “sea of symbols”, as Brooks put it, that were
believed to undergird intelligence. These systems could shuffle and sort
information with dizzying speed, giving them the appearance of
intelligence when performing certain abstract tasks (like playing
chess). But when it came to “common sense” intelligence – the kind we
rely on when selecting a book from a bookshelf, distinguishing a cat
from a dog or a rock, or holding a glass of water without dropping or
crushing it – this symbolic, Turing-style AI couldn’t cope.
Get smart
A better alternative for AI was to
take a “situated” route, as Brooks called it. The first order of
business: forget about building brains that can solve logical problems.
Instead, focus on building bodies that can deal with and respond to the physical world. In other words: build robots.
There’s something about an embodied agent that seems more “intelligent”, in a general sense, than any algorithm. IBM’s Watson system may be able to beat humans at Jeopardy! with its deep reservoir of facts – an impressive simulation of “book smarts”. But Boston Dynamics’ Big Dog robot, manoeuvring itself
sure-footedly up hills and around unfamiliar obstacles, and even
maintaining its balance when shoved by its human companion, actually
seems to be smart – at least, in the same way a dog or horse is.
“One kind of smart has to do with
knowing a lot of facts and being able to reason and solve problems;
another kind of smart has to do with understanding how our bodies work
and being able to control them,” says Marc Raibert, CEO of Boston
Dynamics. “That kind of smart helps people and animals move with
remarkable mobility, agility, dexterity, and speed.”
When Brooks wrote about this new kind of artificial intelligence in his 1990 paper, he introduced half a dozen robots that look like Big Dog’s evolutionary ancestors. One of them was Genghis,
a six-legged insect-like robot that could autonomously negotiate
unfamiliar terrain in an eerily lifelike way, without any high-order
processing or centralized control system. All it had were lots of simple
sensors “tightly coupled” to motor controllers in each leg, loosely
connected in a “nerve-like” network to pass sensory information between
the motors, “without any attempt at integration”.
This primitive-seeming architecture,
wrote Brooks, was the key to someday building artificially intelligent
robots: Parts before smarts.
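To make the idea concrete, here is a minimal sketch in Python – invented purely for illustration, not Genghis’s actual control code; the sensor names, thresholds, and neighbour rule are all assumptions. It shows the flavour of tightly coupled, decentralized control: each leg’s reflex reads only its own load sensor and one neighbour’s state, and a gait emerges with no central planner or integration step.

```python
# A toy sketch (not Brooks's code) of "parts before smarts": each leg
# couples its own sensor directly to its motor, and legs share only a
# simple grounded/swinging signal with a neighbour -- no central model
# of the terrain, no integration step.

class Leg:
    def __init__(self, name):
        self.name = name
        self.swinging = False      # is the leg in its swing phase?
        self.load_sensor = 0.0     # force felt at the foot (hypothetical sensor)

    def reflex(self, neighbour_grounded):
        """Tight sensor-to-motor coupling: lift only when unloaded and
        the neighbouring leg is supporting the body."""
        if not self.swinging and self.load_sensor < 0.1 and neighbour_grounded:
            self.swinging = True   # motor command: swing forward
        elif self.swinging and self.load_sensor > 0.5:
            self.swinging = False  # foot hit ground: enter stance

def step(legs, sensor_readings):
    """One control tick. Each leg reacts locally; the only 'network'
    is each leg knowing whether its neighbour is on the ground."""
    for i, leg in enumerate(legs):
        leg.load_sensor = sensor_readings[i]
        neighbour = legs[(i + 1) % len(legs)]
        leg.reflex(neighbour_grounded=not neighbour.swinging)

legs = [Leg(f"leg{i}") for i in range(6)]
# A tripod-like gait emerges from local reflexes reacting to (simulated) foot loads:
for readings in ([0.8] * 6, [0.05, 0.8, 0.05, 0.8, 0.05, 0.8]):
    step(legs, readings)
    print(["swing" if leg.swinging else "stance" for leg in legs])
```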
Brooks’s insight paved the way for Boston Dynamics’ lifelike robots, as well as Brooks’s own iRobot Corporation (which manufactures Roombas and bomb-defusing robots for the military). And yet a truly intelligent robot – with parts and smarts equivalent even to those of a domestic dog – has yet to be built. Why? Not because situated AI turned out to be yet another dead end, but because it addressed a newer, harder problem, known as Moravec’s Paradox.
“It is comparatively easy to make computers exhibit adult level
performance on intelligence tests or playing checkers, and difficult or
impossible to give them the skills of a one-year-old [human] when it
comes to perception and mobility,” roboticist Hans Moravec wrote in
1988.
Acting human
So how can we solve Moravec’s Paradox?
One approach is to take the assumptions of situated AI to their logical endpoint: if we want to build a robot with human-like intelligence, first build a robot with human-like anatomy. A team of European
researchers has done just that: their ECCERobot
(Embodied Cognition in a Compliantly Engineered Robot) has a
thermoplastic skeleton complete with vertebrae, phalanges, and a
ribcage. Instead of rigid motors, it has muscle-like actuators and
rubber tendons. It has as many degrees of freedom as a human torso; it
flops into a heap when its power is turned off, just like an unconscious
human would. And most importantly, all of these parts are studded with
sensors.
“The patterns of sensory stimulation
that we generate from moving our bodies in space and interacting with
our environment are the basic building blocks of cognition,” says Rolf
Pfeifer, a lead researcher on ECCERobot. “When I grasp a cup, I am
inducing sensory stimulation in the hand; in my eyes, from seeing how
the scene changes; and proprioceptively [in my muscles], since I can
feel its weight.”
These sensory patterns are the raw
material for the brain to learn something about the environment and how
to make distinctions in the real world, says Pfeifer, and these patterns
depend strongly on the particular actions we perform with our
particular body parts. “So if we want the robot to acquire the same
concepts that we do,” he says, “it would have to start by generating the
same sensory patterns that we do, which implies that it would need to
have the same body plan as we do.”
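As a toy illustration of Pfeifer’s claim – invented for this article, not drawn from his research; the sensor channels and body parameters below are all assumptions – consider how the multimodal pattern induced by a single grasping action changes with the body performing it:

```python
# A toy illustration: the "raw material" for learning is the bundle of
# sensory channels an action induces, and that bundle depends on the
# body doing the acting.

from dataclasses import dataclass

@dataclass
class SensoryPattern:
    touch: float           # pressure at the fingertips / gripper pads
    visual_shift: float    # how much the scene changes in the camera
    proprioception: float  # felt load in the actuators

def grasp(cup_weight, body):
    """The same action on the same object yields different patterns for
    different bodies: a five-fingered hand spreads load and touch
    differently than a two-jaw gripper."""
    return SensoryPattern(
        touch=cup_weight / body["contact_points"],
        visual_shift=body["arm_reach"] * 0.1,
        proprioception=cup_weight * body["actuator_gain"],
    )

human_hand = {"contact_points": 5, "arm_reach": 0.7, "actuator_gain": 1.0}
two_jaw_gripper = {"contact_points": 2, "arm_reach": 1.2, "actuator_gain": 3.5}

# A learner only ever sees these patterns, so what it can learn about
# "cup" is shaped by the body that generated them:
print(grasp(0.3, human_hand))
print(grasp(0.3, two_jaw_gripper))
```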
For now, ECCERobot’s humanoid
physiology is so difficult to control that it can barely pick up an
object, much less exhibit intelligent behaviour. But Pfeifer and his
team aren’t the only ones exploring this “anthropomimetic” strategy:
Boston Dynamics, the same firm that created Big Dog, is working with
DARPA, the US military’s research wing, to develop a humanoid robot called ATLAS, which will “use the arms in conjunction with the legs to get higher levels of rough-terrain locomotion,” says Raibert.
In any case, says Pfeifer, building an
intelligent humanoid robot – one that “can smoothly interact with
humans and human environments in a natural way” – will require
breakthroughs in computing and battery efficiency, not to mention a
quantum leap in sensory equipment. “A really crucial development will be
skin,” he says. “Skin is extremely important in the development of
intelligence because it provides such rich sensory patterns: touch,
temperature, pain, all at once.”
A robot with skin and human-like internal anatomy starts to sound less like a robot and more like a synthetic organism – much like David in Prometheus. Which
takes us back to the question he asks in the film. Or as Pfeifer more
pragmatically puts it: “Why build a robot which is a very fragile and
expensive copy of a human being?”
It is a very useful goal, Pfeifer
argues. “Even if we still mostly want robots to do specialized tasks,
there will be tons of spinoffs from an understanding of humanoid,
intelligent behaviour. Yes, we’ll draw inspiration from biology. But
that doesn’t imply that we won’t go beyond it.”
via: bbc.com