
Robotics and artificial intelligence

A robot is a sophisticated machine, or a set of machines working
together, that performs certain tasks. Some people imagine robots as having two legs, a
head, two arms with functional end effectors (hands), an artificial voice, and an electronic
brain. The technical name for a humanoid robot is android. Androids are within
the realm of technological possibility, but most robots are structurally simpler than, and
don’t look or behave like, people.
An electronic brain, also called artificial intelligence (AI), is more than mere fiction,
thanks to the explosive evolution of computer hardware and software. But even
the smartest computer these days would make a dog seem brilliant by comparison.
Some scientists think computers and robots might become as smart as, or smarter than,
human beings. Others believe the human brain and mind are more complicated than
anything that can be duplicated with electronic circuits.

Asimov’s three laws
In one of his early science-fiction stories, the famous author Isaac Asimov first mentioned
the word robotics, along with three “fundamental rules” that all robots ought
to obey:

• First Law: A robot must not injure, or allow the injury of, any human
being.
• Second Law: A robot must obey all orders from humans, except orders
that would contradict the First Law.
• Third Law: A robot must protect itself, except when to do so would contradict
the First Law or the Second Law.
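The three laws form a strict precedence hierarchy: the First Law overrides the Second, which overrides the Third. As a toy sketch only (nothing like this appears in Asimov's stories), that ordering could be expressed as a short rule check; every field name below is a made-up illustration:

```python
# Toy sketch: Asimov's three laws as priority-ordered rules.
# All field names are hypothetical illustrations.

def permitted(action):
    """Check a proposed action against the three laws, in order of precedence."""
    # First Law: never injure a human, or allow one to be injured.
    if action["injures_human"] or action["allows_human_injury"]:
        return False
    # Second Law: obey human orders (the First Law already screened them).
    if action["ordered_by_human"]:
        return True
    # Third Law: self-preservation, unless it conflicts with the laws above.
    return not action["endangers_self"]

# An order that endangers the robot itself is still permitted:
print(permitted({"injures_human": False, "allows_human_injury": False,
                 "ordered_by_human": True, "endangers_self": True}))  # True
```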

Autonomous robots
A robot is autonomous if it is self-contained, housing its own computer system, and
not depending on a central computer for its commands. It gets around under its own
power, usually by rolling on wheels or by moving on two, four, or six legs. The more complex
the task, and the more different things a robot must do, the more autonomy it will have.
The most autonomous robots have AI. The ultimate autonomous robot will act like a living
animal or human. Such a machine has not yet been developed, and it will probably
be at least the year 2050 before this level of sophistication is reached.

Androids
An android is a robot, often very sophisticated, that takes a more or less human form.
An android usually propels itself by rolling on small wheels in its base. The technology
for fully functional arms is under development, but the software needed for their
operation has not been made cost-effective for small robots.

An android has a rotatable head equipped with position sensors. Binocular, or
stereo, vision allows the android to perceive depth, thereby locating objects anyplace
within a large room. Speech recognition and synthesis are common.
Because of their humanlike appearance, androids are ideal for use wherever there
are children. Androids, in conjunction with computer terminals, might someday replace
school teachers in some situations.

Robot arms
Robot arms are technically called manipulators. Some robots, especially industrial
robots, are nothing more than sophisticated manipulators. A robot arm can be categorized
according to its geometry. Some manipulators resemble human arms. The
joints in these machines can be given names like “shoulder,” “elbow,” and “wrist.”
Other manipulators are so much different from human arms that these names don’t
make sense. An arm that employs revolute geometry is similar to a human arm, with
a “shoulder,” “elbow,” and “wrist.” An arm with cartesian geometry is far different
from a human arm, and moves along axes (x, y, z) that are best described as “up-and-down,”
“right-and-left,” and “front-to-back.”



Degrees of freedom
The term degrees of freedom refers to the number of different ways in which a robot
manipulator can move. Most manipulators move in three dimensions, but often
they have more than three degrees of freedom.

You can use your own arm to get an idea of the degrees of freedom that a robot arm
might have. Extend your right arm straight out toward the horizon. Extend your index
finger so it is pointing. Keep your arm straight, and move it from the shoulder. You can
move in three ways. Up-and-down movement is called pitch. Movement to the right and
left is called yaw. You can rotate your whole arm as if you were using it as a screwdriver.
This motion is called roll. Your shoulder therefore has three degrees of freedom: pitch,
yaw, and roll.
Now move your arm from the elbow only. This is hard to do without also moving
your shoulder. Holding your shoulder in a constant position, you will see that
your elbow joint has the equivalent of pitch in your shoulder joint. But that is all. Your
elbow, therefore, has one degree of freedom.
Extend your arm toward the horizon again. Now move only your wrist. Try to keep
the arm above the wrist straight and motionless. Your wrist can bend up-and-down,
side-to-side, and it can also twist a little. Your lower arm has the same three degrees of
freedom that your shoulder has, although its roll capability is limited.
In total, your arm has seven degrees of freedom: three in the shoulder, one in the
elbow, and three in the arm below the elbow.
You might think that a robot should never need more than three degrees of freedom.
But the extra possible motions, provided by multiple joints, give a robot arm versatility
that it could not have with only three degrees of freedom. (Just imagine how
inconvenient life would be if your elbow and wrist were locked and only your shoulder
could move.)
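Counting these motions in code makes the arithmetic explicit. A minimal sketch, following the joint counts above:

```python
# Degrees of freedom of the human-arm analogy described above.
arm_joints = {
    "shoulder": ["pitch", "yaw", "roll"],  # three degrees of freedom
    "elbow":    ["pitch"],                 # one degree of freedom
    "wrist":    ["pitch", "yaw", "roll"],  # three (roll is limited, but counts)
}

total = sum(len(motions) for motions in arm_joints.values())
print("Total degrees of freedom:", total)  # 7
```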


Degrees of rotation
The term degrees of rotation refers to the extent to which a robot joint, or a set of
robot joints, can turn clockwise or counterclockwise about a prescribed axis. Some
reference point is always used, and the angles are given in degrees with respect to
that reference point.
Rotation in one direction (usually clockwise) is represented by positive angles;
rotation in the opposite direction is specified by negative angles. Thus, an angle of
+58 degrees refers to a rotation of 58 degrees clockwise with respect to the reference
axis, and an angle of −74 degrees refers to a rotation of 74 degrees counterclockwise.
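A short sketch of this sign convention, using the two angles from the example above:

```python
# Positive angles = clockwise, negative angles = counterclockwise,
# both measured from the reference axis.

def describe_rotation(angle_degrees):
    if angle_degrees >= 0:
        return f"{angle_degrees} degrees clockwise"
    return f"{-angle_degrees} degrees counterclockwise"

print(describe_rotation(58))   # 58 degrees clockwise
print(describe_rotation(-74))  # 74 degrees counterclockwise
```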

Articulated geometry
The word articulated means “broken into sections by joints.” A robot arm with articulated
geometry bears some resemblance to the arm of a human. The versatility
is defined in terms of the number of degrees of freedom. For example, an arm might
have three degrees of freedom: base rotation (the equivalent of azimuth), elevation
angle, and reach (the equivalent of radius). If you’re a mathematician, you might recognize
this as a spherical coordinate scheme. There are several different articulated
geometries for any given number of degrees of freedom.
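To make the spherical scheme concrete, here is a minimal sketch that converts the three articulated coordinates into a position in space. It assumes, for simplicity, that all three joints act at a single point at the base; a real arm with offset links needs full forward kinematics.

```python
import math

def articulated_to_xyz(base_deg, elevation_deg, reach):
    """Map (base rotation, elevation angle, reach) to a point in space."""
    az = math.radians(base_deg)        # base rotation = azimuth
    el = math.radians(elevation_deg)   # elevation angle
    horizontal = reach * math.cos(el)  # projection onto the floor plane
    x = horizontal * math.cos(az)
    y = horizontal * math.sin(az)
    z = reach * math.sin(el)           # height above the base
    return x, y, z

print(articulated_to_xyz(45.0, 30.0, 1.0))  # about (0.61, 0.61, 0.50)
```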

Cartesian coordinate geometry
Another mode of robot arm movement is known as cartesian coordinate geometry
or rectangular coordinate geometry. This term comes from the cartesian system
often used for graphing mathematical functions. The axes are always perpendicular
to each other. Variables are assigned the letters x and y in a two-dimensional cartesian
plane, or x, y, and z in cartesian three-space. The dimensions are called reach
for the x variable, elevation for the y variable, and depth for the z variable.
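A cartesian arm addresses a target directly by its three linear coordinates. For comparison with the previous section, this sketch recovers articulated coordinates from a cartesian point, under the same single-pivot simplification as before:

```python
import math

def xyz_to_articulated(x, y, z):
    """Map a cartesian point back to (base rotation, elevation, reach)."""
    reach = math.sqrt(x**2 + y**2 + z**2)          # radius
    base_deg = math.degrees(math.atan2(y, x))      # azimuth
    elev_deg = math.degrees(math.asin(z / reach)) if reach else 0.0
    return base_deg, elev_deg, reach

print(xyz_to_articulated(0.61, 0.61, 0.50))  # about (45, 30, 1)
```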


Robotic hearing and vision
Machine hearing involves more than just the picking up of sound (done with a microphone)
and the amplification of the resulting audio signals (done with an amplifier).
A sophisticated robot can tell from which direction the sound is coming and
perhaps deduce the nature of the source: human voice, gasoline engine, fire, or barking
dog.

Binaural hearing
Even with your eyes closed, you can usually tell from which direction a sound is coming.
This is because you have binaural hearing. Sound arrives at your left ear with
a different intensity, and in a different phase, than it arrives at your right ear. Your
brain processes this information, allowing you to locate the source of the sound, with
certain limitations. If you are confused, you can turn your head until the sound direction
becomes apparent to you.
Robots can be equipped with binaural hearing. Two sound transducers are positioned,
one on either side of the robot’s head. A microprocessor compares the relative
phase and intensity of signals from the two transducers. This lets the robot determine,
within certain limitations, the direction from which sound is coming. If the robot is confused,
it can turn until the confusion is eliminated and a meaningful bearing is obtained.
If the robot can move around and take bearings from more than one position, a more accurate
determination of the source location is possible if the source is not too far away.
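A minimal sketch of the underlying trigonometry: for a distant source, the bearing follows from the arrival-time difference between the two transducers. The spacing and timing figures below are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, in air at room temperature

def bearing_from_delay(delay_s, spacing_m):
    """Angle of the source off the forward axis, from the inter-ear delay."""
    # Path-length difference between the two transducers:
    ratio = (delay_s * SPEED_OF_SOUND) / spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))

# Sound arriving at the right transducer 0.2 ms before the left one,
# with the transducers 20 cm apart:
print(f"{bearing_from_delay(0.0002, 0.2):.1f} degrees to the right")  # 20.1
```

Note that a single pair of transducers cannot distinguish front from back; that is the residual confusion the robot resolves by turning.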

Hearing and AI
With the advent of microprocessors that can compare patterns, waveforms, and
huge arrays of numbers in a matter of microseconds, it is possible for a robot to determine
the nature of a sound source, as well as where it comes from. A human voice
produces one sort of waveform, a clarinet produces another, a growling bear produces
another, and shattering glass produces yet another. Thousands of different
waveforms can be stored by a robot controller and incoming sounds compared with
this repertoire. In this way, a robot can immediately tell if a particular noise is a lawn
mower going by or a person shouting, an aircraft taking off or a car going down the
street.
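A rough sketch of that comparison, using normalized correlation against stored templates; the two "templates" here are synthetic stand-ins, not real recordings:

```python
import numpy as np

def best_match(sound, templates):
    """Return the name of the stored waveform most similar to the input."""
    def normalize(w):
        w = w - w.mean()
        return w / (np.linalg.norm(w) or 1.0)
    s = normalize(sound)
    scores = {name: float(np.dot(s, normalize(w)))
              for name, w in templates.items()}
    return max(scores, key=scores.get)

t = np.linspace(0, 1, 1000)
templates = {
    "lawn mower": np.sin(2 * np.pi * 90 * t),   # low drone
    "shout":      np.sin(2 * np.pi * 400 * t),  # mid-range tone
}
rng = np.random.default_rng(0)
noisy = np.sin(2 * np.pi * 400 * t) + 0.5 * rng.standard_normal(t.size)
print(best_match(noisy, templates))  # shout
```

A real system would compare spectral features rather than raw samples, but the matching principle is the same.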
Beyond this coarse mode of sound recognition, an advanced robot can identify a
person by analyzing the waveform of his or her voice. The machine can even decipher
commonly spoken words. This allows a robot to recognize a voice as yours or that of
some unknown person and react accordingly. For example, if you tell your personal robot
to get you a hammer and a box of nails, it can do so by recognizing the voice as yours
and the words as giving that particular command. But if a burglar comes up your walkway,
approaches your robot, and tells it to go jump in the lake, the robot can trundle off,
find you by homing in on the transponder you’re wearing for that very purpose, and let
you know that an unidentified person in your yard just told it to hydrologically dispose
of itself.

Visible-light vision
A visible-light robotic vision system must have a device for receiving incoming images.
This is usually a charge-coupled device (CCD) video camera, similar to the
type used in home video cameras. The camera produces an analog video signal. This
is processed into digital form by an analog-to-digital converter (ADC). The digital
signal is clarified by digital signal processing (DSP). The resulting data goes to the
robot controller. The moving image, received from the camera and processed by the
circuitry, contains an enormous amount of information. It’s easy to present a robot
controller with a detailed and meaningful moving image. But getting the machine’s
brain to know what’s happening, and to determine whether or not these events are
significant, is another problem altogether.
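The stages of that chain can be laid out as a skeleton; every stage below is a stand-in, since a real system would use actual capture hardware and signal-processing libraries:

```python
def capture_analog_frame():
    # Stand-in for the CCD camera's analog video signal (a few samples).
    return [0.12, 0.80, 0.33, 0.95]

def adc(analog, levels=256):
    # Analog-to-digital conversion: quantize each sample to an integer.
    return [min(levels - 1, int(v * levels)) for v in analog]

def dsp(digital):
    # Placeholder "clarification" step: a crude two-point smoothing.
    return [(a + b) // 2 for a, b in zip(digital, digital[1:] + digital[-1:])]

frame = dsp(adc(capture_analog_frame()))
print(frame)  # the clarified digital data handed to the robot controller
```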


Vision and AI
There are subtle things about an image that a machine will not notice unless it has
advanced AI. How, for example, is a robot to know whether an object presents a
threat? Is that four-legged thing there a big dog, or is it a bear cub? How is a robot to
forecast the behavior of an object? Is that stationary biped a human or a mannequin?
Why is it holding a stick? Is the stick a weapon? What does the biped want to do with
the stick, if anything? The biped could be a department-store dummy with a closed-up
umbrella or a baseball bat. It could be an old man with a cane. Maybe it is a hunter
with a rifle.
You can think up various images that look similar, but that have completely different
meanings. You know right away if a person is carrying a tire iron to help you fix a flat
tire, or if the person is clutching it with the intention of smashing something up. How is
a robot to determine subtle things like this from the images it sees? It is important for a
police robot or a security robot to know what constitutes a threat and what does not.
In some robot applications, it isn’t necessary for the robot to know much about
what’s happening. Simple object recognition is good enough. Industrial robots are programmed
to look for certain things, and usually they aren’t hard to identify. A bottle that
is too tall or too short, a surface that’s out of alignment, or a flaw in a piece of fabric is
easy to pick out.
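That kind of pass/fail check is easy to express. A sketch with made-up tolerances:

```python
# Industrial-style object check: is this bottle within height tolerance?
NOMINAL_HEIGHT_MM = 250.0
TOLERANCE_MM = 2.0

def bottle_ok(measured_height_mm):
    return abs(measured_height_mm - NOMINAL_HEIGHT_MM) <= TOLERANCE_MM

for height in (249.1, 254.8, 250.0):
    print(height, "pass" if bottle_ok(height) else "reject")
```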

Sensitivity and resolution
Sensitivity is the ability of a machine to see in dim light or to detect weak impulses
at invisible wavelengths. In some environments, high sensitivity is necessary. In others,
it is not needed and might not be wanted. A robot that works in bright sunlight
doesn’t need to be able to see well in a dark cave. A robot designed for working in
mines, pipes, or caverns must be able to see in dim light, using a system that might
be blinded by ordinary daylight.
Resolution is the extent to which a machine can differentiate between objects. The
better the resolution, the keener the vision. Human eyes have excellent resolution, but
machines can be designed with greater resolution. In general, the better the resolution,
the more confined the field of vision must be. To understand why this is true, think of a
telescope. The higher the magnification, the better the resolution (up to a certain maximum
useful magnification). Increasing the magnification reduces the angle, or field, of
vision. Zeroing in on one object or zone is done at the expense of other objects or zones.
Sensitivity and resolution are interdependent. If all other factors remain constant,
improved sensitivity causes a sacrifice in resolution. Also, the better the resolution, the
less well the vision system will function in dim light.
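One concrete place this trade-off appears in a digital sensor is pixel binning. Summing each 2 x 2 block of pixels gathers four times the light per output sample (better sensitivity in dim scenes) while halving the resolution in each direction; the image below is a made-up example.

```python
def bin_2x2(image):
    """Sum 2x2 blocks of pixels; image dimensions must be even."""
    return [
        [image[r][c] + image[r][c + 1] + image[r + 1][c] + image[r + 1][c + 1]
         for c in range(0, len(image[0]), 2)]
        for r in range(0, len(image), 2)
    ]

dim_scene = [[1, 2, 1, 0],
             [0, 1, 2, 1],
             [1, 1, 0, 0],
             [2, 0, 1, 1]]
print(bin_2x2(dim_scene))  # [[4, 4], [4, 2]]: brighter, but half the detail
```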

Invisible and passive vision
Robots have a big advantage over people when it comes to vision. Machines can see
at wavelengths to which humans are blind.
Human eyes are sensitive to electromagnetic waves whose length ranges from 390
to 750 nanometers (nm). The nanometer is a billionth (10^-9) of a meter, or a millionth
of a millimeter. The longest visible wavelengths look red. As the wavelength gets
shorter, the color changes through orange, yellow, green, blue, and indigo. The shortest
waves look violet. Energy at wavelengths somewhat longer than 750 nm is infrared
(IR); energy at wavelengths somewhat shorter than 390 nm is ultraviolet (UV).
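Those boundaries translate directly into a small classifier:

```python
# Wavelength bands, in nanometers, as given above.
def classify_wavelength(nm):
    if nm < 390:
        return "ultraviolet"
    if nm <= 750:
        return "visible"
    return "infrared"

for wavelength in (300, 550, 900):
    print(wavelength, "nm:", classify_wavelength(wavelength))
```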
Machines, and even nonhuman living species, often do not see in this exact same
range of wavelengths. In fact, insects can see UV that we can’t and are blind to red and
orange light that we can see. (Maybe you’ve used orange “bug lights” when camping to
keep the flying pests from coming around at night or those UV devices that attract bugs
and then zap them.) A robot might be designed to see IR or UV or both, as well as (or
instead of) visible light. Video cameras can be sensitive to a range of wavelengths much
wider than the range we see.
Robots can be made to “see” in an environment that is dark and cold and that radiates
too little energy to be detected at any electromagnetic wavelength. In these cases
the robot provides its own illumination. This can be a simple lamp, a laser, an IR device,
or a UV device. Or the robot might emanate radio waves and detect the echoes; this is
radar. Some robots can navigate via ultrasonic echoes, like bats; this is sonar.
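Both radar and sonar reduce to the same echo-ranging arithmetic: distance equals wave speed times round-trip time, divided by two. A quick sketch with standard wave speeds and illustrative timings:

```python
SPEED_OF_SOUND = 343.0  # m/s in air (sonar)
SPEED_OF_LIGHT = 3.0e8  # m/s (radar)

def echo_distance(round_trip_s, wave_speed):
    # The wave travels out and back, so divide the path by two.
    return wave_speed * round_trip_s / 2.0

print(f"Sonar: {echo_distance(0.02, SPEED_OF_SOUND):.2f} m")    # 3.43 m
print(f"Radar: {echo_distance(2.0e-6, SPEED_OF_LIGHT):.0f} m")  # 300 m
```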
