Monday, June 7, 2010
Haptic Systems
There are several approaches to creating haptic systems.
Although they may look drastically different, they all have
two important things in common -- software to determine the
forces that result when a user's virtual identity interacts
with an object and a device through which those forces can be
applied to the user. The actual process used by the software
to perform its calculations is called haptic rendering.
A common rendering method uses polyhedral models to represent
objects in the virtual world. These 3-D models can accurately
portray a variety of shapes, and the rendering software can
compute touch feedback by evaluating how lines of force
interact with the faces of each object. Such 3-D objects can
be made to feel solid and
can have surface texture.
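To make that idea concrete, here is a minimal sketch of one
widely used rendering technique, penalty-based force
computation: when the user's probe tip sinks into a face of
the model, the software pushes back with a spring force
proportional to the penetration depth. The function name and
the stiffness value are illustrative, not taken from any
particular haptics toolkit.

import numpy as np

def penalty_force(tip, face_point, normal, stiffness=800.0):
    """Spring-like force that pushes a probe tip back out of a surface.

    tip, face_point, normal: 3-vectors (metres); stiffness in N/m.
    Returns a zero force when the tip is outside the object.
    """
    n = normal / np.linalg.norm(normal)
    depth = np.dot(face_point - tip, n)   # penetration along the normal
    if depth <= 0.0:
        return np.zeros(3)                # no contact, no force
    return stiffness * depth * n          # Hooke's law: F = k * d

# Example: a tip 2 mm inside a horizontal face feels ~1.6 N upward.
print(penalty_force(np.array([0.0, 0.0, -0.002]),
                    np.zeros(3),
                    np.array([0.0, 0.0, 1.0])))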
The job of conveying haptic images to the user falls to the
interface device. In many respects, the interface device is
analogous to a mouse, except that a mouse is a passive device that
cannot communicate any synthesized haptic data to the user.
Let's look at a few specific haptic systems to understand how
these devices work.
*The PHANTOM® interface from SensAble Technologies was
one of the first haptic systems to be sold commercially. Its
success lies in its simplicity. Instead of trying to display
information from many different points, this haptic device
simulates touching at a single point of contact. It achieves
this with a stylus connected to a lamp-like arm.
Three small motors give force feedback to the user by
exerting pressure on the stylus. So, a user can feel the
elasticity of a virtual balloon or the solidity of a brick
wall. He or she can also feel texture, temperature and
weight.
The stylus can be customized so that it closely
resembles just about any object. For example, it can be
fitted with a syringe attachment to simulate what it feels
like to pierce skin and muscle when giving a shot.
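Under the hood, single-point devices like this one typically
run a fast servo loop: read where the stylus tip is, compute
one contact force, and command the motors, on the order of a
thousand times per second so surfaces feel crisp rather than
spongy. In the sketch below, the device and scene objects are
hypothetical stand-ins for a vendor's API, not real calls.

import time

def haptic_servo_loop(device, scene, rate_hz=1000):
    """Single-point haptic loop: read the stylus tip, compute one
    contact force, command the motors. `device` and `scene` are
    hypothetical stand-ins for a vendor API.
    """
    period = 1.0 / rate_hz
    while device.is_running():
        tip = device.read_stylus_position()  # 3-vector, metres
        force = scene.contact_force(tip)     # e.g. penalty_force above
        device.command_force(force)          # the three motors render it
        time.sleep(period)                   # crude pacing for the sketch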
*The CyberGrasp system, another commercially available
haptic interface from Immersion Corporation, takes
a different approach. This device fits over the user's entire
hand like an exoskeleton and adds resistive force feedback to
each finger. Five actuators produce the forces, which are
transmitted along tendons that connect the fingertips to the
exoskeleton. With the CyberGrasp system, users are able to
feel the size and shape of virtual objects that only exist in
a computer-generated world. To make sure a user's fingers
don't penetrate or crush a virtual solid object, the
actuators can be individually programmed to match the
object's physical properties.
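A rough sketch of that per-finger programming might look like
the following: each fingertip's resistive force grows with how
far it has pressed into the object and is capped at the
actuator's limit. The stiffness and force numbers are made-up
illustrations, not CyberGrasp specifications.

def finger_forces(penetrations, stiffnesses, max_force=12.0):
    """Resistive force for each of the five fingers in a grasp.

    penetrations: how far each fingertip has pressed into the virtual
    object (metres); stiffnesses: per-finger N/m values programmed to
    match the object's material; max_force caps each actuator (an
    assumed limit, not a CyberGrasp spec).
    """
    forces = []
    for depth, k in zip(penetrations, stiffnesses):
        f = k * max(depth, 0.0)           # resist only real penetration
        forces.append(min(f, max_force))  # actuator saturation
    return forces

# The same 3 mm squeeze on a brick versus a balloon:
print(finger_forces([0.003] * 5, [4000.0] * 5))  # brick: caps at 12 N
print(finger_forces([0.003] * 5, [200.0] * 5))   # balloon: a gentle 0.6 N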
*Researchers at Carnegie Mellon University are
experimenting with a haptic interface that does not rely on
actuated linkage or cable devices. Instead, their interface
uses a powerful electromagnet to levitate a handle that looks
a bit like a joystick. The user manipulates the levitated
tool handle to interact with computed environments. As she
moves and rotates the handle, she can feel the motion, shape,
resistance and surface texture of simulated objects. This is
one of the big advantages of a levitation-based technology:
It reduces friction and other interference so the user
experiences less distraction and remains immersed in the
virtual environment. It also allows constrained motion in six
degrees of freedom (compared to the entry-level PHANTOM
interface, which allows only three active degrees of freedom);
the sketch at the end of this section shows what the extra
degrees buy.
The one disadvantage of the magnetic levitation haptic
interface is its footprint. An entire cabinet is required to
house the maglev device, power supplies, amplifiers and
control processors. The user handle protrudes from a bowl
embedded in the cabinet top.
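The extra degrees of freedom matter because a force applied
off-center on a grasped tool also twists it. One simple way to
see this is the wrench, the force plus torque felt at the
handle, sketched below as an illustration rather than CMU's
actual control code.

import numpy as np

def contact_wrench(force, contact_point, handle_center):
    """Wrench (force plus torque) felt at a grasped handle when a
    contact force acts at an off-center point: tau = r x F. A 3-DOF
    device can render only `force`; a 6-DOF handle renders both.
    """
    r = contact_point - handle_center
    return force, np.cross(r, force)

# A 1 N sideways force on the tip of a 10 cm tool also twists it:
f, tau = contact_wrench(np.array([0.0, 1.0, 0.0]),
                        np.array([0.1, 0.0, 0.0]),
                        np.zeros(3))
print(f, tau)  # force [0, 1, 0] N, torque [0, 0, 0.1] N*m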
As you can imagine, systems like the ones we've described here
can be quite expensive. That means applications of the
technology are still limited to certain industries and
specialized types of training.
Applications of Haptic Technology
It's not difficult to think of ways to apply haptics. Video
game makers have been early adopters of passive haptics,
which takes advantage of vibrating joysticks, controllers and
steering wheels to reinforce on-screen activity. But future
video games will enable players to feel and manipulate
virtual solids, fluids, tools and avatars. The Novint Falcon
haptics controller is already making this promise a reality.
The 3-D force feedback controller allows you to tell the
difference between a pistol report and a shotgun blast, or to
feel the resistance of a longbow's string as you pull back
an arrow.
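As a sketch of how such effects might be built, a gunshot can
be rendered as a brief force spike while a bowstring is a
spring whose resistance grows with the draw. The peak forces,
durations and stiffness below are illustrative guesses, not
Novint's actual parameters.

def gunshot_recoil(t, peak=9.0, duration=0.05):
    """Brief recoil spike, t in seconds since the trigger pull.
    A shotgun would use a higher peak and longer duration than a
    pistol, so the two feel distinct. Numbers are illustrative.
    """
    if 0.0 <= t < duration:
        return peak * (1.0 - t / duration)  # sharp onset, linear decay
    return 0.0

def bowstring_force(draw, k=60.0):
    """Spring-like resistance that grows as the string is drawn back."""
    return k * max(draw, 0.0)               # F = k * x, toward the bow

print(gunshot_recoil(0.01))   # mid-impulse recoil: 7.2 N
print(bowstring_force(0.15))  # a 15 cm draw resists with 9 N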
Graphical user interfaces, like those that define Windows and
Mac operating environments, will also benefit greatly from
haptic interactions. Imagine being able to feel graphic
buttons and receive force feedback as you depress a button.
Some touchscreen manufacturers are already experimenting with
this technology. Nokia phone designers have developed
a tactile touchscreen that makes on-screen buttons behave as
if they were real buttons. When a user presses a button, he
or she feels the screen move in and then back out and hears
an audible click. Nokia engineers accomplished this by
placing two small piezoelectric sensor pads under the screen
and designing the screen so it could move slightly when
pressed. Everything -- movement and sound -- is synchronized
perfectly to simulate real button manipulation.
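A toy version of that synchronization, with stand-in driver
objects rather than Nokia's actual software, might look like
this:

import time

class Piezo:                     # stand-in for a real piezo driver
    def pulse(self, direction):
        print(f"piezo pulse {direction}")

class Audio:                     # stand-in for the phone's audio path
    def play(self, clip):
        print(f"play {clip}")

def on_button_press(piezo, audio):
    """The press/release feel described above: the screen nudges in
    on press and back out on release, with a click sound fired at
    the same instant so movement and audio stay synchronized.
    """
    piezo.pulse("in")            # screen dips slightly under the finger
    audio.play("click_down")     # sound matched to the inward movement
    time.sleep(0.03)             # brief hold; tens of ms feels button-like
    piezo.pulse("out")           # screen springs back
    audio.play("click_up")

on_button_press(Piezo(), Audio())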
Although several companies are joining Novint and Nokia in
the push to incorporate haptic interfaces into mainstream
products, cost is still an obstacle. The most sophisticated
touch technology is found in industrial, military and medical
applications. Training with haptics is becoming more and more
common. For example, medical students can now perfect
delicate surgical techniques on the computer, feeling what
it's like to suture blood vessels in an anastomosis or inject
BOTOX into the muscle tissue of a virtual face. Aircraft
mechanics can work with complex parts and service procedures,
touching everything that they see on the computer screen.
And soldiers can prepare for battle in a variety of ways,
from learning how to defuse a bomb to operating a helicopter,
tank or fighter jet in virtual combat scenarios.
Haptic technology is also widely used in teleoperation, or
telerobotics. In a telerobotic system, a human operator
controls the movements of a robot that is located some
distance away. Some teleoperated robots are limited to very
simple tasks, such as aiming a camera and sending back visual
images. In a more sophisticated form of teleoperation known
as telepresence, the human operator has a sense of being
located in the robot's environment. Haptics now makes it
possible to include touch cues in addition to audio and
visual cues in telepresence models. It may not be long before
astronomers and planetary scientists can hold and
manipulate a Martian rock through an advanced haptics-enabled
telerobot -- a high-touch version of the Mars Exploration
Rover.
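At the core of such a system is a bilateral control loop: the
operator's motion goes out to the robot, and the forces the
robot measures come back to the operator's haptic device. Here
is a minimal sketch of one cycle, with all four method calls
as hypothetical placeholders for whatever the real operator
station and robot expose.

def teleoperation_step(operator, robot, force_scale=1.0):
    """One cycle of a bilateral loop: position goes out, force comes
    back. All method calls here are hypothetical placeholders.
    """
    pose = operator.read_hand_pose()             # local: where the hand is
    robot.move_to(pose)                          # remote: robot follows
    contact = robot.read_force_sensor()          # remote: what it touches
    operator.apply_force(force_scale * contact)  # local: operator feels it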
The Importance of Haptic Technology
In video games, the addition of haptic capabilities is nice
to have. It increases the reality of the game and, as
a result, the user's satisfaction. But in training and other
applications, haptic interfaces are vital. That's because the
sense of touch conveys rich and detailed information about
an object. When it's combined with other senses, especially
sight, touch dramatically increases the amount of information
that is sent to the brain for processing. The increase in
information reduces user error, as well as the time it takes
to complete a task. It also reduces the energy consumption
and the magnitudes of contact forces used in a teleoperation
situation.
Clearly, Samsung is hoping to capitalize on some of these
benefits with the introduction of the Anycall Haptic phone.
Nokia will push the envelope even further when it introduces
phones with tactile touchscreens. Yes, such phones will be
cool to look at. And, yes, they will be cool to touch. But
they will also be easier to use, with the touch-based
features leading to fewer input errors and an overall more
satisfying experience.