
The characters in a game have skeletons. Like our own
skeleton, a character's skeleton is a hidden series of
objects that connect with and move in relation to each
other. Using a technique
called parenting, a target object (the child) is assigned to
another object (the parent). Every time the parent object
moves, the child object will follow according to the
attributes assigned to it. A complete hierarchy can be
created from objects that are both parents and children. In
a human character, for example, the hip might parent the
thigh, the thigh the shin, and the shin the foot.
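The parent/child relationship described above can be sketched as a small transform hierarchy. This is a minimal 2-D illustration, assuming each bone stores a rotation relative to its parent; the class and method names are my own, not from any real engine:

```python
import math

class Bone:
    """One node in a skeleton hierarchy (illustrative, not a real engine API)."""
    def __init__(self, name, length, angle=0.0, parent=None):
        self.name = name
        self.length = length   # distance from this joint to the bone's tip
        self.angle = angle     # local rotation in radians, relative to parent
        self.parent = parent

    def world_angle(self):
        # A child's orientation composes its local angle with every ancestor's.
        return self.angle + (self.parent.world_angle() if self.parent else 0.0)

    def tip(self):
        # The bone's end point follows every parent transform above it.
        base = self.parent.tip() if self.parent else (0.0, 0.0)
        a = self.world_angle()
        return (base[0] + self.length * math.cos(a),
                base[1] + self.length * math.sin(a))

# Build a tiny arm: shoulder -> elbow.
shoulder = Bone("upper_arm", length=1.0)
elbow = Bone("forearm", length=1.0, parent=shoulder)

# Rotating only the parent moves the child's tip too: the child follows.
shoulder.angle = math.pi / 2
print(elbow.tip())  # both bones now point straight up, so roughly (0.0, 2.0)
```

Because a child's world transform composes every ancestor's transform, rotating the shoulder alone moves the elbow's tip as well, which is exactly the "child follows parent" behavior that parenting provides.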
Once the skeleton is created and all of the parenting
controls put in place, the character is animated. Probably
the most popular method of character animation relies on
inverse kinematics. This technique moves the child object to
where the animator wants it, causing the parent object and
all other attached objects to follow. Another method that is
popular for games is motion capture, which uses a suit of
sensors on a real person to transmit a series of coordinates
to a computer system. The coordinates are mapped to the
skeleton of a game character and translated into fluid,
realistic motion.
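The inverse-kinematics idea can be sketched with Cyclic Coordinate Descent (CCD), one common IK solver; the source doesn't name a specific algorithm, and all names below are illustrative. The animator drags the end of the chain toward a target, and each parent joint rotates to follow:

```python
import math

def ccd_ik(lengths, angles, target, iterations=20):
    """Cyclic Coordinate Descent IK on a 2-D bone chain (a sketch, not
    production code). Returns the solved joint angles and end-effector tip."""
    angles = list(angles)

    def joints():
        # Positions of every joint, accumulating parent rotations down the chain.
        pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
        for length, ang in zip(lengths, angles):
            a += ang
            x, y = x + length * math.cos(a), y + length * math.sin(a)
            pts.append((x, y))
        return pts

    for _ in range(iterations):
        for i in reversed(range(len(angles))):
            pts = joints()
            jx, jy = pts[i]    # the joint being rotated
            ex, ey = pts[-1]   # the end effector (the "child" the animator drags)
            # Rotate this joint so the effector swings toward the target.
            cur = math.atan2(ey - jy, ex - jx)
            want = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += want - cur
    return angles, joints()[-1]

# A two-bone arm reaching for a point: the hand goes where we want,
# and the parent joints rotate to follow.
angles, tip = ccd_ik([1.0, 1.0], [0.3, 0.3], target=(1.0, 1.0))
print(tip)  # very close to (1.0, 1.0)
```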
Each character's range of motion is programmed into the game.
Here's a typical sequence of events:
* You press a button on the controller to make the
character move forward.
* The button completes a circuit, and the controller
sends the resulting data to the console.
* The controller chip in the console processes the data
and forwards it to the game application logic.
* The game logic determines the appropriate action at
that point in the game (move the character forward).
* The game logic analyzes all factors involved in making
the movement (shadows, collision models, change of viewing
angle).
* The game logic sends the new coordinates for the
character's skeleton, and all other changes, to the rendering
engine.
* The rendering engine renders the scene with new
polygons for each affected object, redrawing the scene about
60 times each second.
* You see the character move forward.
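The sequence of steps above can be sketched as a simple frame loop. This is a toy sketch, not real console code; every function name here is invented for illustration, and a real engine would split these stages across dedicated hardware and threads:

```python
def read_input():
    # In a console, the controller chip would deliver button state here.
    return {"forward": True}

def update_logic(state, buttons, dt):
    # The game logic decides the action and computes the character's
    # new coordinates (a real engine also updates shadows, collisions,
    # and the viewing angle here).
    if buttons["forward"]:
        state["x"] += state["speed"] * dt   # move the character forward
    return state

def render(state):
    # The rendering engine would rebuild the scene's polygons here.
    pass

def run(frames=3, fps=60):
    state = {"x": 0.0, "speed": 2.0}
    dt = 1.0 / fps                          # about 60 redraws per second
    for _ in range(frames):
        state = update_logic(state, read_input(), dt)
        render(state)
        # A real loop would also sync to the display to hold the frame rate.
    return state

print(run()["x"])  # 3 frames of forward movement at speed 2.0, so about 0.1
```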

We've all experienced the fluid-dynamics phenomenon known as
the "teapot effect." Every time you pour out a nice relaxing
cup of tea, a little of the elixir dribbles down the outside
of the spout of the teapot, dampening your doily and your
spirits.
It happens because liquid clings to the lip of the spout
instead of exiting neatly, especially at low rates of flow.
Cyril Duez and his team of fluid dynamicists could not
tolerate one more dribble. They have identified the root
cause, a "hydro-capillary effect" that makes the tea fail to
leave the spout material gracefully. Two techniques can be
used to combat this.
One is to simply use a spout made of thinner material, which
gives the wayward beverage less purchase. Metal teapots, for
instance, like those we see at Chinese restaurants, tend to
drip less than pudgy-walled ceramic ones.
The other, cooler approach is to coat the spout with one of
a class of super-hydrophobic materials, which repel any
attempt by the tea to cling to the spout instead of going
where it's supposed to. Some of these materials can be
activated and deactivated electrically, raising the exciting
possibility, as Technology Review points out, of a hilarious
gag teapot with a drip/no-drip switch. It would go nicely
with your Fraunhofer Perfect Mug.
The puzzling part is that this team of exacting tea
scientists is not British, but French.

Imaging an unborn fetus and spotting a lurking submarine
could both become much easier with the world's first acoustic
hyperlens. The device manipulates imaging sound waves to
provide an eightfold increase in the magnification power of
technologies such as ultrasound and sonar.
Hyperlenses use specially engineered materials that combine
metals and dielectrics, and allow scientists to image
features much smaller than typical light wavelengths.
Researchers at the U.S. Department of Energy's Lawrence
Berkeley National Laboratory applied this approach to capture
information in evanescent sound waves, which have higher
resolution and more detail but dissipate much more quickly
than typical waves.
The acoustic hyperlens consists of 36 brass fins arrayed in
a pattern resembling a hand-held fan. The fins remain
embedded in a brass plate from which they were shaped, and
extend from an inner radius of just 2.7 centimeters to
an outer radius of 21.8 centimeters.
"As a result of the large ratio between the inner and outer
radii, our acoustic hyperlens compresses a significant
portion of evanescent waves into the band of propagating
waves so that the image obtained is magnified by a factor of
eight," said Lee Fok, one of the researchers at Berkeley
Lab.
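As a quick sanity check, if the magnification simply tracks the ratio of the outer radius to the inner radius (an inference from the quote, not a formula stated in the source), the dimensions above do work out to roughly eight:

```python
inner_radius_cm = 2.7    # inner radius of the fin array, from the description above
outer_radius_cm = 21.8   # outer radius of the fin array

magnification = outer_radius_cm / inner_radius_cm
print(magnification)     # about 8.07, consistent with the eightfold magnification
```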
The same lab group, led by materials scientist Xiang Zhang,
demonstrated a similar feat in 2007 by breaking the
"diffraction limit" that usually prevents researchers from
imaging features smaller than the wavelength of light.
Zhang's group is now upgrading their approach to produce 3-D
images, and wants to make the technique compatible with the
pulse-echo technology found in medical ultrasounds and
underwater sonar.