Friday, November 27, 2009
Configuring & Updating Your BIOS
Configuring BIOS
As part of its boot sequence, the BIOS checks the CMOS Setup
for custom settings. Here's what you do to change those
settings.
To enter the CMOS Setup, you must press a certain key or
combination of keys during the initial startup sequence. Most
systems use "Esc," "Del," "F1," "F2," "Ctrl-Esc" or
"Ctrl-Alt-Esc" to enter setup. There is usually a line of
text at the bottom of the display that tells you "Press ___
to Enter Setup."
Once you have entered setup, you will see a set of text
screens with a number of options. Some of these are standard,
while others vary according to the BIOS manufacturer. Common
options include:
* System Time/Date - Set the system time and date
* Boot Sequence - The order that BIOS will try to load
the operating system
* Plug and Play - A standard for auto-detecting connected
devices; should be set to "Yes" if your computer and operating
system both support it
* Mouse/Keyboard - "Enable Num Lock," "Enable the
Keyboard," "Auto-Detect Mouse"...
* Drive Configuration - Configure hard drives, CD-ROM and
floppy drives
* Memory - Direct the BIOS to shadow to a specific memory
address
* Security - Set a password for accessing the computer
* Power Management - Select whether to use power
management, as well as set the amount of time for standby and
suspend
* Exit - Save your changes, discard your changes or
restore default settings
Be very careful when making changes to setup. Incorrect
settings may keep your computer from booting. When you are
finished with your changes, you should choose "Save Changes"
and exit. The BIOS will then restart your computer so that
the new settings take effect.
The BIOS uses CMOS technology to save any changes made to the
computer's settings. With this technology, a small lithium or
Ni-Cad battery can supply enough power to keep the data for
years. In fact, some of the newer chips have a 10-year, tiny
lithium battery built right into the CMOS chip!
Updating Your BIOS
Occasionally, a computer will need to have its BIOS updated.
This is especially true of older machines. As new devices and
standards arise, the BIOS needs to change in order to
understand the new hardware. Since the BIOS is stored in some
form of ROM, changing it is a bit harder than upgrading most
other types of software.
To change the BIOS itself, you'll probably need a special
program from the computer or BIOS manufacturer. Look at the
BIOS revision and date information displayed on system
startup or check with your computer manufacturer to find out
what type of BIOS you have. Then go to the BIOS
manufacturer's Web site to see if an upgrade is available.
Download the upgrade and the utility program needed to
install it. Sometimes the utility and update are combined in
a single file to download. Copy the program, along with the
BIOS update, onto a floppy disk. Restart your computer with
the floppy disk in the drive, and the program erases the old
BIOS and writes the new one. You can find a BIOS Wizard that
will check your BIOS on the BIOS Upgrades Web site.
Major BIOS manufacturers include:
* American Megatrends Inc. (AMI)
* Phoenix Technologies
* ALi
* Winbond
As with changes to the CMOS Setup, be careful when upgrading
your BIOS. Make sure you are upgrading to a version that is
compatible with your computer system. Otherwise, you could
corrupt the BIOS, which means you won't be able to boot your
computer. If in doubt, check with your computer manufacturer
to be sure you need to upgrade.
BIOS
One of the most common uses of Flash memory is for the basic
input/output system of your computer, commonly known as the
BIOS (pronounced "bye-ose"). On virtually every computer
available, the BIOS makes sure all the other chips, hard
drives, ports and CPU function together.
Every desktop and laptop computer in common use today
contains a microprocessor as its central processing unit. The
microprocessor is the hardware component. To get its work
done, the microprocessor executes a set of instructions known
as software (see How Microprocessors Work for details). You
are probably very familiar with two different types of
software:
* The operating system - The operating system provides
a set of services for the applications running on your
computer, and it also provides the fundamental user interface
for your computer. Windows 98 and Linux are examples of
operating systems. (See How Operating Systems Work for lots
of details.)
* The applications - Applications are pieces of software
that are programmed to perform specific tasks. On your
computer right now you probably have a browser application,
a word processing application, an e-mail application and so
on. You can also buy new applications and install them.
It turns out that the BIOS is the third type of software your
computer needs to operate successfully. In this article,
you'll learn all about BIOS -- what it does, how to configure
it and what to do if your BIOS needs updating.
What BIOS Does
The BIOS software has a number of different roles, but its
most important role is to load the operating system. When you
turn on your computer and the microprocessor tries to execute
its first instruction, it has to get that instruction from
somewhere. It cannot get it from the operating system because
the operating system is located on a hard disk, and the
microprocessor cannot get to it without some instructions
that tell it how. The BIOS provides those instructions. Some
of the other common tasks that the BIOS performs include:
* A power-on self-test (POST) for all of the different
hardware components in the system to make sure everything is
working properly
* Activating other BIOS chips on different cards
installed in the computer - For example, SCSI and graphics
cards often have their own BIOS chips.
* Providing a set of low-level routines that the
operating system uses to interface to different hardware
devices - It is these routines that give the BIOS its name.
They manage things like the keyboard, the screen, and the
serial and parallel ports, especially when the computer is
booting.
* Managing a collection of settings for the hard disks,
clock, etc.
The BIOS is special software that interfaces the major
hardware components of your computer with the operating
system. It is usually stored on a Flash memory chip on the
motherboard, but sometimes the chip is another type of ROM.
When you turn on your computer, the BIOS does several things. This is its usual sequence:
1. Check the CMOS Setup for custom settings
2. Load the interrupt handlers and device drivers
3. Initialize registers and power management
4. Perform the power-on self-test (POST)
5. Display system settings
6. Determine which devices are bootable
7. Initiate the bootstrap sequence
The first thing the BIOS does is check the information stored
in a tiny (64 bytes) amount of RAM located on a complementary
metal oxide semiconductor (CMOS) chip. The CMOS Setup provides
detailed information particular to your system and can be
altered as your system changes. The BIOS uses this information
to modify or supplement its default programming as needed. We
will talk more about these settings later.
Interrupt handlers are small pieces of software that act as
translators between the hardware components and the operating
system. For example, when you press a key on your keyboard,
the signal is sent to the keyboard interrupt handler, which
tells the CPU what it is and passes it on to the operating
system. The device drivers are other pieces of software that
identify the base hardware components such as keyboard,
mouse, hard drive and floppy drive. Since the BIOS is
constantly intercepting signals to and from the hardware, it
is usually copied, or shadowed, into RAM to run faster.
Booting the Computer
Whenever you turn on your computer, the first thing you see
is the BIOS software doing its thing. On many machines, the
BIOS displays text describing things like the amount of
memory installed in your computer, the type of hard disk and
so on. It turns out that, during this boot sequence, the BIOS
is doing a remarkable amount of work to get your computer
ready to run. This section briefly describes some of those
activities for a typical PC.
After checking the CMOS Setup and loading the interrupt
handlers, the BIOS determines whether the video card is
operational. Most video cards have a miniature BIOS of their
own that initializes the memory and graphics processor on the
card. If they do not, there is usually video driver
information on another ROM on the motherboard that the BIOS
can load.
Next, the BIOS checks to see if this is a cold boot or
a reboot. It does this by checking the value at memory
address 0000:0472. A value of 1234h indicates a reboot, and
the BIOS skips the rest of POST. Anything else is considered
a cold boot.
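If you like to think in code, here is a tiny Python sketch of
that warm-boot check. It is purely illustrative: a real BIOS
does this in assembly against physical memory, while here a
bytearray simply stands in for low memory.

    WARM_BOOT_FLAG_ADDR = 0x0472   # linear address of the reset flag word
    WARM_BOOT_MAGIC = 0x1234       # the value a warm reboot leaves behind

    memory = bytearray(0x500)      # pretend low memory, all zeros on a cold start

    def is_warm_boot(mem):
        # Read the 16-bit flag word (little-endian, as on x86).
        value = mem[WARM_BOOT_FLAG_ADDR] | (mem[WARM_BOOT_FLAG_ADDR + 1] << 8)
        return value == WARM_BOOT_MAGIC

    if is_warm_boot(memory):
        print("Reboot detected: skipping the rest of POST")
    else:
        print("Cold boot: running the full power-on self-test")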
If it is a cold boot, the BIOS verifies RAM by performing
a read/write test of each memory address. It checks the PS/2
ports or USB ports for a keyboard and a mouse. It looks for
a peripheral component interconnect (PCI) bus and, if it
finds one, checks all the PCI cards. If the BIOS finds any
errors during the POST, it will notify you by a series of
beeps or a text message displayed on the screen. An error at
this point is almost always a hardware problem.
The BIOS then displays some details about your system. This
typically includes information about:
* The processor
* The floppy drive and hard drive
* Memory
* BIOS revision and date
* Display
Any special drivers, such as the ones for small computer
system interface (SCSI) adapters, are loaded from the adapter,
and the BIOS displays the information. The BIOS then looks at
the sequence of storage devices identified as boot devices in
the CMOS Setup. "Boot" is short for "bootstrap," as in the
old phrase, "Lift yourself up by your bootstraps." Boot
refers to the process of launching the operating system. The
BIOS will try to initiate the boot sequence from the first
device. If the BIOS does not find a device, it will try the
next device in the list. If it does not find the proper files
on a device, the startup process will halt. If you have ever
left a disk in the drive when you restarted your computer, you
have probably seen the resulting error message.
The BIOS has tried to boot the computer off of the disk left
in the drive. Since it did not find the correct system files,
it could not continue. Of course, this is an easy fix. Simply
pop out the disk and press a key to continue.
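Here is a rough Python sketch of that boot-order logic. The
device names and the flags describing what is in each drive
are made up for illustration; a real BIOS works with raw boot
sectors, not Python dictionaries.

    boot_order = ["floppy", "cdrom", "hard_disk"]   # the order stored in CMOS Setup

    # Pretend inventory of what is actually sitting in each drive.
    media = {
        "floppy":    {"present": True,  "has_boot_files": False},  # data disk left in
        "cdrom":     {"present": False, "has_boot_files": False},
        "hard_disk": {"present": True,  "has_boot_files": True},
    }

    def try_boot(order, media):
        for device in order:
            info = media.get(device)
            if not info or not info["present"]:
                continue                 # nothing there, try the next device
            if info["has_boot_files"]:
                return "Booting the operating system from the " + device
            # A disk is present but has no system files: the real BIOS halts here.
            return "Non-system disk in the " + device + ": remove it and press a key"
        return "No bootable device found"

    print(try_boot(boot_order, media))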
PROM
Creating ROM chips totally from scratch is time-consuming and
very expensive in small quantities. For this reason, mainly,
developers created a type of ROM known as programmable
read-only memory (PROM). Blank PROM chips can be bought
inexpensively and coded by anyone with a special tool called
a programmer.
PROM chips have a grid of columns and rows just as ordinary
ROMs do. The difference is that every intersection of
a column and row in a PROM chip has a fuse connecting them.
A charge sent through a column will pass through the fuse in
a cell to a grounded row indicating a value of 1. Since all
the cells have a fuse, the initial (blank) state of a PROM
chip is all 1s. To change the value of a cell to 0, you use
a programmer to send a specific amount of current to the cell.
The higher voltage breaks the connection between the column
and row by burning out the fuse. This process is known as
burning the PROM.
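A toy Python model makes the one-way nature of PROM
programming easy to see. The grid size and the cells burned
below are arbitrary; the point is that a blank chip reads all
1s and a burned fuse can never be restored.

    class PROM:
        def __init__(self, rows, cols):
            # Blank PROM: every fuse is intact, so every cell reads 1.
            self.cells = [[1] * cols for _ in range(rows)]

        def burn(self, row, col):
            # Burning is one-way: once the fuse is blown, the cell is 0 forever.
            self.cells[row][col] = 0

        def read(self, row, col):
            return self.cells[row][col]

    chip = PROM(4, 8)
    chip.burn(0, 3)          # program a 0 into row 0, column 3
    print(chip.read(0, 3))   # 0 -- and there is no way to turn it back into a 1
    print(chip.read(0, 0))   # 1 -- an untouched fuse still reads 1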
PROMs can only be programmed once. They are more fragile than
ROMs. A jolt of static electricity can easily cause fuses in
the PROM to burn out, changing essential bits from 1 to 0.
But blank PROMs are inexpensive and are great for prototyping
the data for a ROM before committing to the costly ROM
fabrication process.
How ROM Works
Read-only memory (ROM), also known as firmware, is
an integrated circuit programmed with specific data when it
is manufactured. ROM chips are used not only in computers,
but in most other electronic items as well.
ROM Types
There are five basic ROM types:
* ROM
* PROM
* EPROM
* EEPROM
* Flash memory
Each type has unique characteristics, which you'll learn
about in this article, but they are all types of memory with
two things in common:
* Data stored in these chips is nonvolatile -- it is not
lost when power is removed.
* Data stored in these chips is either unchangeable or
requires a special operation to change (unlike RAM, which can
be changed as easily as it is read).
This means that removing the power source from the chip will
not cause it to lose any data.
ROM at Work
Similar to RAM, ROM chips contain a grid of
columns and rows. But where the columns and rows intersect,
ROM chips are fundamentally different from RAM chips. While
RAM uses transistors to turn on or off access to a capacitor
at each intersection, ROM uses a diode to connect the lines
if the value is 1. If the value is 0, then the lines are not
connected at all.
A diode normally allows current to flow in only one direction
and has a certain threshold, known as the forward breakover,
that determines how much current is required before the diode
will pass it on. In silicon-based items such as processors
and memory chips, the forward breakover voltage is
approximately 0.6 volts. By taking advantage of the unique
properties of a diode, a ROM chip can send a charge that is
above the forward breakover down the appropriate column with
the selected row grounded to connect at a specific cell. If
a diode is present at that cell, the charge will be conducted
through to the ground, and, under the binary system, the cell
will be read as being "on" (a value of 1). The neat part of
ROM is that if the cell's value is 0, there is no diode at
that intersection to connect the column and row. So the
charge on the column does not get transferred to the row.
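Here is a small Python sketch of that read operation. The
diode layout is arbitrary; the idea is simply that a cell
reads 1 only where the manufacturer placed a diode between its
column and row.

    # True means "a diode connects this row and column".
    diode_map = [
        [True,  False, True,  True ],   # row 0 permanently stores 1011
        [False, False, True,  False],   # row 1 permanently stores 0010
    ]

    def read_cell(row, col):
        # Driving the column above the forward breakover with the row grounded
        # conducts only if a diode is present; conduction is read as a 1.
        return 1 if diode_map[row][col] else 0

    def read_row(row):
        return [read_cell(row, col) for col in range(len(diode_map[row]))]

    print(read_row(0))   # [1, 0, 1, 1]
    print(read_row(1))   # [0, 0, 1, 0]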
As you can see, the way a ROM chip works necessitates the
programming of perfect and complete data when the chip is
created. You cannot reprogram or rewrite a standard ROM chip.
If it is incorrect, or the data needs to be updated, you have
to throw it away and start over. Creating the original
template for a ROM chip is often a laborious process full of
trial and error. But the benefits of ROM chips outweigh the
drawbacks. Once the template is completed, the actual chips
can cost as little as a few cents each. They use very little
power, are extremely reliable and, in the case of most small
electronic devices, contain all the necessary programming to
control the device. A great example is the small chip in the
singing fish toy. This chip, about the size of your
fingernail, contains the 30-second song clips in ROM and the
control codes to synchronize the motors to the music.
Add RAM to Your Laptop
If you've ever switched from a desktop computer to a laptop,
the experience can be liberating. While some might enjoy
having a single space in which to work at a computer and
organize files both physical and digital, a desktop can also
keep you feeling anchored to the spot. It relies on
electrical outlets 100 percent of the time, and carrying all
of its bulky components from one place to the next isn't
convenient, either.
Laptops, however, run on battery power, and, once
sufficiently charged, can operate anywhere you carry them.
They're small, thin and lightweight, so whether you want to
lounge on the couch or work at the coffee shop, laptops are
portable and easy tools to use.
But the only real difference between a laptop and a desktop,
of course, is how they're put together. A laptop has all of
the same hardware and accessories a desktop does, like
a screen, a keyboard, a microprocessor, memory storage and
a series of fans to cool the system down. Everything is just
arranged differently since it needs to fit in a much smaller
package. That means once you start building up more files,
adding more pictures, uploading more music and using more
programs simultaneously, you'll experience something many
laptop computer users lament -- slow load times and sluggish
performance.
Typically, the culprit behind any performance issue is
an insufficient amount of random access memory, or RAM.
Although some owners cringe at the thought of adding more RAM
because a laptop's layout isn't as straightforward as
a desktop's, adding or upgrading the RAM on your system is
often the easiest and cheapest way to increase your laptop's
performance.
So what exactly are you doing when you add more RAM to your
system? How do you choose the right RAM? And once you've
opened up that laptop, how do you install RAM correctly?
Choosing RAM
When most people refer to a computer's "memory," they're
talking about random access memory, or RAM. RAM is considered
important to your laptop's central processing unit (CPU),
because memory allows you to run several programs at once
without too much interruption. How do you know if you need
more RAM, though?
The telltale sign of too little RAM is slow performance.
Usually, when you purchase a new laptop, it takes very little
time to start the computer and run its existing programs. But
as you add files and perform more tasks simultaneously,
things start to slow down. If you boot up your computer and
it takes several minutes for everything to work properly,
chances are you could use more RAM.
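Before buying anything, it is worth checking what you already
have. One quick way, assuming the third-party psutil package
is installed (pip install psutil), is a couple of lines of
Python:

    import psutil

    mem = psutil.virtual_memory()
    print(f"Installed RAM: {mem.total / 2**30:.1f} GB")
    print(f"In use:        {mem.percent:.0f}%")

If the "in use" figure sits near 100 percent with your usual
programs open, more RAM is likely to help.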
Fortunately, adding RAM to your laptop is probably the
easiest and most inexpensive method to boost computer
performance. Even getting a new CPU for your laptop might not
do as much as adding RAM. But if you do a little bit of
searching on the topic of RAM, you'll find there are several
different kinds and many different sizes available. What's
the right RAM for your computer?
First of all, you need to judge the performance of your
laptop and ask yourself what kind of work you'll be doing. If
you play games on your laptop or run lots of programs that
take up a lot of processing power, you'll want a good amount
of RAM -- 2 GB of RAM or more. If you're using your laptop
for simple day-to-day work, you probably won't need more than
512 MB of memory.
Adding RAM
Once you've purchased the necessary RAM module, you're ready
to add more memory to your computer. Before you start
anything, make sure the laptop is completely turned off and
unplugged from any power sources for the sake of safety. It's
also recommended that you use an antistatic wrist strap while
you're handling a RAM module.
Once everything is powered down, you'll need to find the
memory compartment door. Different manufacturers put these
slots in different places, but on most laptops, you'll find
a small door on the underside of the machine. Using the
appropriate screwdriver, open the door and take a look inside.
There are typically two slots for RAM. If both slots are
full, you'll have to remove one of the modules and replace it
with a higher-capacity module to upgrade your laptop's RAM.
You can remove a RAM module by pressing on the little ejector
clips that hold the module in place. If one of the slots is
empty, you can simply place the new module in the slot.
Adding a module is fairly straightforward -- it should just
slide into place and, once you give it a little push, it'll
lock down with the help of the clips. Again, your experience
may be different depending on the company that made your
computer, so make sure to check your owner's manual or
support Web site before you start opening your laptop's case.
Once you have everything back into place, replace the access
door and turn on your computer. If everything goes well, your
laptop should automatically recognize the extra memory.
You'll find that your computer will boot much faster, run
applications more smoothly and switch between programs with
less lag time.
How does disk defrag work?
The word "disk defrag" is typically used to refer to the
Microsoft Windows utility called Disk Defragmenter. It is
designed to solve a problem that occurs because of the way
hard disks store data.
If you have read the article How Hard Disks Work, then you
know three key facts about hard disks:
1. Hard disks store data in chunks called sectors. If you
imagine the surface of the disk divided into rings (like the
rings of a tree), and then imagine dividing each ring into
pie-slices, a sector is one pie-slice on one ring. Each
sector holds a fixed amount of data, like 512 bytes.
2. The hard disk has a small arm that can move from ring
to ring on the surface of the disk. To reach a particular
sector, the hard disk moves the arm to the right ring and
waits for the sector to spin into position.
3. Hard disks are slow in computer terms. Compared to the
speed of the processor and its memory, the time it takes for
the arm to move and for a sector to spin into place is
an eon.
Because of fact #3, you want to minimize arm movement as much
as possible, and you want data stored in sequential segments
on the disk.
So let's imagine that you install a new application onto
an empty hard disk. Because the disk is empty, the computer
can store the files of the application into sequential
sectors on sequential rings. This is an efficient way to
place data on a hard disk.
As you use a disk, however, this efficient placement becomes
harder to maintain. What happens is that the disk fills up.
Then you erase files to reclaim space. These files that you
delete are scattered all over the surface of the disk. When
you load a new application or a large file onto the disk, it
ends up being stored in hundreds or thousands of these
scattered pockets of space. Now when the computer tries to
load the scattered pieces, the disk's arm has to move all
over the surface and it takes forever.
The idea behind the disk defragmenter is to move all the
files around so that every file is stored on sequential
sectors on sequential rings of the disk. In addition, a good
defragmenter may also try to optimize things even more, for
example by placing all applications "close" to the operating
system on the disk to minimize movement when an application
loads. When done well on older disks, defragmenting can
significantly increase the speed of file loading. On a new
disk that has never filled up or had any significant number
of file deletions, it will have almost no effect because
everything is stored sequentially already.
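To make the idea concrete, here is a toy Python model of
fragmentation and defragmentation. The "disk" is just a list
of sector slots, and the file names and sizes are invented;
a real defragmenter also has to update the file system's
bookkeeping as it moves data.

    disk = [None] * 16                      # 16 free sectors

    def write_file(name, size):
        # Use the first free sectors found, wherever they happen to be --
        # this is what fragments files on a disk that has seen deletions.
        placed = 0
        for i, slot in enumerate(disk):
            if slot is None and placed < size:
                disk[i] = name
                placed += 1

    def delete_file(name):
        for i, slot in enumerate(disk):
            if slot == name:
                disk[i] = None

    def defragment():
        # Repack every file's sectors into one contiguous run.
        used = sorted(s for s in disk if s is not None)
        for i in range(len(disk)):
            disk[i] = used[i] if i < len(used) else None

    write_file("A", 4); write_file("B", 4); write_file("C", 4)
    delete_file("B")                        # leaves a hole in the middle
    write_file("D", 6)                      # D gets split around the hole
    print(disk)                             # fragmented layout
    defragment()
    print(disk)                             # every file is now contiguous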
As you might imagine, the process of individually picking up
and moving thousands of files on a relatively slow hard disk
is not a quick process -- it normally takes hours.
For the defragmenter to run properly, ensure you have no
other applications running. Typically, SYSTRAY and EXPLORER
are all you need to have active. You can see the running
tasks by doing a "three-finger salute" (Ctrl+Alt+Del).
Disable any screen saver in use, too. The defragmenter will
fail to stay running if your system is constantly accessing
some other application, such as Findfast.exe,
a resource-hungry utility that is automatically installed
with Microsoft Office. To prevent Findfast.exe from running
at every system boot, simply delete it from your Windows
STARTUP folder, or look for the Findfast icon in Control
Panel and change its setting.
The defragmenter can take a considerable time to run, so
start the Defragmenter before going out for the evening or at
the end of the day, before going to sleep.
Add a Hard Drive to Your Computer in 8 Steps
Do you own a computer that is more than a year old? If so,
then you may be running out of disk space. In the same way
that closets and attics have a way of filling up and
overflowing, so do hard drives. Maybe your 8-megapixel camera
needs a gigabyte of disk space every time you unload the
camera's memory card. Or your MP3 collection grows by 10
songs every day. Perhaps you are trying to edit videos of the
kids, and every 5 minutes of tape consumes a gigabyte of disk
space. Or maybe you would like to add a TV tuner card to your
machine and turn your computer into a DVR.
Digital cameras, video cameras, MP3 players and TV tuner
cards all consume lots of disk space. If you use any of these
gadgets, chances are that you need more space.
1: Research your machine
Before we start the process of adding a drive, we need to do
a small amount of research inside your machine. The goal of
the research is to find out if it will be easy or not so easy
to add the new hard drive. We also need to find out what kind
of drive you need to buy. You may be able to do this research
by reading through your computer's manuals, but it is far
easier to simply open the case and look inside.
The first question to answer is: How many hard disk drives
have already been installed inside the case? In the majority
of machines, the answer to this question is "one." Having
only one hard disk drive installed makes it easy to install
another one. After you open up your computer's case and look
inside, you will probably find one optical drive (a CD or DVD
drive), a single hard disk drive and perhaps a floppy disk
drive. The optical and floppy drives will be easy to find
because you can see them on the outside of the case. The hard
drive may take a little searching.
If there are already two drives installed inside your case,
then adding a new one is more difficult.
2: Check how much space is available
Is there space available to add another hard-disk drive? Your
current hard disk is probably mounted in a small metal cage
or rack inside the machine. Make sure there is space
available in the cage for another drive. If not, adding
an external drive is an option.
An external drive connects to your computer through either
a USB 2.0 connection or a FireWire connection, so your
computer needs to have USB 2.0 or FireWire connectors. Once
you buy the drive, all you have to do is connect it and fire
up your computer. The drive will come with configuration
instructions, but on Windows XP it will likely be
plug-and-play. You can start saving files on your new drive
immediately.
There is one big advantage to an external drive: you can plug
it into multiple machines and move files around. You can take
it with you anywhere you go. The only real disadvantage is
that it will be slower than an internal drive. If it takes
a minute to copy a gigabyte of data on an internal drive, it
might take two minutes on an external drive. That may or may
not be important depending on what you want to do. For most
applications, the slower speed is irrelevant.
3: See what type of cable system is used
Find out what type of cable system is used to connect drives
to the motherboard. There are two systems in common use: IDE
drives (also known as PATA, or Parallel ATA), and SATA
(Serial ATA) drives. PATA drives use wide, flat ribbon cables
(or rounded cables about as thick as your finger), while SATA
drives use thin cables about the diameter of a pencil. You will need to
know whether to buy an IDE or SATA drive, and you should be
able to tell by looking at the cables.
Now that you have confirmed that there is space to install
a new drive in your machine and you know what type of drive
you need (PATA or SATA), you can buy a new drive.
4: Buy a new hard drive
You can buy a new hard drive from many different places:
a retail store, a large computer store, a local computer
parts store or by mail order. Wherever you go to buy it, keep
three things in mind:
* Buy a "normal" 3.5-inch wide hard drive. They're sold
everywhere, but you want to avoid the smaller hard disk
drives made for laptops.
* Make sure the new drive has the correct cable system
(SATA or PATA) to match your machine.
* Make sure the drive is big. Buy the biggest drive you
can afford, because it will probably fill up before you know
it.
Now that you have your new drive, you are ready to install it.
5: Eliminate static electricity
Before we start working with the drive, we need to talk
about static electricity. Your computer is highly sensitive
to static shocks. This means that if you build up static
electricity on your body and a shock passes from your body to
something like a hard drive, that hard drive is dead and you
will have to buy another one.
The way to eliminate static electricity is by grounding
yourself. There are lots of ways to do this, but probably the
easiest way is to wear a grounding bracelet on your wrist.
Then you connect the bracelet to something grounded (like a
copper pipe or the center screw on a wall outlet's face
plate). By connecting yourself to ground, you eliminate the
possibility of static shock. You can get a bracelet for
a few dollars.
6: Set the jumpers
First, set the jumpers (if it is an IDE drive). Let's talk
about this in more detail, because most people have IDE
drives.
In the IDE system, most motherboards allow you to have two
IDE cables. Each cable can connect to two drives. Usually you
use one cable to connect one or two optical drives to your
machine. The other cable is used to connect one or two hard
drives to your machine.
You want both hard drives to be on the same cable. The two
drives on the cable are called "master" and "slave." You want
your existing hard drive (which contains the operating system
and all of your current data) to be the "master" and the new
hard drive to be the "slave." The drive should have
instructions on it that tell you how to set the jumpers for
master and slave. So read the instructions and set the
jumpers. If you are using SATA drives, you do not need to set
jumpers for master and slave because each drive gets its own
cable. Check out How IDE Controllers Work to learn more about
the master and slave configuration.
7: Mount the drive and connect
Now that the jumpers are set correctly, mount the new drive
in your drive cage and screw it into place.
Next, plug a power connector from the power supply into the
drive. If it fits, then it's a match.
Connect the IDE or SATA cable to the drive.
8: Format the new drive
Close the machine, power it up and configure your new drive
using the Windows XP drive administration tool. To do this,
click the Start button, open the Control Panel, switch to
Classic View, then open Administrative Tools, Computer
Management and finally Disk Management.
Look at the graphical area in the bottom right of this
display. Disk 0 is your original hard drive. Disk 1 is the
new hard drive. Chances are that the new drive will not be
initialized or formatted. Click the small button to
initialize the drive, and then format it as an NTFS volume
(right-click on the new drive, then click "Format...").
Formatting may take an hour or more, so be patient.
When the formatting is done, you are ready to use your new
drive.
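As an optional sanity check once Windows has finished
formatting, a couple of lines of Python will report the new
drive's capacity. This assumes Windows gave the drive the
letter E: -- yours may differ.

    import shutil

    usage = shutil.disk_usage("E:\\")
    print(f"Total: {usage.total / 2**30:.1f} GB")
    print(f"Free:  {usage.free / 2**30:.1f} GB")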
How can I recover a deleted file from my computer recycling bin?
When Microsoft introduced the Recycle Bin in Windows 95, it
immediately became a failsafe for many users. If you delete
a file and realize that you actually need it, you can recover
it easily by doing the following:
* Open the Recycle Bin by double-clicking on the Recycle
Bin icon on your desktop (or you can go to the Recycle Bin
folder in Windows Explorer).
* Find the file you want to recover and click to
highlight it.
* Go to the File menu and choose the Restore option (or
right click over the filename and select Restore from the
context-sensitive menu).
* The file is now back on your computer in its original
place.
While the Recycle Bin is a great utility, there are times
that a file is not placed in the Recycle Bin when you delete
it. These include files from removable storage such as flash
memory and Zip disks, files deleted from within some
applications, and files deleted from the command prompt.
Also, there are times that you will empty the Recycle Bin and
then realize that there was a file you wanted to keep.
A common misconception is that the data is actually removed
from the hard drive (erased) when you delete a file. Any time
that a file is deleted on a hard drive, it is not erased.
Instead, the tiny bit of information that points to the
location of the file on the hard drive is erased. This
pointer, along with other pointers for every folder and file
on the hard drive, is saved in a section near the beginning
of the hard drive and is used by the operating system to
compile the directory tree structure. Once the pointer is
erased, the actual file becomes invisible to the operating
system. Eventually, the hard drive will write new data over
the area where the old file is located.
There are several hard disk utilities that you can find on
the Internet that allow you to recover "deleted" files. What
these utilities do is search for data on the hard drive that
does not have corresponding pointer information and present
you with a list of these files. Your chances of fully
recovering a file diminish the longer you wait after you
deleted the file since the probability that the file has been
overwritten increases. Sometimes you can recover portions of
a file that has not been completely overwritten.
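Here is a toy Python sketch of both ideas: deleting a file
only removes its pointer entry, and a recovery utility works
by hunting for data that no pointer references any more. The
sector numbers and file contents are invented.

    # The data actually sitting on the platters, keyed by sector number.
    data_sectors = {10: b"Dear diary...", 11: b"...more text"}

    # The directory structure: file name -> sectors that hold its data.
    pointers = {"diary.txt": [10, 11]}

    def delete(name):
        # Only the pointer goes away; data_sectors is left untouched.
        pointers.pop(name, None)

    def recovery_scan():
        # Roughly what an undelete utility does: find data with no pointer.
        referenced = {s for sectors in pointers.values() for s in sectors}
        return {s: data for s, data in data_sectors.items() if s not in referenced}

    delete("diary.txt")
    print(pointers)          # {} -- the operating system no longer "sees" the file
    print(recovery_scan())   # the bytes are still there until they are overwritten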
Inside the Hard Disk
The best way to understand how a hard disk works is to take
a look inside. (Note that OPENING A HARD DISK RUINS IT, so
this is not something to try at home unless you have
a defunct drive.)
Here is a typical hard-disk drive.
It is a sealed aluminum box with controller electronics
attached to one side. The electronics control the read/write
mechanism and the motor that spins the platters. The
electronics also assemble the magnetic domains on the drive
into bytes (reading) and turn bytes into magnetic domains
(writing). The electronics are all contained on a small board
that detaches from the rest of the drive.
Underneath the board are the connections for the motor that
spins the platters, as well as a highly-filtered vent hole
that lets internal and external air pressures equalize.
Removing the cover from the drive reveals an extremely simple
but very precise interior. Inside the drive you can see:
* The platters - These typically spin at 3,600 or 7,200
rpm when the drive is operating. These platters are
manufactured to amazing tolerances and are mirror-smooth.
* The arm - This holds the read/write heads and is
controlled by the mechanism in the upper-left corner. The arm
is able to move the heads from the hub to the edge of the
drive. The arm and its movement mechanism are extremely light
and fast. The arm on a typical hard-disk drive can move from
hub to edge and back up to 50 times per second -- it is
an amazing thing to watch!
Inside: Platters and Heads
In order to increase the amount of information the drive can
store, most hard disks have multiple platters. This drive has
three platters and six read/write heads.
The mechanism that moves the arms on a hard disk has to be
incredibly fast and precise. It can be constructed using
a high-speed linear motor.
Many drives use a "voice coil" approach -- the same technique
used to move the cone of a speaker on your stereo is used to
move the arm.
Storing the Data
Data is stored on the surface of a platter in sectors and
tracks. Tracks are concentric circles, and sectors are
pie-shaped wedges on a track. A sector contains a fixed
number of bytes -- for example, 256 or 512. Either at the
drive or the operating
system level, sectors are often grouped together into
clusters.
The process of low-level formatting a drive establishes the
tracks and sectors on the platter. The starting and ending
points of each sector are written onto the platter. This
process prepares the drive to hold blocks of bytes.
High-level formatting then writes the file-storage structures,
like the file-allocation table, into the sectors. This
process prepares the drive to hold files.
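A quick back-of-the-envelope calculation shows how the
geometry adds up to a drive's capacity. All of the numbers
below are illustrative rather than taken from any particular
drive; real drives also pack more sectors onto the longer
outer tracks.

    surfaces = 6                 # e.g. 3 platters with both sides in use
    tracks_per_surface = 10_000
    sectors_per_track = 400
    bytes_per_sector = 512
    sectors_per_cluster = 8      # how the operating system groups sectors

    capacity = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
    print(f"Capacity: {capacity / 10**9:.1f} GB")          # about 12.3 GB
    print(f"Cluster size: {sectors_per_cluster * bytes_per_sector} bytes")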
Hard Disks
Nearly every desktop computer and server in use today
contains one or more hard-disk drives. Every mainframe and
supercomputer is normally connected to hundreds of them. You
can even find VCR-type devices and camcorders that use hard
disks instead of tape. These billions of hard disks do one
thing well -- they store changing digital information in
a relatively permanent form. They give computers the ability
to remember things when the power goes out.
We'll take apart a hard disk so that you can see what's
inside, and also discuss how they organize the gigabytes of
information they hold in files!
Hard Disk Basics
Hard disks were invented in the 1950s. They started as large
disks up to 20 inches in diameter holding just a few
megabytes. They were originally called "fixed disks" or
"Winchesters" (a code name used for a popular IBM product).
They later became known as "hard disks" to distinguish them
from "floppy disks." Hard disks have a hard platter that
holds the magnetic medium, as opposed to the flexible plastic
film found in tapes and floppies.
At the simplest level, a hard disk is not that different from
a cassette tape. Both hard disks and cassette tapes use the
same magnetic recording techniques described in How Tape
Recorders Work. Hard disks and cassette tapes also share the
major benefits of magnetic storage -- the magnetic medium can
be easily erased and rewritten, and it will "remember" the
magnetic flux patterns stored onto the medium for many years.
Cassette Tape vs. Hard Disk
Let's look at the big differences between cassette tapes and
hard disks:
* The magnetic recording material on a cassette tape is
coated onto a thin plastic strip. In a hard disk, the
magnetic recording material is layered onto a high-precision
aluminum or glass disk. The hard-disk platter is then
polished to mirror-type smoothness.
* With a tape, you have to fast-forward or reverse to get
to any particular point on the tape. This can take several
minutes with a long tape. On a hard disk, you can move to any
point on the surface of the disk almost instantly.
* In a cassette-tape deck, the read/write head touches
the tape directly. In a hard disk, the read/write head
"flies" over the disk, never actually touching it.
* The tape in a cassette-tape deck moves over the head at
about 2 inches (about 5.08 cm) per second. A hard-disk
platter can spin underneath its head at speeds up to 3,000
inches per second (about 170 mph or 272 kph)!
* The information on a hard disk is stored in extremely
small magnetic domains compared to a cassette tape's. The
size of these domains is made possible by the precision of
the platter and the speed of the medium.
Because of these differences, a modern hard disk is able to
store an amazing amount of information in a small space.
A hard disk can also access any of its information in
a fraction of a second.
Capacity and Performance
A typical desktop machine will have a hard disk with
a capacity of between 10 and 40 gigabytes. Data is stored
onto the disk in the form of files. A file is simply a named
collection of bytes. The bytes might be the ASCII codes for
the characters of a text file, or they could be the
instructions of a software application for the computer to
execute, or they could be the records of a database, or they
could be the pixel colors for a GIF image. No matter what it
contains, however, a file is simply a string of bytes. When
a program running on the computer requests a file, the hard
disk retrieves its bytes and sends them to the CPU one at
a time.
There are two ways to measure the performance of a hard disk:
* Data rate - The data rate is the number of bytes per
second that the drive can deliver to the CPU. Rates between
5 and 40 megabytes per second are common.
* Seek time - The seek time is the amount of time between
when the CPU requests a file and when the first byte of the
file is sent to the CPU. Times between 10 and 20 milliseconds
are common.
The other important parameter is the capacity of the drive,
which is the number of bytes it can hold.
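Those two numbers are enough for a rough estimate of how long
a file takes to load: one seek to find the first byte, then
the transfer itself. The figures plugged in below are just
sample values from the ranges above.

    seek_time_s = 0.015          # 15 milliseconds to reach the first byte
    data_rate = 20e6             # 20 megabytes per second sustained transfer

    def load_time(file_size_bytes):
        return seek_time_s + file_size_bytes / data_rate

    for size_mb in (1, 100, 700):
        print(f"{size_mb:>4} MB file: about {load_time(size_mb * 1e6):.2f} seconds")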
Energy Loss in a Solar Cell
Visible light is only part of the electromagnetic spectrum.
Electromagnetic radiation is not monochromatic -- it is made
up of a range of different wavelengths, and therefore energy
levels. (See How Special Relativity Works for a good
discussion of the electromagnetic spectrum.)
Light can be separated into different wavelengths, and we
can see them in the form of a rainbow. Since the light that
hits our cell has photons of a wide range of energies, it
turns out that some of them won't have enough energy to form
an electron-hole pair. They'll simply pass through the cell
as if it were transparent. Still other photons have too much
energy. Only a certain amount of energy, measured in electron
volts (eV) and defined by our cell material (about 1.1 eV for
crystalline silicon), is required to knock an electron loose.
We call this the band gap energy of a material. If a photon
has more energy than the required amount, then the extra
energy is lost (unless a photon has twice the required
energy, and can create more than one electron-hole pair, but
this effect is not significant). These two effects alone
account for the loss of around 70 percent of the radiation
energy incident on our cell.
Why can't we choose a material with a really low band gap, so
we can use more of the photons? Unfortunately, our band gap
also determines the strength (voltage) of our electric field,
and if it's too low, then what we make up in extra current
(by absorbing more photons), we lose by having a small
voltage. Remember that power is voltage times current. The
optimal band gap, balancing these two effects, is around 1.4
eV for a cell made from a single material.
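You can put rough numbers on this with a short Python sketch.
A photon's energy in electron volts is about 1240 divided by
its wavelength in nanometers, so comparing that figure to
silicon's 1.1 eV band gap shows which photons are wasted and
how.

    BAND_GAP_EV = 1.1            # crystalline silicon

    def photon_energy_ev(wavelength_nm):
        return 1240.0 / wavelength_nm

    for wavelength_nm in (400, 700, 1000, 1500):   # violet, red, near-IR, IR
        e = photon_energy_ev(wavelength_nm)
        if e < BAND_GAP_EV:
            fate = "passes through -- not enough energy to free an electron"
        else:
            fate = f"absorbed -- {e - BAND_GAP_EV:.2f} eV of excess energy is wasted"
        print(f"{wavelength_nm} nm ({e:.2f} eV): {fate}")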
We have other losses as well. Our electrons have to flow from
one side of the cell to the other through an external circuit.
We can cover the bottom with a metal, allowing for good
conduction, but if we completely cover the top, then photons
can't get through the opaque conductor and we lose all of our
current (in some cells, transparent conductors are used on
the top surface, but not in all). If we put our contacts only
at the sides of our cell, then the electrons have to travel
an extremely long distance (for an electron) to reach the
contacts. Remember, silicon is a semiconductor -- it's not
nearly as good as a metal for transporting current. Its
internal resistance (called series resistance) is fairly
high, and high resistance means high losses. To minimize
these losses, our cell is covered by a metallic contact grid
that shortens the distance that electrons have to travel
while covering only a small part of the cell surface. Even
so, some photons are blocked by the grid, which can't be too
small or else its own resistance will be too high.
Anatomy of a Solar Cell
Before now, our silicon was all electrically neutral. Our
extra electrons were balanced out by the extra protons in the
phosphorous. Our missing electrons (holes) were balanced out
by the missing protons in the boron. When the holes and
electrons mix at the junction between N-type and P-type
silicon, however, that neutrality is disrupted. Do all the
free electrons fill all the free holes? No. If they did, then
the whole arrangement wouldn't be very useful. Right at the
junction, however, they do mix and form a barrier, making it
harder and harder for electrons on the N side to cross to the
P side. Eventually, equilibrium is reached, and we have
an electric field separating the two sides.
This electric field acts as a diode, allowing (and even
pushing) electrons to flow from the P side to the N side, but
not the other way around. It's like a hill -- electrons can
easily go down the hill (to the N side), but can't climb it
(to the P side).
So we've got an electric field acting as a diode in which
electrons can only move in one direction.
When light, in the form of photons, hits our solar cell, its
energy frees electron-hole pairs.
Each photon with enough energy will normally free exactly one
electron, and result in a free hole as well. If this happens
close enough to the electric field, or if a free electron and
a free hole happen to wander into its range of influence, the
field will send the electron to the N side and the hole to
the P side. This causes further disruption of electrical
neutrality, and if we provide an external current path,
electrons will flow through the path to their original side
(the P side) to unite with holes that the electric field sent
there, doing work for us along the way. The electron flow
provides the current, and the cell's electric field causes
a voltage. With both current and voltage, we have power,
which is the product of the two.
There are a few more steps left before we can really use our
cell. Silicon happens to be a very shiny material, which
means that it is very reflective. Photons that are reflected
can't be used by the cell. For that reason, an antireflective
coating is applied to the top of the cell to reduce
reflection losses to less than 5 percent.
The final step is the glass cover plate that protects the
cell from the elements. PV modules are made by connecting
several cells (usually 36) in series and parallel to achieve
useful levels of voltage and current, and putting them in
a sturdy frame complete with a glass cover and positive and
negative terminals on the back.
How much sunlight energy does our PV cell absorb?
Unfortunately, the most that our simple cell could absorb is
around 25 percent, and more likely is 15 percent or less. Why
so little?
How Silicon Makes a Solar Cell
Silicon has some special chemical properties, especially in
its crystalline form. An atom of silicon has 14 electrons,
arranged in three different shells. The first two shells,
those closest to the center, are completely full. The outer
shell, however, is only half full, having only four electrons.
A silicon atom will always look for ways to fill up its last
shell (which would like to have eight electrons). To do this,
it will share electrons with four of its neighbor silicon
atoms. It's like every atom holds hands with its neighbors,
except that in this case, each atom has four hands joined to
four neighbors. That's what forms the crystalline structure,
and that structure turns out to be important to this type of
PV cell.
We've now described pure, crystalline silicon. Pure silicon
is a poor conductor of electricity because none of its
electrons are free to move about, as electrons are in good
conductors such as copper. Instead, the electrons are all
locked in the crystalline structure. The silicon in a solar
cell is modified slightly so that it will work as a solar
cell.
A solar cell has silicon with impurities -- other atoms mixed
in with the silicon atoms, changing the way things work
a bit. We usually think of impurities as something
undesirable, but in our case, our cell wouldn't work without
them. These impurities are actually put there on purpose.
Consider silicon with an atom of phosphorous here and there,
maybe one for every million silicon atoms. Phosphorous has
five electrons in its outer shell, not four. It still bonds
with its silicon neighbor atoms, but in a sense, the
phosphorous has one electron that doesn't have anyone to hold
hands with. It doesn't form part of a bond, but there is
a positive proton in the phosphorous nucleus holding it in
place.
When energy is added to pure silicon, for example in the
form of heat, it can cause a few electrons to break free of
their bonds and leave their atoms. A hole is left behind in
each case. These electrons then wander randomly around the
crystalline lattice looking for another hole to fall into.
These electrons are called free carriers, and can carry
electrical current. There are so few of them in pure silicon,
however, that they aren't very useful. Our impure silicon
with phosphorous atoms mixed in is a different story. It
turns out that it takes a lot less energy to knock loose one
of our "extra" phosphorous electrons because they aren't tied
up in a bond -- their neighbors aren't holding them back. As
a result, most of these electrons do break free, and we have
a lot more free carriers than we would have in pure silicon.
The process of adding impurities on purpose is called doping,
and when doped with phosphorous, the resulting silicon is
called N-type ("n" for negative) because of the prevalence of
free electrons. N-type doped silicon is a much better
conductor than pure silicon is.
Actually, only part of our solar cell is N-type. The other
part is doped with boron, which has only three electrons in
its outer shell instead of four, to become P-type silicon.
Instead of having free electrons, P-type silicon ("p" for
positive) has free holes. Holes really are just the absence
of electrons, so they carry the opposite (positive) charge.
They move around just like electrons do.
The interesting part starts when you put N-type silicon
together with P-type silicon. Remember that every PV cell has
at least one electric field. Without an electric field, the
cell wouldn't work, and this field forms when the N-type and
P-type silicon are in contact. Suddenly, the free electrons
in the N side, which have been looking all over for holes to
fall into, see all the free holes on the P side, and there's
a mad rush to fill them in.
Photovoltaic Cells: Converting Photons to Electrons
The solar cells that you see on calculators and satellites
are photovoltaic cells or modules (modules are simply a group
of cells electrically connected and packaged in one frame).
Photovoltaics, as the word implies (photo = light, voltaic =
electricity), convert sunlight directly into electricity.
Once used almost exclusively in space, photovoltaics are used
more and more in less exotic ways. They could even power your
house. How do these devices work?
Photovoltaic (PV) cells are made of special materials called
semiconductors such as silicon, which is currently the most
commonly used. Basically, when light strikes the cell,
a certain portion of it is absorbed within the semiconductor
material. This means that the energy of the absorbed light is
transferred to the semiconductor. The energy knocks electrons
loose, allowing them to flow freely. PV cells also all have
one or more electric fields that act to force electrons freed
by light absorption to flow in a certain direction. This flow
of electrons is a current, and by placing metal contacts on
the top and bottom of the PV cell, we can draw that current
off to use externally. For example, the current can power
a calculator. This current, together with the cell's voltage
(which is a result of its built-in electric field or fields),
defines the power (or wattage) that the solar cell can
produce.
That's the basic process, but there's really much more to it.
Solar Cells
You've probably seen calculators that have solar cells --
calculators that never need batteries, and in some cases
don't even have an off button. As long as you have enough
light, they seem to work forever. You may have seen larger
solar panels -- on emergency road signs or call boxes, on
buoys, even in parking lots to power lights.
Although these larger panels aren't as common as solar
powered calculators, they're out there, and not that hard to
spot if you know where to look. There are solar cell arrays
on satellites, where they are used to power the electrical
systems.
You have probably also been hearing about the "solar
revolution" for the last 20 years -- the idea that one day we
will all use free electricity from the sun. This is
a seductive promise: On a bright, sunny day, the sun delivers
approximately 1,000 watts of power per square meter of the
planet's surface, and if we could collect all of that energy,
we could easily power our homes and offices for free.
We will examine solar cells to learn how they convert the
sun's energy directly into electricity. In the process, you
will learn why we are getting closer to using the sun's
energy on a daily basis, and why we still have more research
to do before the process becomes cost effective.
What are the best settings for e-mailing or printing digital pictures?
In general, if you are e-mailing the pictures to friends who
will view them on a computer screen, you will want to send
them pictures in the jpeg format at 640 x 480 pixels. If you
are printing the pictures, you need about 150 pixels per inch
of print size. So you would not want to print your 640 x 480
images at a size bigger than 4 x 3 inches.
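The arithmetic is simple enough to check yourself. Here is
a short Python sketch applying the 150 pixels-per-inch rule of
thumb to a few common image sizes:

    PPI = 150   # pixels per inch of print

    def max_print_size(width_px, height_px):
        return width_px / PPI, height_px / PPI

    for w, h in [(640, 480), (1280, 960), (2048, 1536)]:
        pw, ph = max_print_size(w, h)
        print(f"{w} x {h} pixels -> about {pw:.1f} x {ph:.1f} inches")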
Cameras can be quite complicated and use unintuitive jargon.
Your camera probably has several different picture quality
and picture size settings. For example, we'll go through all
of the quality settings of one of the cameras we use. We took
the same picture in all of the different modes and here are
the results.
Setting Name Format Quality Pic Size (pixels) File Size
TIFF TIFF No Compression 2048 x 1536 9,231 kB
SHQ jpeg 97% 2048 x 1536 1,391 kB
HQ jpeg 91% 2048 x 1536 682 kB
SQ1 jpeg 87% 1280 x 960 249 kB
SQ2 jpeg 73% 640 x 480 62 kB
Quality
Most cameras store the images in jpeg format. This is
a compressed format that reduces the file size of the images.
Some cameras also have an option to store the pictures in
an uncompressed format (like TIFF). Generally you will want
to use the jpeg format because the uncompressed pictures will
quickly eat up the storage space on your camera. There are
different levels of compression for the jpeg format. Some
cameras have good, better and best settings. These settings
can be equated to a quality level parameter of jpeg
compression. If the quality level gets down into the 60
percent range, you might start to notice little squiggles and
extra graininess.
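If you want to see the trade-off for yourself, a few lines of
Python will re-save one of your photos at several jpeg quality
levels and report the resulting file sizes. This assumes the
Pillow imaging library is installed (pip install Pillow) and
that you have a picture named photo.jpg to experiment with;
both are just example assumptions.

    from io import BytesIO
    from PIL import Image

    img = Image.open("photo.jpg")
    for quality in (97, 91, 87, 73, 60):
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        print(f"quality {quality}: {buf.tell() / 1024:.0f} kB")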
Size
The picture size is usually adjustable too. The picture size
is measured in pixels, so you need to pay attention to how
many pixels wide and high the pictures you take are.
Generally, a computer screen is 800 to 1200 pixels wide, with
800 being the most common setting. If you are e-mailing
someone a picture that they are going to look at on their
screen, then there is no reason to send them a picture bigger
than their screen. Many cameras take pictures at 640 x 480
pixels, which is a good size for viewing on a screen. For
comparison, the largest photos we use at How Stuff Works are
about 400 x 300 pixels.
For printing, the general rule is that you want 150 to 200
pixels per inch of print size. Kodak recommends the following
minimum resolutions for these different print sizes.
Print Size Megapixels Image Resolution
Wallet 0.3 640 x 480 pixels
4 x 5 inches 0.4 768 x 512 pixels
5 x 7 inches 0.8 1152 x 768 pixels
8 x 10 inches 1.6 1536 x 1024 pixels
On our camera, the SQ2 pictures are perfect for e-mailing.
The SQ1 pictures are good for printing at 5 x 7 inches, which
is nice because you can get two pictures onto a single sheet
of 8.5 x 11 paper. And the HQ, SHQ and TIFF settings all make
nice full-page prints. But you can see that the file size of
the biggest images quickly gets too big to e-mail.
RFID Waves Visualized and Demystified Using an LED Wand
Two Oslo-based design researchers have created a visual model
of RFID fields in an effort to show curious designers how
RFID looks and works, and help shed light on its
functionality.
The project was carried out by Jack Schulze and Timo Arnall
as a collaboration between BERG, a British design consulting
firm, and Touch, a research project housed in the Oslo School
of Architecture and Design, with the express purpose of studying
near-field communication and wireless proximity-based
technologies.
The video was created using a custom-created LED wand that
lights up whenever it is in the presence of an RFID field.
They collected images of the wand glowing at various points,
then created a composite animation from those pictures, which
ends up looking like a 3d atomic orbital.
With this video, the Touch Project hopes to educate designers
about a technology they feel is generally misunderstood, and
inspire them to use RFID in creative new ways. They even
created a fancy new logo, which is based around the distinct
look of the RFID electromagnetic field.
Scientists Use Precise Flashes of Light to Implant False
Memories in Fly Brains
Neuroscientists have already spent the better part of
a decade manipulating animal minds by using light signals to
trigger genetically encoded switches. But a new study has now
directly reprogrammed flies to fear and avoid certain smells,
and all without the usual Pavlovian shock treatments.
The technique supposedly permits "writing directly to
memory," and allowed one scientist to enthuse about being
able to "seize control of the relevant brain circuits" for
producing all sorts of mental states and behavior.
Researchers have discovered 12 specific brain cells that they
can stimulate to implant the false memories of events that
never occurred -- except in the mind, of course.
This represents just one of the latest steps in the
relatively new field of optogenetics, where scientists encode
genetic switches inside certain cells and trigger the
switches using tailored flashes of light. The genetic
switches are made from eye cells that can translate light
into the electrical signals used for communication by neurons.
Plenty of past research has manipulated the minds of animals
and humans alike by using more blunt methods such as
electrodes inserted into the brain. But optogenetics has
taken mind control to a new level by permitting researchers
to target very specific types of brain cells by merely
flashing specific light signals.
The team from the University of Virginia and Oxford
University in the UK even hints that such work could
eventually go beyond flies. Their technique certainly makes
the brainwashing of The Manchurian Candidate look rather
coarse by comparison -- if that drama were enacted by tiny
insects.
20 Teams Build High-Tech Houses in "Solar Village" Competition
The National Mall was transformed into a futuristic commune
for the past two weeks as 20 teams from four countries
erected solar-powered homes.
The bright future of green living has been on display for the
past two weeks at the National Mall in Washington, D.C.,
during the Department of Energy's 2009 Solar Decathlon. The
biennial contest, which wraps up this weekend, brings
hundreds of university students from around the world to
a temporary solar village for two weeks, where spectators can
walk through student-designed houses and marvel at the latest
green tech.
These solar homes have it all, including things that aren't
commercially available yet -- like self-activating
curled-metal shades; walls made of plants, both living and
recycled; and roofs that tilt at the sun, making them
efficient sun-catchers from Phoenix to Fargo. Worried about
efficiency while you're away? How about an iPhone app that
controls your entire house?
Teams include engineering, architecture, graphic arts and
marketing students, who typically wouldn't work together
until they reach the workforce.
Team Germany's "surprising" design took first place overall,
partly because their house performed so well in the net
metering contest, which measured how much net energy the
house produced and consumed throughout the competition. The
house had solar panels on the walls as well as the roof,
which improved its performance even with cloudy conditions.
Team Germany scored 150 of 150 points in net metering,
catapulting them over the University of Illinois at
Urbana-Champaign to win the title.
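Net metering here simply means comparing how much energy the
house generates with how much it consumes over the same
period. The toy Python sketch below illustrates the idea with
invented hourly readings; it is not the Decathlon's actual
scoring method.

# Toy sketch: net energy = production minus consumption over
# a series of hourly readings. The kWh figures are invented.
produced_kwh = [0.0, 0.1, 0.8, 1.5, 1.9, 1.6, 0.7, 0.1]  # daylight peak
consumed_kwh = [0.4, 0.3, 0.5, 0.6, 0.7, 0.6, 0.9, 1.1]  # household load
net = sum(produced_kwh) - sum(consumed_kwh)
print(f"Net energy: {net:+.1f} kWh ({'surplus' if net >= 0 else 'deficit'})")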
Aside from being an unrivaled educational opportunity, the
decathlon is a proving ground for a new generation of
energy-efficient products and designs. Some, like floor
heating tubes warmed by the sun, seem so obvious it's
a wonder every house doesn't already have them; others are
decidedly futuristic.
A few stand-outs:
* Team Arizona's hinged roof, which moves to match the
angle of the sun's rays
* Team Missouri's eco-roof and wall materials, harvested
from crops grown in the state, including sorghum and oak
* Team California's instantly-hot showers, which work by
circulating water through a heat pump activated by a bathroom
motion detector
* Team Boston's micro-inverters, which power a few solar
panels each and cost a fraction of the price of a regular
photovoltaic electricity converter
* Team California's and Virginia Tech's use of iPhone
apps to control the homes' solar-electric, entertainment,
heating and lighting systems
* Team Boston's windows, developed with Hunter-Douglas,
which combine gas, gel and air layers to form
a heat-absorbing wall when the sun hangs low in the winter;
heat radiates throughout the house when the sun sets.
"You learn skills like communicating, team-building,
executing, all these things -- we call it Startup 101,"
said Preet Anand, a senior at Santa Clara University in
Santa Clara, Calif., who is the lead water engineer on Team
California's Refract House. "What we have learned ... you
can't compare that to any other college experience."
Part of the decathlon's mission is to speed up delivery of
emerging technology to the marketplace. Several teams worked
with companies in their home states to invent new materials
or products, some of which are awaiting new patents.
Valence Energy, a company made up of Santa Clara University
alumni who participated in the 2007 contest, helped Team
California design a whole-house control system that can be
operated via an iPhone app, Anand said. Lighting,
entertainment, heating and water systems, even the window
shades all connect to a master computer users can access
remotely.
"They helped make everything talk to each other. So you can
be on the iPhone or the Web site, and you can change the
temperature of your house from the car on the way home," he
said.
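The article doesn't describe the software behind that
system, but the general idea (a master computer in the house
exposing a simple network endpoint that a phone app or Web
page can call) can be sketched with nothing beyond Python's
standard library. Everything below is hypothetical, not
Valence Energy's or Team California's real code.

# Hypothetical sketch only: a master computer exposing one HTTP
# endpoint that a phone app could call to set the house
# temperature remotely. Not the teams' real software.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
house_state = {"target_temp_c": 21.0}
class ControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/temperature":
            self.send_error(404)
            return
        new_temp = parse_qs(url.query).get("set")
        if new_temp:
            house_state["target_temp_c"] = float(new_temp[0])
        body = f"target temperature: {house_state['target_temp_c']} C\n"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body.encode())
if __name__ == "__main__":
    HTTPServer(("", 8080), ControlHandler).serve_forever()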
Iowa State University's team worked with a firm called
AccuTemp Energy Solutions and with Pella, the window and door
manufacturer, to create a better-insulated door for its
Interlock House.
Timothy Lentz, a mechanical engineering graduate student at
Iowa State, said the door uses vacuum insulation panels to
reach an insulation value of R40 -- the level of a typical
ceiling, and an unheard-of rate for a door.
"This makes it almost a wall," Lentz said.
Incidentally, many homes are so well-sealed that special
ventilation systems also had to be invented. Team Alberta,
made up of students from four post-secondary schools in the
Canadian province, designed a hot and cold air exhaust system
that saves as much energy as possible. An energy recovery
ventilator, which is basically a box fan covered in a special
material, allows heat transfer between outgoing and incoming
air.
"In old homes, you don't need to worry about mechanical
ventilation because the homes were so leaky. That is not
really true in newer, high-performance homes," said Michael
Gestwick, who is pursuing a master's degree in environmental
design from the University of Calgary. "Our system is highly
integrated, where many other systems that you'll see are kind
of decoupled -- you have one system to do the heating, and
a separate one for the cooling, and a ventilating machine on
top of all that. We took all these pieces and put them
together and wrote control logic to make it work together."
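The core of an energy recovery ventilator comes down to one
line: incoming fresh air is pre-warmed by the outgoing
exhaust, so it is delivered at roughly
T_out + e * (T_in - T_out), where e is the effectiveness of
the recovery core. A small Python sketch with assumed numbers
(not Team Alberta's actual specifications):

# Minimal sketch: sensible heat recovery in an ERV. The
# effectiveness and temperatures below are assumptions.
def supply_temp_c(t_outdoor, t_indoor, effectiveness):
    return t_outdoor + effectiveness * (t_indoor - t_outdoor)
t_out, t_in, eff = -10.0, 21.0, 0.75   # a cold day, 75% effective core
print(f"Fresh air at {t_out} C is delivered at "
      f"{supply_temp_c(t_out, t_in, eff):.1f} C after recovery.")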
The teams all spent about two years designing, planning and
building their homes. Each house had to be assembled at its
respective university, taken apart to be trucked to
Washington, and re-assembled on the Mall before the
competition began. The houses all feature the latest
energy-efficient appliances and home entertainment systems --
teams must cook, do laundry and host movie night, among other
typical household activities. The 10 categories are meant to
prove that solar homes can not only be cool and efficient,
but comfortable and livable.
Some teams took the latter to heart, knowing many
eco-conscious consumers might not want to live in a house
resembling something out of the Jetsons.
"The other houses, while they are really cool and have all
the bells and whistles, they kind of look like a spaceship.
They wouldn't really fit in areas that we think of, like
mid-size Iowa towns," Lentz said.
The Iowa State house resembles a ranch-style home sliced in
half, with a roof that slants toward the sun. Others took it
even further -- the University of Minnesota and University of
Illinois Urbana-Champaign teams designed homes with
traditional-looking gabled roofs, even opting to sacrifice
energy-collection capacity for the purpose of aesthetics.
Form versus function is always cause for debate in
architecture; Jeff Stein, dean of Boston Architectural
College, part of Team Boston, said the decathlon provides
a new way of thinking about both. He noted that in Western
society, people spend an average of 72 minutes a day outdoors.
"Buildings are hugely important, and they are way more
important than the amount of attention we've been giving them
in the last generation. Now, here comes a different way of
thinking about them, and the solar decathlon is a trigger for
making that (transformation) come into play," he said.
Plus, it will help students find jobs. Lentz, from Iowa
State, said he wants to work in the realm of building and
energy efficiency.
"There is a lot of room for improvement there," he said.
"I've met a lot of people who donated products or services
who keep asking, 'When do you graduate?'"
Holographic Projector Puts Heads-Up Displays on Your Car's Side View Mirrors
A compact holographic projector displays information for
drivers through a two-way wing mirror.
Soon the Dark Knight and other wealthy folk may not represent
the only people tearing around with a holographic heads-up
display (HUD) for their rides. A new prototype unveiled today
is small enough to fit inside a rear-view or wing mirror and
display car speed or distance between vehicles in real time.
This holographic projection device is far smaller
than current car HUD systems, which require large
liquid-crystal arrays and optics. By contrast, the device
developed by Light Blue Optics uses constructive and
destructive interference of light to compose its holographic
images.
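Light Blue Optics hasn't published its algorithm, but the
general idea behind computer-generated holography can be
sketched: compute a phase pattern whose far-field diffraction
(its Fourier transform) approximates the target image. The
single-pass Python version below, seeded with a random phase,
is a textbook simplification, not the company's actual
method.

# Simplified sketch of computer-generated holography: derive
# a phase-only hologram whose far-field diffraction pattern
# approximates a target image. Illustration only.
import numpy as np
target = np.zeros((256, 256))
target[100:156, 60:200] = 1.0                  # a bright bar as the "image"
rng = np.random.default_rng(0)
field = target * np.exp(2j * np.pi * rng.random(target.shape))
hologram_phase = np.angle(np.fft.ifft2(field))  # phase pattern to display
# Replay the phase-only hologram and inspect the far field.
replay = np.abs(np.fft.fft2(np.exp(1j * hologram_phase))) ** 2
print("energy concentrated in target region:",
      replay[100:156, 60:200].mean() > replay.mean())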
The company designed its prototype to project an image
through a two-way mirror, so that information appears
superimposed over the reflected road view. The device went on
display at the Society for Information Display's Vehicles and
Photons 2009 symposium held in Dearborn, Michigan.
Such a device could improve driving safety by allowing
drivers to get critical information without having to look
away from the road. Light Blue Optics also says that their
technology works just as well on forward displays, such as
a car windshield. Just don't expect to drive cars with the
devices for at least several more years.
Sunday, November 22, 2009
Power Supply Problems
The PC power supply is probably the most failure-prone item
in a personal computer. It heats and cools each time it is
used and receives the first in-rush of AC current when the PC
is switched on. A stalled cooling fan is often the first
warning of an impending power supply failure, because
components inside the supply then overheat. All devices in
a PC receive their DC
power via the power supply.
A typical failure of a PC power supply is often noticed as
a burning smell just before the computer shuts down. Another
problem could be the failure of the vital cooling fan, which
allows components in the power supply to overheat. Failure
symptoms include random rebooting or failure in Windows for
no apparent reason.
For any problems you suspect to be the fault of the power
supply, use the documentation that came with your computer.
If you have ever removed the case from your personal computer
to add an adapter card or memory, you can change a power
supply. Make sure you remove the power cord first, since
voltages are present even though your computer is off.
Power Supply Improvements
Recent motherboard and chipset improvements permit the user
to monitor the revolutions per minute (RPM) of the power
supply fan via BIOS and a Windows application supplied by the
motherboard manufacturer. New designs also offer fan
control, so the fan runs only as fast as the current cooling
load requires.
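Those fan-speed readings come from the motherboard's
hardware-monitoring chip, so the vendor's Windows utility is
only one way to see them. As an illustration (on Linux rather
than Windows), the values can usually be read straight out of
/sys/class/hwmon:

# Illustrative sketch (Linux): print fan speeds exposed by the
# hardware-monitoring chip under /sys/class/hwmon.
import glob, os
for path in glob.glob("/sys/class/hwmon/hwmon*/fan*_input"):
    with open(path) as f:
        rpm = f.read().strip()
    name_file = os.path.join(os.path.dirname(path), "name")
    chip = open(name_file).read().strip() if os.path.exists(name_file) else "unknown"
    print(f"{chip}: {os.path.basename(path)} = {rpm} RPM")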
Some newer computers, particularly those designed for use as
Web servers, provide redundant power supplies. This means
that there are two or more power supplies in the system, with
one providing power and the other acting as a backup. The
backup supply takes over immediately if the primary supply
fails, and the failed unit can then be swapped out while the
other power supply keeps the system running.