FACE IT

Oct 11, 2007
 
Authors: Beth Malmskog

Imagine sitting down at a computer with your morning coffee to a warm, personalized “Good Morning, (your name here),” or hitting the trailhead with buddies and, instead of fumbling for your ID, having a computer scan your face at the door.

Imagine computers that could look at grainy, unidentifiable images of people in the subway and pick out wanted criminals.

These are just some of the long-term possibilities of computer face recognition.

CSU mathematicians and computer scientists are working to bring this future closer, using geometry to teach computers to recognize human faces.

The version of geometry the researchers are using isn’t quite the version people learn in high school. It involves shapes, but not flat shapes in a plane or even three-dimensional solids.

CSU researchers look at pictures of a person as individual points in high-dimensional space, then look at the shapes that groups of pictures make in that space.

Jen-Mei Chang, a Ph.D. student working on the face recognition project, imagines a system that makes traditional forms of identification obsolete.

“You never have to bring an ID anymore, your face is the key to everything,” Chang said in an e-mail interview.

Chang, who taught several levels of calculus and other math courses during her first five years at CSU, now works on face recognition full-time from her desk in the basement of the Weber building. The research is the pet project of CSU mathematician Dr. Michael Kirby. Mathematicians Dr. Holger Kley and Dr. Chris Peterson, and computer scientists Dr. J. Ross Beveridge and Dr. Bruce Draper, have also collaborated on the project.

The group recently filed for a patent related to their ideas, with the help of the CSU Research Foundation.

Lighting is one problem for computer face recognition. Photos of a given person taken under different lighting conditions can vary drastically in ways that can fool computers, even when a human could easily tell the pictures were of the same person.

But the CSU research group turns this lighting variation, usually considered an obstacle to face recognition, into an advantage. They use objects called “illumination subspaces” to match photos of people with names and pictures stored in a database.

How it works

Pixels are the boxes of color that make up digital images. Each pixel in a photo corresponds to a number that tells the shade of that pixel. These numbers are read off in rows to form one long list.

The list of numbers gives a location in high-dimensional space, just as latitude and longitude give a location on the globe.
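For readers curious about the nuts and bolts, here is a rough sketch in Python (illustrative only, not the group’s actual code) of how a tiny, hypothetical grayscale photo becomes one long list of numbers, that is, a single point in high-dimensional space:

import numpy as np

# A hypothetical 3x4-pixel grayscale photo; each number is the shade of one pixel.
photo = np.array([
    [12, 40, 200, 180],
    [15, 55, 210, 175],
    [20, 60, 190, 160],
])

# Read the pixels off row by row to form one long list.
point = photo.flatten()

print(point)        # [ 12  40 200 180  15  55 210 175  20  60 190 160]
print(point.shape)  # (12,) -- a single point in 12-dimensional space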

Photographs of a single person taken under different lighting conditions will be different points in this high-dimensional space. A person’s illumination subspace is essentially the shape created by lots of pictures of that person’s face (in a fixed position), as lighting on the face changes.
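One simplified way to build such a subspace (a sketch under stated assumptions, not necessarily the researchers’ own method) is to stack several flattened photos of the same face taken under different lights and keep the leading directions they span:

import numpy as np

def illumination_subspace(flattened_photos, dim=3):
    """Return an orthonormal basis (as columns) for the subspace spanned by
    flattened photos of one person under different lighting conditions."""
    X = np.asarray(flattened_photos, dtype=float)  # shape: (num_photos, num_pixels)
    # Rows of Vt are directions in pixel space, ordered by how much of the
    # variation across the photos they capture.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:dim].T                              # shape: (num_pixels, dim)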

CSU researchers were not the first to discover illumination subspaces or even apply them to face recognition. The group is taking the idea in new directions, however.

For example, they use changing lighting to make it possible to identify faces from very low-resolution photographs. They have successfully identified individuals from photos with as few as 25 pixels.

To a human, 25-pixel photos of faces look like abstract art. Features are lost in large, solid blocks of color that reflect the average shades of regions of the image.
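That blocky appearance comes from averaging: each pixel in the tiny image stands in for the average shade of a whole region of the original photo. A toy illustration, assuming a square grayscale photo:

import numpy as np

def block_average(photo, out_size=5):
    """Shrink a square grayscale photo to out_size x out_size pixels,
    replacing each block with its average shade."""
    photo = np.asarray(photo, dtype=float)
    block = photo.shape[0] // out_size
    trimmed = photo[:block * out_size, :block * out_size]
    return trimmed.reshape(out_size, block, out_size, block).mean(axis=(1, 3))

# A 100x100 photo becomes a 5x5 image -- 25 pixels in all.
tiny = block_average(np.random.rand(100, 100), out_size=5)
print(tiny.shape)  # (5, 5)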

The CSU group has found that illumination subspaces are different enough for different people that they can be used effectively in face recognition. They have used illumination subspaces to correctly match individuals in the Yale and Carnegie Mellon University Pose, Illumination and Expression (CMU-PIE) databases with 100 percent accuracy.

The best accuracy comes from comparing groups of photographs. First, a database is built using several photographs of each person. Then a few new pictures of a person, taken under different lights, are compared against the database.

Researchers use the computer to determine the distance from the shape formed by the new pictures to each illumination subspace stored in the database. The person whose subspace is closest to the new pictures is considered a match. This “set-to-set comparison” technique appears to work better than standard industry practice.
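For the technically inclined, here is a minimal sketch of one way a set-to-set comparison can be carried out, assuming each set of photos has been turned into an orthonormal basis as above. The distance used here is built from principal angles between subspaces, a standard mathematical notion; the group’s patented approach may differ in its details.

import numpy as np

def subspace_distance(A, B):
    """Distance between two subspaces, each given as a matrix whose columns
    are orthonormal basis vectors, measured through their principal angles."""
    # Singular values of A^T B are the cosines of the principal angles.
    cosines = np.clip(np.linalg.svd(A.T @ B, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(cosines))

def identify(new_subspace, database):
    """Return the name whose stored illumination subspace lies closest to the
    subspace built from the new pictures (database maps names to bases)."""
    return min(database, key=lambda name: subspace_distance(new_subspace, database[name]))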

Where to go from here

Dr. Chris Peterson said the group is taking advantage of the fact that, as he puts it, “computers have their own ways of perceiving.”

“People are good at pulling pertinent information from complicated scenes,” Peterson said. “Computers are good at extracting information from lots of simple scenes. Computers can discover patterns that aren’t apparent to us.”

Though the results so far could hardly be better, the largest publicly available database with the illumination variations the CSU group needs contains only 67 people.

That doesn’t measure up to the hundreds of people required for a system to be taken seriously in the National Institute of Standards and Technology (NIST) Face Recognition Vendor Test, the industry-standard test for face recognition systems.

The CSU group needs more data.

Privacy concerns and industrial secrecy make large sets of pictures of the type they need hard to come by, so the CSU group has decided to develop their own data set. This is the first project for the brand-new Pattern Analysis Lab at CSU.

The Pattern Analysis Lab consists of two small, windowless rooms on the second floor of the math building. One room holds only a massive computer and the computer’s refrigerator-sized power supply.

The computer has 40 processors, 160 gigabytes of random access memory (RAM) and 10,000 gigabytes of storage memory. For those who are less tech-savvy, new home/laptop computers generally have 1-4 gigabytes of RAM and 80-500 gigabytes of storage memory.

The other room holds monitors, lights and cameras, as well as a hard-to-come-by Fisher-Price record player.

“The most important piece of equipment,” Dr. Michael Kirby joked.

Researchers put plastic jack-o’-lanterns on the turntable and turn the lights on to experiment with varying lighting and pose in their photographs.

Most people might find the computer impressive, but Chang emphasizes that it’s not the computer or the props that make a system powerful. Instead, ideas make the system work.

“It’s the power of the math behind it,” Chang said. “You could have thousands of supercomputers, bad math, and it’s not going to do anything.”

You can check out the Pattern Analysis Lab for yourself by contributing your face to the project. More information is available at the project’s website, www.math.colostate.edu/~kirby/DATA%20SETS.html.

Staff writer Beth Malmskog can be reached at news@collegian.com.

