Did you know that larynxes (voice boxes) have personalities? When you watch the vocal cords vibrating inside the cartilaginous larynx, surrounded by softer throat tissue, they look like little faces, contorting as they expand and contract, lengthen and shorten. And Rich Little's voice box, simulating different vocal qualities as he does imitations, is as much fun to watch as an animated cartoon.

The voice has long been a puzzle, for it couldn't really be laid open to examination while it was alive. With the advent of scopes, its workings can be made visible in action and living color. All this Ingo Titze explained in a film lecture at Brigham Young University. He brought along his friend, Pavarobotti, a synthetic tenor who conveniently sang the high notes when Titze's tenor played out.

"Though Pavarobotti can easily simulate a human tremolo, he has plenty of limitations," Titze explained later. "What is easy for machines is often hard for people, and what is easy for people is hard for machines. For instance, machines can't blend vowels or consonants. The balance between variability and consistency is what makes a voice interesting and natural. The computer is best at consistency, and the human voice at variability.

"More and more, the computer is showing us things about the voice that the human ear doesn't detect, and it can perform long-term tracking much better than the human brain. By computer you can scientifically compare where a voice is now, and where it was two months ago; the human ear can't remember this. You can compare properties of two voices side by side."

Titze, a professor of speech pathology and audiology at the University of Iowa, is 20 years into a career studying the many aspects of speech, singing and acoustics, all of which he finds fascinating.

"It was in the 1950s that a synthesizer was first able to simulate the voice, providing the ABC's of the technique," said Titze. "At first it was unmodulated, blah sound. In 1990 we have advanced greatly.

"But the voice is still difficult to study. To do so properly we would have to invade the human body, take the human voice apart; so the computer model is the next best thing. With it, we are able to explore the whole spectrum of the voice - methods of production, clinical disorders, what makes voices exceptional. We can simulate and experiment."

Titze found his life's calling when he took his first voice lessons. He soon developed a passion for investigating the acoustics of voice, and with B.S. and M.S. degrees in electrical engineering from the University of Utah, and a Ph.D. in physics from Brigham Young University, he was well-equipped to look at the voice both as a musical and mechanical instrument.

After wide-ranging experience in teaching and working as a research engineer in aviation and acoustics, Titze settled at the University of Iowa in 1979. He's received dozens of federal and private grants for investigation of the voice, has served as a director, reviewer and editor for associations and publications, and has directed conferences on voice, singing and acoustics.

"The voice is a highly variable instrument," he said. "Because it's biologic, unlike the piano or violin, it changes drastically with time, environment and body conditions. Even though basic principles of physics apply, those have to be augmented by what we know about the biological machine.

"We deal with a whole motor system, and one principle of the human body is that it's very goal oriented. Accordingly, the voice can produce a given output, such as fine quality of sound, in multiple ways - a fact that impacts upon voice teaching, for each person must be approached individually."

Titze likes to work with the computer because "it is infinitely patient, it never gets destroyed, you can't break anything in there," he said. "It can assimilate fragments of information and make them into a complete whole, ultimately containing all the information you need.

"You can simulate vocal chord or cleft palate surgeries by computer, and predict what the outcome will be. You can study the vast number of ways that human speech is affected by operations, accidents and diseases. Computers can help to solve problems of pathologies, such as lumps and nodes; what causes speech deterioration, stuttering, why articulatory deficiencies occur, and what to do about these things, how therapy can help.

"If you want to improve the voice, you can display features of the sound instantaneously, fed in by microphone, and the singer or speaker can see its vibrations and characteristics, and modify instantly," Titze continued. "You can attach sensors to the neck, face, mouth and pick up additional information.

"Say a salesman who talks all day long has hoarseness or fatigue; just at the moment of an important presentation, his voice gives out. We can look at the kind of vocal habits he has, analyze on the computer, and decide how he should change them. We can see the effects of stress, then give exercises that give more output for less effort, show actors how to project, get `more bang for the buck.' The computer gives a handle on what constitutes efficient voice production."

Titze distinguished two kinds of computer simulation: simulation to explain how something works naturally, and simulation to replace it. "They have very different goals," he said. "Much of our simulation is the former, as a research tool, to help clinicians help voices."

He finds an exciting challenge in trying to identify what makes a simulated voice sound really human. "Many have tried, but no one has yet succeeded in creating a synthetic voice that sounds natural, warm and appealing. The 'what' we have, but the 'how' is still a mystery. For instance, how does vibrato originate, and how can we reproduce it?

"Those seeming intangibles between the lines of human speech - emotions, psychological state, personality factors - how do you reproduce them? They come across the distance from one person to another, they are in the air, in the signals we pick up. If we as humans can understand and process them, a computer can too, they are in the acoustic code. But it looks like another 10 or 20 years of research before we will know how to simulate them."

At present, Titze is directing a major grant from the National Institutes of Health, shared by three schools (the Universities of Iowa, Wisconsin and Utah) and the National Center for Voice and Speech in Denver.

The project has four goals: to research the voice and speech mechanisms, and learn more about how speech is created; to provide training for young researchers, who are coming into the field from physics, biology and engineering as well as medicine; to write continuing education materials and provide workshops and seminars to help update practitioners in the field; and to disseminate information to the public, thus promoting vocal health and good speech habits.