Surgeons might someday use a keystroke instead of a scalpel to conduct exploratory surgery.

New research in three-dimensional computer imaging pioneered at Brigham Young University is making it possible. "Who would not prefer to be dissected graphically on a computer screen rather than on an operating table?" asks William A. Barrett, an associate professor of computer science.

Barrett's enhanced personal computer depicts a heart pumping as the left ventricle contracts and expands. In another image, a shaded three-dimensional skull turns in several directions for a comprehensive examination. The likenesses are clear and understandable.

Generating the image is like taking 50 or 60 cross-sectional X-rays - each slightly different from the next - and stacking them on top of one another to create a realistic, three-dimensional likeness.
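The stacking Barrett describes can be sketched in a few lines. This is a hypothetical illustration, not the BYU software: the slice count, image size, and array names are assumptions, with each slice standing in for one grayscale cross-section.

```python
import numpy as np

# Hypothetical sketch: combine 60 cross-sectional slices into one volume.
# Each slice is a 256x256 grayscale image; sizes are illustrative only.
num_slices, height, width = 60, 256, 256
slices = [np.zeros((height, width)) for _ in range(num_slices)]  # stand-ins for X-ray data

# Stacking the 2-D slices along a new axis yields a 3-D volume
# that can then be resliced or rendered from any angle.
volume = np.stack(slices, axis=0)
print(volume.shape)  # (60, 256, 256)
```

Once the slices live in a single three-dimensional array, any cut through the data - not just the original cross-sections - can be extracted and displayed.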

"What we're working on could become a meaningful part of presurgical planning," Barrett says. "If doctors can identify precisely where they will be operating before surgery, it not only enhances accuracy but also saves time."

Using the emerging technology, a physician needing to perform a bone graft or some kind of skull implant could customize it in advance with computer-aided technology - as opposed to developing it from scratch once the patient's head is opened during surgery.

Three-dimensional images could also aid in prosthesis design and diagnostic radiology, and help patients who plan to undergo reconstructive or plastic surgery.

Barrett calls the computer images "inherently volumetric technologies," meaning it is possible to capture a volume of data from the head, torso, vertebrae, etc., and look at it as a natural three-dimensional object.

"The result is so accessible, you want to pick it up in your hands and examine it rather than interpret a series of anatomy slices," he adds.

Creating a realistic image, however, is only half the goal with three-dimensional imaging, Barrett says.

"Sure, we can see a patient's anatomy in living color, but my question is, 'Can't we do better than that?'"

He believes doctors will need to be able to hold onto the image, put calipers on it, make measurements and extract numbers.

"I've wanted to create sequences that I could interactively move around with all parts of the body, including the heart. The heart is tricky because not only do we have three dimensions, but it is moving over time as well. To be able to tell you what that looks like in three dimensions - especially when there is pathology on top of that - is exceedingly difficult, but possible."

In response to the desire to interact directly with the image, Barrett and Shawn C. Becker, a BYU graduate studying for a doctorate at Massachusetts Institute of Technology, are developing new three-dimensional technologies.

They have devised shading that offers information about an image's depth and have designed tools that allow the user to go beyond the "static, pretty-picture stage."

"By using some simple geometric devices, we have come up with a fairly rich set of probes or interactive graphic tools to examine the anatomy in a natural way," says Barrett.

The probe is actually a three-dimensional caliper the user can manipulate up, down, around and into the image, providing visual feedback. It's the visual equivalent of rubbing a hand over the surface while also providing measurements.
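The caliper idea above can be sketched as a simple distance measurement between two probe positions in the volume. This is a hypothetical illustration, not Barrett and Becker's tool: the voxel spacing values and function name are assumptions, chosen to show how voxel indices convert to physical millimeters.

```python
import math

# Hypothetical sketch of the "caliper" probe: measure the straight-line
# distance between two probe positions, converting voxel indices to
# millimeters. The spacing values below are illustrative only.
VOXEL_SPACING_MM = (1.5, 0.8, 0.8)  # (slice thickness, row, column)

def caliper_distance(p1, p2, spacing=VOXEL_SPACING_MM):
    """Euclidean distance in mm between two (z, y, x) voxel coordinates."""
    return math.sqrt(sum(((a - b) * s) ** 2
                         for a, b, s in zip(p1, p2, spacing)))

# Ten slices apart at 1.5 mm per slice gives 15 mm.
print(round(caliper_distance((0, 0, 0), (10, 0, 0)), 1))  # 15.0
```

Scaling each axis by its own spacing matters because medical scans are rarely isotropic - the gap between slices is usually larger than the spacing between pixels within a slice.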

"If you wanted to examine the inside of an eye socket, the probe could do it for you," says Barrett. "At the same time, the probe provides other information, such as the socket's exact circumference."

By applying additional mathematical formulas, the tool computes intersections and measures the areas through which the probe's point passes, allowing a thorough examination of the image.
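One way such an area measurement can work is to intersect a plane with the volume and count the voxels above a density threshold. This is a minimal sketch under assumed values - the threshold, pixel area, and toy volume are all illustrative, not taken from the BYU system.

```python
import numpy as np

# Hypothetical sketch of the area computation: where the probe's plane
# intersects the anatomy, estimate the enclosed cross-sectional area by
# counting voxels above a density threshold. Values are illustrative.
PIXEL_AREA_MM2 = 0.8 * 0.8  # in-plane area of one voxel
THRESHOLD = 0.5             # density cutoff separating tissue from background

def cross_section_area(volume, z, threshold=THRESHOLD):
    """Area (mm^2) of above-threshold tissue in axial slice z."""
    mask = volume[z] > threshold
    return mask.sum() * PIXEL_AREA_MM2

# Toy volume: a single 4x4 slice containing four "tissue" voxels.
vol = np.zeros((1, 4, 4))
vol[0, 1:3, 1:3] = 1.0
print(cross_section_area(vol, 0))  # 2.56
```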

"These technologies are gradually coming of age, and I foresee a day not too far in the future when a 'souped up' personal computer will be a regular part of a physician's desktop."