A Salt Laker complained in a letter to the Deseret News just before the election about a poll that showed 55 percent of the electorate against the removal of the sales tax on food.

"I am curious how these polls are taken. Are they taken from a selected group? Or are they taken at random from people in a wide area? I've never been asked to participate in a poll, and I've never talked to anyone who has," she wrote.These are fair questions, because pollsters don't explain their methods often or well enough.

Your typical poll report as published in the newspapers merely notes how many people were polled and admits that there is a margin of error of plus or minus so many percentage points from the reported figure.

- GEORGE GALLUP once said that he did not see much point in sending out more information than this with every poll story. He explained that the statistical concept would not be easily understood by all readers, most of whom didn't want to be burdened with it in any event. He also argued that the explanation would require too much space, and, in successive releases, would be repetitive.

Yet because that complaint, "nobody asked me," is so common, pollsters ought at least to explain themselves occasionally, perhaps in a box accompanying poll findings.

When Gallup did so the explanation went something like this:

"We use a random probability sample, that is, draw a sample of people in a way that gives everyone in the population being surveyed an equal opportunity of being questioned. A well-drawn sample is a reasonably reliable microcosm of the group being surveyed.

"When we survey less than the full population, however, we admit that some variability will occur due to chance. That error can be computed with a simple statistical measurement that shows by how many percentage points the findings we got could deviate from the findings had we questioned everyone."

If I were a pollster I might take a chance and go further than this for that part of the readership that wants more detail:

- "THE LARGER THE SAMPLE, the less the sampling error. We could cut the sampling error in half by increasing the sample size fourfold because, according to our formulas, the sampling error diminishes as the square root of the sample increases. But larger samples than the one we've used in this survey result in relatively little reduction in sampling error, so usually aren't worth the added cost."

(In Germany a few years ago the Catholic Church commissioned from a polling agency a survey that literally had millions of respondents, all the adult Catholics in the country. The survey company explained to the church that polling everyone was needlessly cumbersome, time-consuming and expensive. But the sponsors said that the issues in this survey were theological: the soul wasn't divisible, so no one could answer for someone else!)

Despite their occasional imprecision, the polls are still the best way to probe public opinion. And because elections produce concrete, verifiable results, they tell pollsters whether their methods are yielding accurate readings or need to be tuned up.

After the elections pollsters should level with their readers on what they do, how they guard against going wrong and why they sometimes miss the boat.

Pollsters might well tell readers then that statistical reliability in itself doesn't necessarily mean that the survey measured what it was intended to measure. Although we often talk of "scientific surveys," designing a poll is more an art than a science. A survey can meet the tests of statistical reliability but be lousy because it fails to frame the right questions or ask them in a nonprejudicial way. Readers ought in self-defense to look critically at the wording of the questions in any poll.

One thing discerning readers will have noticed in this year's election polling is that even the best of the polls are at the mercy of late changes: intervening events, or a candidate's last-minute foibles, gaffes or achievements.

- THIS YEAR THE USUAL PROBLEMS were compounded by such factors as the general anger of voters, which may have influenced not only how but whether they voted. And in some districts the rise of the minority voter and of black and female candidates complicated polling, presumably because some voters hide their real intentions for fear of being considered bigoted. This year at least one last-minute ad, for Karl Snow, boomeranged, and other attack ads might have had either a positive or a negative effect. In pre-election polls, one persistent difficulty is assessing how, or whether, the undecideds will vote.

Dan Jones, who conducts the Deseret News-KSL poll, is always forthcoming and candid in his post-election analyses in public forums. This year he admitted that he failed to fathom a late "groundswell of Democratic support," especially in the 3rd Congressional District.

This was a treacherous year for the polling profession.

In the polls on the congressional races released the Sunday before the election, both the Jones and the Tribune's Bardsley surveys not only failed to project the winners of the stunning Snow-Orton and Horiuchi-Shimizu races but also were way off target on the vote spread in several others, such as the Barker-Bradley contest.

In their final surveys on the food-tax initiative that bothered that letter writer, however, both polls were pretty much on the button. The measure succumbed by 56-44 percent. The Tribune's Bardsley Poll had it 55-36 against with 9 percent undecided, and the final Jones survey 58-37 against with 6 percent undecided (the arithmetic in the News' story was off a forgivable one point).

MORE AND MORE NEWSPAPERS are using not only "tracking" polls in elections, but other kinds of surveys as well. Once, survey research in the print press was largely confined to a few studies of what readers looked at, done mostly by advertising departments. There were only a few exceptions, such as the famous Iowa Poll done by the Des Moines Register.

Editors tended to be impatient with this kind of research because they hadn't grown up with it and didn't understand it.

Now most newspapers with more than 100,000 circulation not only commission surveys but even have their own research departments. These conduct surveys and gather studies done by others. Some of this research is low-grade, but much is excellent, and it is getting better as the papers share their techniques and experiences. In 1977 a Newspaper Research Council was formed by the American Newspaper Publishers Association. It has grown from 75 to 250 members.