March 20, 2023

Beware the Surveys: Remember the Delphi Technique

The State of Maine is in the middle of gathering information from licensed hunters and fishermen, and in some cases the general public, which we are told the state wants in order to make better decisions on how to proceed with creating new wildlife management plans. Science used to be, and still is, the best way.

Two surveys were recently completed by Responsive Management in cooperation with the Maine Department of Inland Fisheries and Wildlife (MDIFW). One is the 2016 Maine Big Game Survey, which collects responses from licensed hunters, both resident and nonresident, large landowners, and the general public about hunting in general and specifically about land access and the big game species of bear, moose, deer, and turkey.

The other survey, completed by the same entities, collects responses only from licensed fishermen, both resident and nonresident. There will also be public meetings held across parts of the state to listen to the public about big game management and fisheries management. Check the MDIFW website for times and places.

Surveys and polls are basically useless instruments. Just how useless depends upon the methodology of the poll or survey, i.e., who is surveyed, the demographics, who funded the survey, how the questions are asked, and the words used to form them. I have taken some time to examine the survey results for both the hunting and fishing reports. As far as surveys go, these two are not terribly bad at manipulating questions and/or drawing conclusions that are misleading. But…

Before I get into a couple of specifics of these two surveys, let me give readers a chance to understand how questions and answers are manipulated to achieve desired results. I’m not suggesting that anything in these surveys was done deliberately. I am suggesting that, through indoctrination over the years, survey and poll administrators learn how to present questions from trained experts who do know how to fudge data. There is lots of money to be made from doing that. Even though in this case the administrators may not have deliberately devised questions to mislead, human nature, along with the inherent nature of polls and surveys, will render faulty results.

Those who have read my writings, and in particular my book, “Wolf: What’s to Misunderstand?”, know that during the process that led up to the (re)introduction of wolves into the Greater Yellowstone Ecosystem, Congress granted $200,000 to a group that wanted to introduce wolves, to answer specific questions Congress had about the wolf. (The questions are immaterial to this article.)

Because this group wanted wolves badly, they set out to prove to Congress, truth be damned, that wolves were a good thing. This is where a faintly recognized term surfaced: the Delphi 15. It came about because this Congressionally appointed group went out and contracted (not necessarily for money) 15 “scientists” to answer some questions that the group would use to convince Congress. The identities of the 15 scientists were kept secret, and none of them knew that any others were involved, much less who they were.

I discovered through research that this group, ordained by Congress, quietly and as secretly as they could, opted to use what is called the Delphi Technique to achieve the answers they were looking for, answers that would fit their wolf narrative. Fifteen scientists were unknowingly conned via the Delphi Technique, and there we get the title, the Delphi 15.

To better explain the Delphi Technique, below is what I wrote in my book, “Wolf: What’s to Misunderstand?”

 

The Delphi Technique

Take notice that Dr. Bergerud, in his email, states: “I believe US Fish and Wildlife hired a consultant with questionnaire skills.” Bingo! That is what the Delphi Technique is. Some of us may not know much of anything about the Delphi Technique, but I’m willing to wager most of us have been guinea pigs to it.

The best way to define the Delphi Technique in simple terms is to say that it is a method in which those administering a brainstorming session, or in this case a “questionnaire,” manipulate the questions and the procedures in order to get the end result that is desired. Let me give two classic examples of this.

First would be a poll. Every day in our lives, and perhaps more so in the news, we are constantly barraged by the results of polling. Should we believe the results of the polls we are given? Absolutely not, especially when we are not given the questions and the structure and context in which those questions were presented.

Most of us have answered poll questions. Have you ever been given one in which you really could not find the “correct” answer, that is, the honest answer you would give if you had the opportunity? That is the result of the Delphi Technique.

The other, perhaps not quite as obvious yet a part of everybody’s life, is the brainstorming session. These take on several names, such as symposium, seminar, public forum, town hall, etc. I’ve been involved in many, but for most of them I had no idea what was really going on. I certainly do now, and I avoid them like the plague.

Those administering the event are usually led by one or two people. It is those people who “know what they are doing.” They want to achieve a specific result and therefore must manipulate the setting and events to their advantage, much in the same way a magician does.

Let’s say that you attend a public forum to gather input from the public on ways to make your community a “better place.” Who gets to decide what a “better place” is? What most people don’t know is that those administering the forum have already decided what will make your community a “better place.” Their job is to make you think you were part of the decision-making process. All they have to do is present some kind of evidence showing that the majority of people in your community decided what a “better place” was, and then make it happen.

Often we find ourselves being placed into “breakout sessions.” These come complete with a table and/or chairs in a circle, an easel board and a facilitator. It’s the trained facilitator’s job to force the hand to achieve a desired result.

During this brainstorming breakout session, you might be asked to offer ideas on what would make your community a better place. Take notice, the next time you find yourself in this setting, that the facilitator will prompt or nudge the group with “ideas” of his or her own. These “ideas” are predetermined. Seldom are results announced during the forum itself. We might be told that the consensus will be shared later. That consensus can easily be claimed, because each facilitator, by instruction, added the same “idea” to each group’s list, making it a majority “consensus.”

Imagine what the results would be if the administrator and the facilitators only offered up questions, like in a poll, that forced participants to provide answers they didn’t really want to give.

Dr. Bergerud indicated in his statement the tactics used by Delphi administrators. He said that nobody in the group of 15 knew who the others were. This is very important. They could not, before, during, or after, consult with one another. After all, they might discover they had been duped.

In the Volume I Summary of “Wolves for Yellowstone: A Report for the United States Congress,” the authors willingly exposed some of the schemes of the Delphi Technique when they wrote that they had withheld important information from the 15 members while seeking their opinions on the subject. Does the “Best Available Science” operate on opinions obtained from scientists who are denied information and data? Does this “science” have “different meaning for different people?”

In essence, that is the Delphi Technique that was used on the “Delphi 15,” those commissioned by the United States Congress to get answers to four questions. Do we have the exact questions given to the 15 members? Do we have the exact answers provided by the 15 experts? Is this what is described by the United States Government as the “Best Available Science?”

<end>

Within the two surveys used in Maine, questions are sometimes asked while seeking an answer from more than two choices. The wording of the choices can be crafted in such a way as to mislead or misdirect the survey taker. In addition, the responses sought may not cover the full spectrum of what the person being surveyed might answer if simply asked to give their opinion of something. It is also relevant to report that when people read such reports, they cherry-pick, or are misled, and see only what they want to see or what the administrators of the survey want them to see.

Let me give a couple of examples from the hunting survey. In offering summaries and explanations of methodology, the surveyors wrote: “Another question gauged respondents’ comfort level regarding wildlife around their homes. Using a continuum from the most comfortable (“I enjoy seeing and having wildlife around my home or on my property”) to the least comfortable (“I generally regard wildlife around my home or on my property as dangerous”), a large majority of each group (70% of the general population and 80% each of landowners and hunters) chose the highest comfort level, and nearly all the rest chose the second most comfortable level.”

Many of you may ask what’s wrong with that. It’s easy to explain, really. The surveyors are seeking what they call a “comfort level,” but they don’t directly define what that means. Instead, they offer us an example of both ends of what they call a “continuum.” Note that at the “least comfortable” end, the choice offered is that respondents “generally regard wildlife…as dangerous.” Dangerous? Where did dangerous come in? Can’t property owners simply not care whether they see wildlife, even if they don’t think it’s dangerous? Think of the possibilities of how questions and choices of responses can misrepresent truth.

Another example in the hunting survey has to do with gathering input from respondents on their knowledge of the animal species bear, moose, deer, and turkey. Exactly how the survey takers were asked the question I don’t know, but the results show that an overwhelming majority of people answered that they knew “a great deal or a moderate amount” about the four species. This, of course, is simple self-perception. If the survey question does not contain qualifiers, such as whether the respondent has a degree in wildlife biology, what value does such a question hold? Is this used to convince the uneducated public and unsuspecting wildlife managers that because the respondents know so much about the species, their answers have scientific value? (Only if convenient?)

What struck me most about the fishing survey was this, written in the Executive Summary: “The study entailed a telephone survey of resident and nonresident licensed anglers in Maine, age 16 years or older.” The survey is designed ONLY for licensed fishermen, resident and nonresident. While the hunting survey involved the general population, the fishing survey does not. While perhaps not completely necessary, from what I gather, the surveyors didn’t go out of their way to explain why the general public was not also surveyed about fishing and its support or non-support of the sport. Isn’t this important to wildlife managers? They tell us repeatedly that, as far as hunting goes, they must make their decisions based on social toleration. As a result, the survey provides no examples of why a member of the general public might choose not to fish. Isn’t it just as important to understand the reasons not to fish as it is to understand what kind of fishing one prefers?

Let me further explain. I was reading George Smith’s article this morning about how this fishing survey proves that anglers don’t care if they catch big fish or a lot of fish. This may or may not be true, but do we really know that? Smith also writes: “Is it possible that if the fishing or hunting sucks for a long, long time folks forget what they are doing.  Habit without product?  The new breed of conservationist?”

This statement also holds a certain amount of truth. There’s also another byproduct of Maine fishing and tainted surveys that can be misleading and/or fail to give the whole picture, as we see above. Smith writes in his article that 77% of Maine fishermen support catch and release rules, and yet the survey says that 81% of fishermen did not choose to fish in designated catch and release waters. Choosing what fits our narrative?
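To see how easily those two percentages can be spun, here is a minimal sketch using entirely hypothetical, made-up angler counts (the survey does not publish this cross-tabulation). “Supporting” catch and release rules and “fishing in designated catch and release waters” are answers to two different questions, so both headline numbers can describe the very same group of fishermen while telling us nothing about what those fishermen actually want:

```python
# Hypothetical illustration only -- these counts are invented, not taken
# from the Responsive Management survey.
anglers = 1000

supports_rules = 770        # 77% say they "support" catch and release rules
fished_cr_waters = 190      # 19% fished designated waters, so 81% did not

# Even if every angler who fished those waters is also a supporter,
# most supporters never wet a line in designated catch and release waters.
supporters_who_fished = min(supports_rules, fished_cr_waters)
supporters_who_did_not = supports_rules - supporters_who_fished

print(f"Support catch and release rules:     {supports_rules / anglers:.0%}")
print(f"Did NOT fish designated C&R waters:  {(anglers - fished_cr_waters) / anglers:.0%}")
print(f"Supporters who never fished them:    {supporters_who_did_not} of {supports_rules}")

# Neither number says whether anglers WANT more catch and release rules;
# either one can be pulled out alone to fit whatever narrative suits the writer.
```

Again, the counts above are invented; the point is only that the two percentages, by themselves, cannot settle the question.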

To fully understand the results of the fishing survey, you would have to fully understand the demographics. While demographics are in the survey, the average person cannot distinguish why people mostly fish with a spinning rod and reel and yet support catch and release rules. Is it that they prefer everyone else practice catch and release so they have more fish to catch and keep? Or, as it appears to me, did whoever presented the question ask it in such a way as to create confusion and/or mislead? According to what Smith wrote, 77% of fishermen “support” catch and release rules. Does that mean they WANT more catch and release rules, or are we just being told to think that catch and release is preferred over catch and keep?

The answer is, we don’t know, nor does this survey tell us. Therefore, if we bear in mind that statistics prove that statistics can prove anything, managers, sportsmen, pundits and my grandmother can pull out of these surveys anything they want that fits their narrative. I suppose MDIFW managers will do the same thing, which leaves us with the question, why did we spend all this time and money on these useless surveys?

There will be public meetings where citizens can go and say their piece. There will also be an online survey where anyone interested can offer comments. Please bear in mind what I hope I have taught you here.

When all is said and done, MDIFW will spend the greatest part of its time copying and pasting all the previous game management plans and adding a little change here and a little change there. So why can’t we spend the money and time on worthwhile endeavors?
