February 7, 2023

More Proof That the Scientific Method Has Gone to Hell

I just finished reading an article in which the author claims that utilizing “spotlight surveys” to count deer is a “waste of resources.” Spotlight surveys are when an individual or a biologist sets up cameras in the forest in order to “spot” deer, identify them, and try to determine how many deer inhabit a prescribed area. The author states that the information from these cameras is so inconsistent that the data become useless. I’m not sure one can make such a broad, sweeping statement, completely disregarding the tool and the information gleaned from it, without knowing the processes used in all spotlight surveys. I’m also left with some puzzling questions that need to be asked, along with wondering who, if anyone anymore, has even a basic knowledge of the scientific process.

The author explains why the data taken from spotlight surveys can be so variable that they may become useless, and I sort of, almost, tend to agree. But there’s a lot more to this than is being discussed. Let’s look at the bigger picture first.

The author says that deer biologists wrongly state that, “…[deer] density estimates are a requisite for good deer management,” and he further calls this a fallacy without offering any explanation. I, like probably a few million other deer hunters in America, would like to know how any biologist, or group of them, can responsibly “manage” a deer herd without a solid idea of about how many deer they are dealing with. This population estimate must go beyond a simple statewide guesstimate. It should be broken down into the smallest wildlife management areas practical, which increases accuracy.

If we continue with the belief that deer population estimates are an important and integral part of deer management, then the honest question becomes, “Do deer managers ever know exactly how many deer there are in any place at any time?” Of course they don’t. But how precise are their guesstimates?

One thing most of us understand is that, generally speaking, the more precise we want to be in knowing deer numbers, the more money needs to be spent. However, please understand that most tactics used to count or estimate deer are riddled with poor scientific method.

Before I get into poor scientific method, I wanted to point out something written in the article about the author knowing that spotlight surveys were inaccurate…at best. My first question was: how does he know that? To make a statement this bold, one must know how many deer there actually are. Otherwise, how can one state that the other information is wrong, and say what percentage of the time it is wrong?

That information is not provided, so can we, or do we, assume that within a test area, procedures were undertaken to obtain an “exact” count of deer, which was then compared with what the spotlight surveys said? If that is the case, imagine attempting to do it statewide in Texas. It quickly becomes cost prohibitive, which is the very reason shortcuts for estimating deer populations have been employed.

Are they accurate? As the old saying goes: garbage in – garbage out. Or, a chain is only as strong as its weakest link.

Maine is in the midst of deer and moose studies. A few years ago, as the studies began, the state started aerial counts while boasting of how accurate these fly-overs were. Are they? Perhaps in comparison to other methods, but don’t bet your farm on the results.

Let’s return to basic science. I remember that in 7th grade, the first thing I learned about honestly assessing and obtaining useful data was that all things must be consistent – never changing. In other words, if biologists are doing aerial counts for deer, each flight must be identical to the last one and the one before that, and so on. If it’s 10 or 20 years between aerial counts, every effort must be made to do things exactly as they were done before.

I have spoken with pilots and counters in the past. They explained that aerial counting presents a host of problems that few people appreciate. All stated that the most important aspect of aerial counting is the relationship between the pilot and the counter(s). Each time managers fly, is it always the same counter and the same pilot? Is it even the same plane or helicopter (think noise or size)? Are the weather and visibility, in the air and on the ground, exactly the same as before? Does the aircraft fly at exactly the same altitude? And those are just the obvious questions.

Does this mean that we throw the baby out with the bathwater? No. It means that without this basic consistency, any information collected is unreliable, and that translates into poor, inaccurate determinations. The more “scientific” the process, the more accurate the results. Surely we can all agree on that. One of the problems with those who argue in support of global warming is that scientists keep changing the locations of their test equipment and the processes they use to collect the data.

Let’s return to the spotlight surveys for a moment. According to information provided in the article, the author makes statements that lead one to believe enough work and data collection were done for him to tell readers that spotlight surveys averaged only about 41% accuracy. Again I ask: how did he arrive at that conclusion?

He states that there was inconsistency in the use of the cameras, i.e. changing locations, observers, equipment, etc. If the spotlight surveys were set up and run with a consistent scientific process, employing the utmost consistency in testing, could that 41% be raised to something higher? I believe it could.

Once again, assuming that deer populations are important to know, and that there is no real way to ever know deer populations exactly on any scale wider than 20 or 30 acres, deer population estimates become just that: estimates based on known values. The more consistent the testing of those known values, the more accurate the estimating becomes.

If enough research was done to establish a solid 50% accuracy rate with spotlight surveys, then, employing surveys as part of the process, doesn’t it all become relative? In other words, if the data at this moment in time are good data telling me that my spotlight surveys are consistently giving me deer estimates 50% below actual, how then is the employment of spotlight surveys a waste of time and resources?
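The arithmetic behind that point can be sketched in a few lines. This is purely illustrative, using made-up numbers and a hypothetical function name of my own; the idea is simply that a known, consistent bias is correctable, while an inconsistent one is not:

```python
# Hypothetical illustration: if calibration work shows a spotlight
# survey consistently detects about 50% of the deer actually present,
# the raw counts remain useful, because a consistent bias can be
# scaled away.

def corrected_estimate(survey_count, detection_rate=0.5):
    """Scale a raw survey count by the fraction of deer the survey
    is known (from prior calibration) to detect."""
    return survey_count / detection_rate

# A survey that spots 120 deer, with a calibrated 50% detection
# rate, suggests roughly 240 deer actually present.
print(corrected_estimate(120))  # 240.0
```

The same correction falls apart if the detection rate drifts from one survey to the next, which is exactly the consistency argument made above.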

If the deer management industry is, or soon will be, operating on the notion that good deer management doesn’t require a good handle on the population, then none of this matters any longer: there soon will be no deer left. But how would they know?