Why “I’m Possibly Wrong” Isn’t Good Enough
When I presented my talk at Seekerfest STL 2015, I got a lot of pushback for phrasing one of my last points as “Start from the standpoint that you are probably wrong.” One of those individuals was none other than Matt Dillahunty, who came up to me after my talk and said something along the lines of, “You should change that line to ‘possibly wrong’ instead of ‘probably.’ If you think you are probably wrong, you should just go ahead and change your position.”
Being who I am, and who Matt is, I took this advice to heart and pondered it from just about every angle I could think of. And, going where even atheist angels fear to tread, I think Matt is dead wrong on this one, as were the other folks who found this phrasing uncomfortable. In this post, I’m going to explain why a starting point of “I am possibly wrong” is not nearly good enough if one wants to rid oneself of delusional thinking, from both a semantic/pedantic standpoint and a practical/strategic one. And then the Internet is free to step in and give me the whupping I most assuredly deserve, if I’m wrong. ;)
The Semantic Argument
Acknowledging that you are probably wrong on any given position, especially those to which you are personally, culturally, and emotionally attached, does *not* necessarily mean you should change your position. It is simply reality.
Think about it – for any given religious/political/cultural debate topic, there are an infinite number of possible positions to take. For example, it is *possible* (if improbable) that the solution to global warming is for every firstborn descendant of a certain German barrel-maker born in 1807 to pray to Loki to soothe his burning jealous anger towards his brother Thor, thus cooling the globe. I can construct an infinite number of possible fixes for global warming, or answers to any other political/cultural/religious point of contention. Aliens. Illuminati. Vaccines. Bacteria. Carbon. Virtual particles. You name it. Thus, my odds of picking the correct position out of the infinite number of available positions are mathematically almost zero.
Even if we eliminate the highly improbable scenarios, in most cases we are left with dozens, if not hundreds, of plausible answers to any given question. When does life begin? Given labor conditions around the world, is it ethical to buy clothes at Wal-Mart, or consumer electronics from just about anyone who sells them? Can any form of Socialism succeed in the long run? Does free trade between nations help or hurt in the long run? Are Washington outsiders better choices to govern than career politicians? And on and on.
In any of those types of questions where contention exists, it exists largely because the answers depend on the context one considers when thinking about them, and on a degree of ambiguity about what actually constitutes the “best” answer in any given situation. The odds that you have picked the best possible answer out of all plausible alternatives – that you have seen and considered all available information and all possible nuances, and landed on the one answer that is quantitatively and qualitatively better than any other – are still very close to zero. It doesn’t matter, in most cases, how well-researched you are, how airtight your logic is, or how many possible alternatives you have given equal consideration. From a pure probability perspective, there is almost certainly a better answer than the one you currently hold. It might be similar to the position you hold today, but it might not, and you have no way of knowing at this moment. Thus you are very much, pedantically/semantically/mathematically speaking, still probably wrong.
This doesn’t mean you should “go ahead and change positions.” Admitting that I am probably wrong does not mean I have a better answer than the one I hold today. It simply means I acknowledge the reality that there is likely a better answer out there, and when I run across it, I can move to it.
The Strategic Argument
The other, and possibly more important, argument for why “probably” is better than “possibly” in this scenario has to do with human psychology. If you are like me, when I make the statement “I could possibly be wrong…” what I am thinking to myself is “…but I am probably right.” I consider myself a reasonable individual, and someone who has thought deeply about issues – perhaps more so than most. But herein lies the danger of “possibly wrong”: if I think I am probably right, human nature dictates that I am automatically, perhaps unconsciously, and irrevocably going to be at least a tiny bit emotionally invested in my positions. When I am emotionally invested in my positions, I tend to want to defend them. And if I am defending positions, I am unconsciously raising the bar and increasing the level of proof required to unseat me from those positions. Like I said in my talk, this is how delusional thinking begins.
However, if I start from a position of “I am probably wrong” instead, my emotional attachment to whatever position I currently hold is very low, and it is easy to switch from one position to another given more or better data. If two parties in disagreement both start from this position, then neither has an emotional investment in winning the argument. Losing the argument (i.e. concluding that the other party is actually more correct than I am) really means winning, because I am now closer to what is objectively true. I dare say that this positioning would lead to an easier discovery of common ground and fewer contentious exchanges, even among people who start off in positions of strong disagreement. And that would be the goal.
What’s the Main Thing?
I understand that there will be those who claim that my semantic argument is meaningless because what *they* mean when they say “possibly wrong” is different. Perhaps. But I would then encourage those individuals to consider why it is important for them to distinguish between possibly and probably in this context. Why are they so emotionally invested in defending their beliefs? And do they understand that emotional investment puts them at risk for delusion? If your goal is the search for objective truth and not “feeding red meat to the base” as it were, that emotional trigger should serve as a warning that you might not be quite as objective as you’d like to believe.
In the end, I personally think the most important thing is that we find common ground and work together to make our world a better place in tangible ways. I don’t necessarily need to convince you all that there is no supernatural realm, gods, angels, or mystic powers in or outside the universe. I simply need to find ways for us to work together, despite our disagreements. And thus, I start from a standpoint that I am probably wrong on all of that, and invite you to help me find objective truth – as long as you are willing to start from the same position and join me in the exploration. :) Even if not, though, we can still find ways to agree on moral issues even if we disagree on why those positions are considered moral in the first place, and we can work together towards common ends to reduce suffering, evil, and injustice in our world. That’s the main thing as far as I’m concerned.