Why “I’m Possibly Wrong” Isn’t Good Enough

October 27, 2015
Philosophy

When I presented my talk at Seekerfest STL 2015, I got a lot of pushback for phrasing one of my last points as “Start from the standpoint that you are probably wrong.” Among those pushing back was none other than Matt Dillahunty, who came up to me after my talk and said something along the lines of, “You should change that line to ‘possibly wrong’ instead of ‘probably.’ If you think you are probably wrong, you should just go ahead and change your position.”

Being who I am, and who Matt is, I took this advice to heart and pondered it from just about every angle I could think of. And, going where even atheist angels fear to tread, I think Matt is dead wrong on this one, as were the other folks who found this phrasing uncomfortable. In this post, I’m going to explain why a starting point of “I am possibly wrong” is not nearly good enough if one wants to rid oneself of delusional thinking, from both a semantic/pedantic standpoint and a practical/strategic standpoint. And then the Internet is free to step in and give me the whupping I most assuredly deserve, if I’m wrong. ;)

The Semantic Argument

Acknowledging that you are probably wrong on any given position, especially those to which you are personally, culturally, and emotionally attached, does *not* necessarily mean you should change your position. It is simply reality.

Think about it – for any given religious/political/cultural debate topic, there are an infinite number of possible positions to take. For example, it is *possible* (if improbable) that the solution to global warming is for every firstborn descendant of a certain German barrel-maker born in 1807 to pray to Loki to soothe his burning jealous anger towards his brother Thor, thus cooling the globe. I can construct an infinite number of possible fixes for global warming, or answers to any other political/cultural/religious point of contention. Aliens. Illuminati. Vaccines. Bacteria. Carbon. Virtual particles. You name it. Thus, my odds of picking the correct position out of the infinite number available are mathematically almost zero.

Even if we eliminate the highly improbable scenarios, in most cases we are left with dozens, if not hundreds, of plausible answers to any given question. When does life begin? Given labor conditions around the world, is it ethical to buy clothes at Wal-Mart, or consumer electronics from just about anyone who sells them? Can any form of socialism succeed in the long run? Does free trade between nations help or hurt in the long run? Are Washington outsiders better choices to govern than career politicians? And on and on.

In any question of this type where contention exists, it exists largely because the answers depend on the context one considers when thinking about them, and because there is a degree of ambiguity about what actually constitutes the “best” answer in any given situation. The odds that you have picked the best possible answer out of all plausible alternatives – that you have seen and considered all available information and all possible nuances, and landed on the one answer that is quantitatively and qualitatively better than any other – are still very close to zero. It doesn’t matter, in most cases, how well-researched you are, how airtight your logic is, or how many possible alternatives you have given equal consideration. From a pure probability perspective, there is almost certainly a better answer than the one you currently hold. It might be similar to the position you hold today, but it might not, and you have no way of knowing at this moment. Thus you are very much, pedantically/semantically/mathematically speaking, still probably wrong.

This doesn’t mean you should “go ahead and change positions.” Admitting that I am probably wrong does not mean I have a better answer than the one I hold to today. It simply means I acknowledge the reality that there is likely a better answer out there, and when I run across it, I can move to it.

The Strategic Argument

The other, and possibly more important, argument for why “probably” is better than “possibly” in this scenario has to do with human psychology. If you are like me, when I make the statement “I could possibly be wrong…” what I am thinking to myself is “…but I am probably right.” I consider myself a reasonable individual, and someone who has thought deeply about issues – perhaps more so than most. But herein lies the danger of “possibly wrong” – if I think I am probably right, the most common components of human nature dictate that I am automatically, perhaps unconsciously, and irrevocably going to be at least a tiny bit emotionally invested in my positions. When I am emotionally invested in my positions, I tend to want to defend them. And if I am defending positions, I am unconsciously raising the bar and increasing the level of proof required to unseat me from those positions. As I said in my talk, this is how delusional thinking begins.

However, if I start instead from a position of “I am probably wrong,” my emotional attachment to whatever position I currently hold is very low, and it is easy to switch from one position to another given more or better data. If two parties in disagreement both start from this position, then there is no emotional investment in winning the argument. Losing the argument (i.e., concluding that the other party is actually more correct than you are) actually means winning, because you are now closer to what is objectively true. I dare say that this positioning would lead to an easier discovery of common ground and fewer contentious exchanges, even among people who start off in strong disagreement. And that would be the goal.

What’s the Main Thing?

I understand that there will be those who claim that my semantic argument is meaningless because what *they* mean when they say “possibly wrong” is different. Perhaps. But I would then encourage those individuals to consider why it is important for them to distinguish between “possibly” and “probably” in this context. Why are they so emotionally invested in defending their beliefs? And do they understand that emotional investment puts them at risk of delusion? If your goal is the search for objective truth and not “feeding red meat to the base,” as it were, that emotional trigger should serve as a warning that you might not be quite as objective as you’d like to believe.

In the end, I personally think the most important thing is that we find common ground and work together to make our world a better place in tangible ways. I don’t necessarily need to convince you all that there is no supernatural realm, gods, angels, or mystic powers in or outside the universe. I simply need to find ways for us to work together, despite our disagreements. And thus, I start from the standpoint that I am probably wrong on all of that, and invite you to help me find objective truth – as long as you are willing to start from the same position and join me in the exploration. :) Even if not, though, we can still find ways to agree on moral issues even if we disagree on why those positions are considered moral in the first place, and we can work together towards common ends to reduce suffering, evil, and injustice in our world. That’s the main thing as far as I’m concerned.

  • An intriguing suggestion, Jason, but as you already know, I think it’s probably wrong. :)

    Your attempt to address the semantic issue and to lever the brain away from its native cognitive bias seems very right-headed.

    The difficulty, as I see it, is that the problem is more than semantic. My mind naturally rebels against a position of “I am probably wrong,” and either adds an invisible and implicit “– I don’t really believe that” or else adds an implicit “and so I’m not willing to defend it properly.”

    That is, if I hold a position, and want to argue for it, I need to believe that I am probably right to the best of my knowledge.

    If I genuinely don’t believe I am right, which is what “I am probably wrong” necessarily implies, then for me, that will compromise my ability to argue for my position in the marketplace of ideas, even though I think it’s the best one out there.

    My way of thinking is slightly different. I want to say that I am always open to new data and new ways of looking at it. I try to invite new information and to actively seek it out. I know that my position will continue to change, and in that sense I know that my current position is “probably wrong” as you have put it, but right now, it’s the best position I am aware of with the information available to me.

    The two positions are essentially very similar, but we’re trying to play different mind games with ourselves. My perspective seems to work for the way my brain is configured, but I don’t doubt that yours works for the way your brain is configured.

    At the moment, anyway. But I’m happy for you to clarify anything you think I’ve missed.

    Thoughts?

    • Jason Eden

      If admitting you are probably wrong (which is true) compromises your ability to genuinely support those ideas you currently hold to be closest to true, then I would challenge you to consider whether what you hold are really ideas subject to reality, or positions and postures that need to be defended, if necessary, against facts and evidence that might contradict them. That makes you no better than the religious zealot, from a fundamental first-principles perspective.

      It is my wish that the atheist movement as a whole would be better – not just better from a “we have more evidence and logic” perspective, which I believe is true, but also better from a “we hold ourselves to a much higher standard of thinking, and thus hold all of our current ideas with an open hand and open mind, so that only that which is really closest to actual truth remains” perspective.

      It’s more than a mind game, it’s a mindset, and one that I think is clearly superior to the alternative. People who lack confidence in their beliefs *SHOULD* be buffeted by alternative ideas, and *SHOULD* be worried that they’ve got the wrong ones, until they reach the point where they have thoroughly vetted all reasonable alternatives. To do otherwise, again, makes you no better than any fundamentalist ideology out there, if you ask me, regardless of how much more open to evidence you might be. It’s a matter of degrees, not essence. I want to be different as a matter of essence. I think we’d all be better off if that were true for everyone.