The fundamental question of philosophy is: how many times do I need to check if I closed the door before I can be certain that I did? In this post I will start tackling this problem, and I'd like to dedicate it to those of my friends who always check the door more times than necessary.

Knowledge and belief share a dramatic love story, and, while rarely considered equally important, they can hardly exist without each other. Looking at some problems associated with knowledge can help us understand better why beliefs can be an interesting topic to work with.

It is not easy to capture what we really mean when we talk about knowledge, or how it relates to certainty and truth. Some people set impossible standards for knowledge, going as far as to conclude that it is impossible to know anything at all. At the other extreme, some seem to set the bar too low, and fail to account for the difference between knowledge and mere opinion. It's tricky to strike the right balance, and sometimes it helps to distinguish between different kinds of knowledge, too. Some of these nuances will be discussed below.

Okay then, knowledge

The notion of knowledge is notoriously difficult to define precisely, and it has bugged philosophers for centuries. The ancient Greeks were already well aware that even what appears right in front of our eyes cannot always be trusted. Our senses and our reason deceive us in numerous ways, so can we really know anything?

In order to solve this problem, it was suggested that our views can only count as knowledge if certain conditions are met. First of all, knowledge always needs to be justified. It is not just a state of mind. It is not implanted in our brains by aliens or wise spirits. I cannot claim that I know something because I was telepathically illuminated by Neil deGrasse Tyson – regardless of how strongly convinced I am, most people won't call it "knowledge". The crucial thing I'd be missing is evidence, and without evidence my beliefs and convictions are not sufficiently justified.

Obviously, the process of acquiring evidence and drawing conclusions is not arbitrary, either. It needs to follow some rules – rules of logic, for instance.

Let me illustrate it with an example. Suppose Alice makes the following deal with her dog: she will present the dog with a sausage; if the dog barks, Alice will believe that the Earth is round; otherwise, she will believe that it is flat. She pulls out a sausage, her dog barks, and she forms a belief that the Earth is round. It might be true, but did her reasoning method really justify that conclusion? Does she now know that the Earth is round? One needs to be able to demonstrate how they know something – being right is not enough.

But what if – instead of believing her dog – Alice bought all the fancy telescopes, and after years of research concluded that the Earth is round... yet it was actually flat? Many scientific theories in the past seemed to be reasonable and well justified, and still turned out to be false. It is fair to say that people believed in aether or élan vital, but it would be strange to say that they knew about it. This is the second requirement: truth. If we want our beliefs to be knowledge, justification and truth must go hand in hand.

Finally, as obvious as it may seem, there is one last condition of knowledge: belief itself. Suppose Bob is a flat-Earther. Suppose the Earth is actually round, and that Bob is aware of all the most compelling evidence and arguments supporting the round Earth hypothesis – somebody has just explained it all to him. However, he still refuses to believe it, and carries on with his flat Earth crap. Can we say that Bob knows that the Earth is round? Probably not.

To sum up, I know that A only if: I believe in A, I have a good reason to believe in A, and A is actually true. If any of these conditions does not hold, I can't claim that I know that A. This definition of knowledge, often called the Justified True Belief account (JTB), dates back at least to Plato, and has been widely accepted for centuries.
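For the programmers among us, here is the JTB account as a toy Python sketch. Everything in it (the agent, the predicates, the little "world" set) is invented for illustration, and each predicate hides the actual philosophical difficulty:

```python
# A toy model of the Justified True Belief (JTB) account.
# All three predicates are stand-ins; in real epistemology each
# of them hides the hard part.

def believes(agent: dict, a: str) -> bool:
    return a in agent["beliefs"]

def justified(agent: dict, a: str) -> bool:
    return a in agent["evidence"]       # crude: "has actual evidence for A"

def is_true(a: str, world: set) -> bool:
    return a in world                   # requires a god's-eye view of the world

def knows(agent: dict, a: str, world: set) -> bool:
    return believes(agent, a) and justified(agent, a) and is_true(a, world)

# Alice believes the Earth is round, but her "dog barked" method
# produced no actual evidence:
alice = {"beliefs": {"earth is round"}, "evidence": set()}
world = {"earth is round"}
print(knows(alice, "earth is round", world))  # False: true belief, not justified
```

Note the giveaway: knows needs read access to world, and no real agent has that. Keep it in mind for what comes next.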

But wait. Does it mean that, in order to claim that I know that A, I first need to know that those conditions are satisfied? How would I check that? – Do I believe in A? Check. Is my belief in A justified? Probably – at least I believe it is. Is A true? I will be tempted to say "yes", but (as we've seen) my justification itself doesn't guarantee that A is true, nor does my belief. But I know that A is true – why can't I say "yes"? The fact that I know something (or think I know it) doesn't seem to give any additional guarantee that it is true. I can know that A, but I can never know if I know. And nobody else can, for that matter. Sounds pretty bad.

What if we just agree that if I know that A, then I also (automatically) know that I know that A, and so on? But think about Jon Snow – he was supposed to know exactly nothing, and now suddenly he knows something: that he knows nothing. In fact, he knows an infinite number of things, because knowledge becomes infinitely recursive. Jon Snow throws an OutOfMemoryError and crashes.
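If you want to watch the crash happen, here is a rough sketch of that recursion in Python (all names invented, with RecursionError standing in for Jon Snow's OutOfMemoryError):

```python
# The KK principle as code: if I know A, then I know that I know A,
# then I know that I know that I know A, and so on, forever.

def knows(proposition: str) -> bool:
    if proposition == "nothing":
        return True                      # the one base fact Jon knows
    if proposition.startswith("that I know "):
        # peel off one layer of self-knowledge and recurse
        return knows(proposition[len("that I know "):])
    return False

def nest(proposition: str, depth: int) -> str:
    # build "that I know that I know ... <proposition>", `depth` layers deep
    for _ in range(depth):
        proposition = "that I know " + proposition
    return proposition

print(knows(nest("nothing", 10)))      # True, ten levels of knowing deep
# print(knows(nest("nothing", 5000)))  # RecursionError: Jon Snow crashes
```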

Even if this example doesn't convince you – you could argue that knowing nothing is not really knowledge – you may still appreciate the fact that such recursion creates some kind of automatic, unexplained self-awareness, which comes completely out of the blue.

Let's take it a bit more down to Earth. Perhaps perfect knowledge like this is not achievable, but I can try to check what I know, right? Even better – other people can help me. You know, I have this theory that A, and I believe it's true. But I might be biased, I might be hallucinating, I might be delusional. Let me ask my friend to double-check it. Oh... it seems my friend is actually drunk, and also clearly delusional. Do I have some other friends who are not drunk and delusional? Well, fine, let's ask a very qualified and sober academic professor to check my theory. Maybe two professors, just to be sure. How many professors should check it before we can conclude that it is true? If an army of professors checks my theory and finds no flaws, can it still be false?
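A quick back-of-the-envelope calculation shows why even the army doesn't settle it. Suppose, very generously and purely for illustration, that each professor independently catches a flaw in a theory (if there is one) with probability 0.9:

```python
# Back-of-the-envelope: how likely is a flawed theory to survive
# n independent reviews unscathed, if each reviewer misses the flaw
# with probability 0.1? (Made-up, generous numbers.)

p_missed_by_one = 0.1
for n in [1, 2, 5, 10]:
    p_survives = p_missed_by_one ** n
    print(f"{n:2d} professors: flawed theory survives with probability {p_survives:.0e}")

# With 10 professors we get 1e-10: tiny, but never zero. And if the
# reviewers share a blind spot (they usually do), the independence
# assumption fails and the real number is much larger.
```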

One can easily take it to the extreme. Does the external world exist? Uhm, we have some opinions about it, but we can't really be certain. Some people got really frustrated with this situation, including the respected philosopher G. E. Moore, who went on to argue that, while he might easily be mistaken about some more obscure problems, he can, however, be sure about some things that are obvious. In particular, he does know that the external world exists: his own two hands, which he sees in front of him, are sufficient proof of that. Nice argument, Mr. Moore.

Philosophy solved

Finally, somebody called bullshit on all of that, and that person's name was Ludwig Wittgenstein. Philosophers cannot – he said – go around telling people what it means to "know" things. People already know very well what this word means, because they've been using it successfully since the invention of the English language.

What does it mean then? That depends on the context or, as Wittgenstein called it, the language game. A language game is bound to a particular social situation, and consists of a set of unspoken rules and conventions that all the participants know and understand. If we know the language game, it is usually effortless to understand what another person means when they say something, even if the phrase they use would be ambiguous, or make no sense at all, taken out of context. The meaning of a word is its use, and we only learn it by using the word, or by observing how other people do.

Alice: "Why don't you buy cigarettes in this store?"
Bob: "Because I know they don't have any."

How can Bob know it? Maybe he was there yesterday and asked.

Alice: "Can I have a cigarette?"
Bob: "I don't have any."
Alice: "I know you do."

How can Alice know? With her x-ray vision? Nah, she has probably just caught Bob being a lying, greedy bastard a couple of times in the past, so she doesn't need to check. Notice that this kind of "knowing" is completely acceptable in everyday life; it doesn't sound wrong.

In contrast, a scientist in a lab will be much more careful with "knowing" things. If they say they do know something, their colleagues might assume that there's a scientific consensus, or at least that some sort of experiment was performed. Those rules, however, are almost never explicit. If you make some mathematical calculations, nobody will tell you how many times you need to check them to be sure that they are correct; or how many people you need to ask to help you double-check. Instead, you will learn it from practice, by participating in a community of people who do it.

But what does "know" mean when you take it out of the context it evolved in? Well – nothing. Yet this is exactly what philosophers do! They take words out of context and try to deliberate on their meaning; which, according to Wittgenstein, is nonsense, and a waste of time. The criteria for what "truth" or "knowledge" mean are set by a language-speaking community, and only apply within a given language game.

(Image: G. E. Moore's hand)

Nice! So the problem is solved, right? Not really.

It's too easy to say that words mean what they mean, we cannot define them, so let's not overthink it. Such a simplistic reading surely doesn't do justice to Wittgenstein's position. He pointed out that philosophers tend to get confused by language use, mixing ordinary language with metaphysics and getting weird results. Moore's example – and many others – confirm that. But it doesn't solve all of philosophy just yet.

We still want to acquire knowledge, get as close to the truth as possible, and be able to tell "better" beliefs from "worse" ones – especially in science. For sure, "the Earth is round" and "the Earth is flat" are not just equally valid opinions or socio-cultural conventions. While doing science also involves playing a language game, this game can be analyzed and developed to make our decisions and actions better and more efficient.

Enter uncertainty

Historically, the pursuit of knowledge was too often tied to a wild goose chase after absolute certainty. Perhaps that was never a good approach. "Is it absolutely certain that A?" is a question without an answer. Maybe instead we should ask: "How certain am I that A?", and "How certain should I be that A (given the other things I know)?".

This seems to be much closer to how we, humans, operate, and much more useful in practice. Even if we can never be certain about anything, it doesn't mean we're clueless. We observe the world, we reason about what we see, and we use our conclusions quite successfully to make decisions – always under uncertainty. We can try to quantify how convinced we are, and to what extent the evidence supports our beliefs. We can increase or decrease our level of certainty, depending on our new observations. Instead of looking for ultimate certainty, we can try to estimate, as well as we can, the probability of things being one way or another.
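To make this concrete – and to finally come back to the door from the beginning of this post – here is a small Bayesian sketch. All the numbers are invented for illustration: I start out 95% sure that I closed the door, each check correctly reports "closed" 90% of the time when the door is closed, and falsely reports "closed" 20% of the time when it is open:

```python
# Bayesian door-checking (all numbers invented for illustration).
prior = 0.95                        # how sure I am before checking at all
p_closed_report_if_closed = 0.9     # a check says "closed" when it is closed
p_closed_report_if_open = 0.2       # a check says "closed" when it is open

belief = prior
for check in range(1, 6):
    # Bayes' rule: P(closed | the check reported "closed")
    numerator = p_closed_report_if_closed * belief
    evidence = numerator + p_closed_report_if_open * (1 - belief)
    belief = numerator / evidence
    print(f"after check {check}: P(door is closed) = {belief:.6f}")

# The belief climbs toward 1 but never reaches it, and each extra
# check buys less certainty than the previous one. At some point
# you just have to go to work.
```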

This finally brings us to the idea of degrees of belief – which I will write about in my next posts.

Photo: Carrie Yang / Unsplash