> When I talk to Philosophers on zoom my screen background is an exact replica of my actual background just so I can trick them into having a justified true belief that is not actually knowledge.
The cases cited in the article don't seem to raise any interesting issues at all, in fact. The observer who sees the dark cloud and 'knows' there is a fire is simply wrong, because the cloud can serve as evidence of either insects or a fire and he lacks the additional evidence needed to resolve the ambiguity. Likewise, the shimmer in the distance observed by the desert traveler could signify an oasis or a mirage, so more evidence is needed there as well before the knowledge can be called justified.
I wonder if it would make sense to add predictive power as a prerequisite for "justified true belief." That would address those two examples as well as Russell's stopped-clock example. If you think you know something but your knowledge isn't sufficient to make valid predictions, you don't really know it. The Zoom background example would also be handled by this criterion, as long as intentional deception wasn't in play.
Cushman 67 days ago [-]
It’s not super clear there, but those are examples of a pre-Gettier type of argument that originally motivated strengthening, and externalizing, the J in JTB knowledge— just like you’re doing!
Gettier’s contribution — the examples with Smith — sharpens it to a point by making the “knowledge” a logical proposition — in one example a conjunction, in one a disjunction — such that we can assert that Smith’s belief in the premise is justified, while allowing the premise to be false in the world.
It’s a fun dilemma: the horns are that you can either give up justification as sufficient, or give up logical entailment of justification.
But it’s also a bit quaint, these days. To your typical 21st century epistemologist, that’s just not a very terrifying dilemma.
One can even keep buying original recipe JTB, as long as one is willing to bite the bullet that we can flip the “knowledge” bit by changing superficially irrelevant states of the world. And hey, why not?
ryanjamurphy 64 days ago [-]
> But it’s also a bit quaint, these days. To your typical 21st century epistemologist, that’s just not a very terrifying dilemma. One can even keep buying original recipe JTB [...]
Sorry, naive questions: what is a terrifying dilemma to a 21st-century epistemologist? What is the "modern" recipe?
bonoboTP 67 days ago [-]
One should distinguish between one instance and a mechanism/process for producing them. We could take randomness and entropy as an analogy: Shannon entropy quantifies randomness of a sequence generator, not the randomness/complexity of individual instances (which would be more akin to Kolmogorov complexity).
Similarly, the real interesting stuff regards the reliability and predictive power of knowledge-producing mechanisms, not individual pieces produced by it.
Another analogy is confidence intervals, which are defined through a collective property, a confidence interval is an interval produced by a confidence process and the meat of the definition concerns the confidence process, not its output.
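To make the process-vs-instance point concrete, here is a rough Python sketch of my own (plain normal-approximation intervals, so treat the numbers as illustrative): the "95%" is a long-run property of the interval-producing procedure, while any single interval it emits simply does or does not contain the true mean.

    import random
    import statistics

    def ci95(sample):
        # Normal-approximation 95% confidence interval for the mean.
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / len(sample) ** 0.5
        return m - 1.96 * se, m + 1.96 * se

    true_mean, covered, trials = 10.0, 0, 10_000
    for _ in range(trials):
        sample = [random.gauss(true_mean, 2.0) for _ in range(50)]
        lo, hi = ci95(sample)
        covered += lo <= true_mean <= hi

    # The procedure covers the truth roughly 95% of the time; each individual
    # interval is simply right or wrong, with no "95%" attached to it.
    print(covered / trials)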
I always found the Gettier problems unimpressive and mainly a distraction and a language game. Watching out for smoke-like things to infer whether there is a fire is a good survival tool in the woods and advisable behavior. Neither it nor anything else is a 100% surefire way to obtain bulletproof capital-letter Truth. We are never 100% justified ("what if you're in a simulation?", "you might be a Boltzmann brain!"). Even stuff like math is uncertain: we may make a mistake when mentally adding 7454+8635, and we may even have a brainfart when adding 2+2. It's just much less likely, but I'm quite certain that at least one human manages to mess up 2+2 in real life every day.
It's a dull and uninteresting question whether it's knowledge. What do you want to use the fact of it being knowledge or not for? Will you trust stuff that you determine to be knowledge and not other things? Or is it about deciding legal court cases? Because then it's better to cut out the middleman and directly try to determine whether it's good to punish someone or not, without reference to terms like "having knowledge".
efitz 67 days ago [-]
Like arguing about which level of the OSI model a particular function of a network stack operates at. I’d love to have those hours back from 20-something me.
roenxi 67 days ago [-]
Well ... obviously any Gettier-style example will not have enough evidence because someone came to the wrong conclusion. But there is a subtle flaw in your objections to Wikipedia's examples - to have a proper argument you would need to provide a counterexample where there is enough evidence to be certain of a conclusion. And the problem is that that isn't possible - no amount of evidence is enough to come to a certain conclusion.
The issue that Gettier & friends are pointing to is that there are no examples where there is enough evidence. So under the formal definition it isn't possible to have a JTB. If you've seen enough evidence to believe something ... maybe you misinterpreted the evidence but still came to the correct conclusion. That scenario can play out at any evidence threshold. All else failing, maybe you're having an episode of insanity and all the information your senses are reporting is wild hallucinations, but some of the things you imagine happening are, nonetheless, happening.
acchow 67 days ago [-]
The example is also a joke. Many things shown on screens are CGI or AI-generated, so the belief here is not justified.
ImHereToVote 67 days ago [-]
I believe traders call this Lambda.
naasking 67 days ago [-]
Funny comment, but it either fails JTB or is JTB: a) nobody thinks backgrounds on Zoom have to represent your actual background, e.g. they don't think it's justified to assert this conclusion, and b) that the background corresponds 1:1 to the real background means that even if you had some other justification for thinking it was the real background, the proposition is true, so you would have a JTB.
lo_zamoyski 67 days ago [-]
I would dispute your definition of "justification". I would also draw a distinction between "definition of knowledge" and "knowing that you know". I.e., that you know that the viewer doesn't know, even though he thinks he knows, is itself grounded in justification (here, the knowledge that you have put up a fake background).
I could just as easily construct a problem in which I quietly turn off your background, which would mean your Zoom partner does possess knowledge while you do not, even though now it is you who thinks he does.
merryocha 67 days ago [-]
I was a philosophy major in college and semantic quibbling over Gettier problems was popular while I was there. I have always believed that Gettier's popularity was due to the fact that the paper was only three pages, and therefore it was the only paper that the academics actually read to the end. I never thought there was anything particularly deep or noteworthy about the problem at all - it is fundamentally a debate over the definition of knowledge which you could debate forever, and that's exactly what they were doing - arguing about the definition of knowledge, one 30-page paper at a time.
jerf 67 days ago [-]
This is one of the places that I think some training in "real math" can help a lot. At the highest levels I think philosophers generally understand this, but a lot of armchair philosophers and even some nominally trained and credentialed ones routinely make the mistake of thinking there is a definition of "knowledge", and that arguing and fighting over what it is is some sort of meaningful activity, as if, if we could all just agree on what "knowledge" is, that would somehow impact the universe in some presumably beneficial way. That somehow the word itself is important and has its own real ontological existence, and if we can just figure out "exactly what 'knowledge' really is" we'll have achieved something.
But none of that is actually true. Especially the part where it will have some sort of meaningful impact if we can just nail it down, let alone whether it would be beneficial or not.
There are many definitions of knowledge. From a perspective where you only know something if you are 100% sure about it and also abstractly "correct" (I call it "abstract" because the whole problem in the first place is that we all lack access to an oracle that will tell us whether or not we are correct about a fact like "is there a cow in the field?", so making this concrete is not possible), we end up in a very Cartesian place where just about all you "know" is that you exist. There are some interesting things to be said about this definition, and it's an important one philosophically and historically, but... it also taps out pretty quickly. You can only build on "I exist" so far before running out of consequences; you need more to feed your logic.
From another perspective, if we take a probabilistic view of "knowledge", it becomes possible to say "I see a cow in the field, I 'know' there's a cow there, by which I mean, I have good inductive reasons to believe that what I see is in fact a cow and not a papier-mâché construct of a cow, because inductively the probability that someone has set up a papier-mâché version of a cow in the field is quite low." Such knowledge can be wrong. It isn't just a theoretical philosophy question either: I've seen things set up in fields as a joke, scarecrows good enough to fool me at first glance, lawn ornamentation meant to look like people that fooled me at a distance, etc. It's a real question. But you can still operate under a definition of knowledge where I still had "knowledge" that a person was there, even though the oracle of truth would have told me that was wrong. We can in fact build on a concept of "knowledge" in which it "limits" to the truth, but doesn't necessarily ever reach it. It's more complicated, but also a lot more useful.
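A toy way to cash that out (my own illustration, with invented numbers, not anything from the article): Bayes' rule can leave you all but certain it's a real cow, and that near-certainty is what this looser sense of "knowledge" amounts to, even on the rare occasion when the decoy hypothesis happens to be the true one.

    def posterior_cow(prior_cow, p_looks_cow_given_cow, p_looks_cow_given_decoy):
        # Bayes' rule for "it's a real cow" given "it looks like a cow to me".
        p_decoy = 1.0 - prior_cow
        evidence = (prior_cow * p_looks_cow_given_cow
                    + p_decoy * p_looks_cow_given_decoy)
        return prior_cow * p_looks_cow_given_cow / evidence

    # Decoys are rare (say 1 field in 1000) but fool the eye almost as well as cows.
    print(posterior_cow(0.999, 0.95, 0.90))  # ~0.999: I'd casually say I "know"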
And I'm hardly exhausting all the possible interesting and useful definitions of knowledge in those two examples. And the latter is a class of definitions, not one I nailed down entirely in a single paragraph.
Again, I wouldn't accuse the most-trained philosophers of this in general, but the masses of philosophers also tend to spend a lot of time spinning on "I lack access to an oracle of absolute truth". Yup. It's something you need to deal with, like "I think, therefore I am, but what else can I absolutely 100% rigidly conclude?", but it's not very productive to spin on it over and over, in manifestation after manifestation. You don't have one. Incorporate that fact and move on. You can't define one into existence. You can't wish one into existence. You can't splat ink on a page until you've twisted logic into a pretzel and declared that it is somehow necessary. Whether or not God exists, and I personally go with "Yes" on that, He clearly is not just some database to be queried whenever we wonder "Hey, is that a cow out there?" If you can't move on from that, no matter how much verbiage you throw at the problem, you're going to end up stuck in a very small playground. Maybe that's all they want or are willing to do, but it's still going to be a small playground.
mistermann 67 days ago [-]
Are you one of the rare individuals who was cool as a cucumber during the various mass psychological meltdowns we experienced as a consequence of wars, pandemics and various other causes of human death in the last few years?
Also: how did you come to know all the things you claim to in your comment (and I suspect in a few others in your history)?
artursapek 67 days ago [-]
I was going to say, I don’t even understand how the second example in this post is a gettier. He thought one event caused the issue to start, but a different event did instead. And they happened around the same time. Ok? This doesn’t seem very philosophical to me.
Maxatar 67 days ago [-]
That's what a Gettier case is: you have a justified true belief about a proposition, but the belief is true only by coincidence rather than because of the justification. You were still justified to believe the proposition, the proposition is still true, and so under the "justified, true, belief" model of knowledge you would be considered to have known the proposition, and yet as this example and others demonstrate, that's not really what we'd consider to be knowledge, indicating that knowledge is more than justified, true, belief.
Whether you agree or disagree is a separate matter and something you can discuss or ponder for 5 minutes. The article is about taking a somewhat interesting concept from philosophy and applying it to a routine software development scenario.
artursapek 66 days ago [-]
Right, but merely the time at which two events occurred doesn’t seem like enough to “justify” a belief. It can be a suspicion, but that’s about it.
feoren 67 days ago [-]
The vast majority of philosophical arguments are actually arguments about definitions of words. You can't actually be "wrong" in philosophy -- they never prove ideas wrong and reject them (if they did, we'd just call it "science"), so it's just an ever-accumulating body of "he said, she said". If you ask a philosophical question, that's the answer you get: "well, Aristotle said this, and Kant said that, and Descartes said this, and Searle said that." "... so, what's the answer?" "I just told you." So if you want to actually argue about something, you argue about definitions.
goatlover 67 days ago [-]
Science doesn't prove things, it provides empirical support for or against theories. Philosophical ideas can be shown to be wrong if their reasoning is shown to be invalid. Words have meaning, and philosophical arguments are over the meaning of those words. The problem is there is a "loose fit between mind and world", as one contemporary philosopher put it. We naively think words describe the world as it is, but they really don't. There's all sorts of problems with the meaning of our words when examined closely.
For example, it feels like we have free will to many people, but the meaning is hard to pin down, and there are all sorts of arguments for and against that experience of being able to freely choose. And what that implies for things like punishment and responsibility. It's not simply an argument over words, it's an argument over something important to the human experience.
mistermann 67 days ago [-]
> Science doesn't prove things, it provides empirical support for or against theories.
There's been some progress science must have missed out on then:
That is one organization; many others claim they've also achieved the impossible.
goatlover 66 days ago [-]
Since this is a discussion on philosophy in the context of knowledge and metaphysics, scientific organizations don't claim they provide proof (in the sense of logic and truth), rather they provide rigorous scientific evidence to support their claims, such as vaccines not causing autism. But science is always subject to future revision if more evidence warrants it. There is no truth in the 100% certainty sense or having reached some final stage of knowledge. The world can always turn out to be different than we think. This is certainly true in the messy and complex fields of biology and medicine.
mistermann 66 days ago [-]
Your claims are demonstrably false, there are many instances of authoritative organizations that explicitly and literally assert that vaccines do not cause autism.
Out of curiosity, do you realize I am arguing from a much more advantageous position, in that I only have to find one exception to your popular "scientific organizations don't claim" meme (which I (and also you) can directly query on Google, and find numerous instances from numerous organizations), whereas you would have had to undertake a massive review of all communications (and many forms of phrasing) from these groups, something we both know you have not done?
A (portion of) the (I doubt intentional or malicious) behavior is described here:
I believe the flaw in scientists (and their fan base) behavior is mainly (but not entirely) a manifestation of a defect in our culture, which is encoded within our minds, which drives our behavior. Is this controversial from an abstract perspective?
It is possible to dig even deeper in our analysis here to make it even more inescapable (though not undeniable) what is going on, with a simple series of binary questions ("Is it possible that...") that expand the context. I'd be surprised if you don't regularly utilize this form of thinking when it comes to debugging computer systems.
Heck, I'm not even saying this is necessarily bad policy, sometimes deceit is literally beneficial, and this seems like a prime scenario for it. If I was in power, I wouldn't be surprised if I too would take the easy way out, at least in the short term.
feoren 67 days ago [-]
[dead]
visarga 67 days ago [-]
Knowledge and truth are "centralized" concepts. I prefer "models", which don't have such issues: all models are imperfect and provisional, and there are many models for the same process. Knowledge and truth have a way of leading to endless debates, while models are better understood in their limitations and nobody claims they can reach perfection. In programming we call them abstractions, and we know they are always leaky.
I think there are plenty of philosophical problems that emerge from our desire to describe things in centralized ways. Consciousness, understanding and intelligence are three of them. I prefer "search" because it is decentralized and covers personal, inter-personal and social domains. Search defines a search space, unlike consciousness, which is silent about the environment and other people when we talk about it. Search does what consciousness, understanding and intelligence are for. All mental faculties: attention, memory, imagination, planning - are forms of search. Learning is search for representation. Science is search, markets are search, even DNA evolution and protein folding are search. It is universal and more scientific. Search removes a lot of the mystery and doesn't make the mistake of centralizing itself in a single human.
goatlover 67 days ago [-]
But search is a functional term, while consciousness is experiential. Deep philosophical problems exist because there are very puzzling things about existence. Changing terms doesn't make that puzzlement go away.
CleverLikeAnOx 68 days ago [-]
An old timer I worked with during my first internship called these kinds of issues "the law of coincidental failures" and I took it to heart.
I try a lot of obvious things when debugging to ascertain the truth. Like, does undoing my entire change fix the bug?
EasyMark 67 days ago [-]
When something absolutely doesn’t make sense to me I often go back to a point in time and do a checkout of when I was 100% sure “it worked”, and if it doesn’t work then I assume something external changed: hardware, a backend service, the earth’s wobble. If it does work then I will bisect the timeline until I locate it. This works for me 99% of the time on tough bugs that just defy any logic. It’s kind of a known quantity as opposed to going through endless logs, blames, file diffs, etc. I know in some cases it isn’t really possible, but in code that has a fairly quick turnaround on build/install/test, it works really well.
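For what it's worth, the core of "bisect the timeline" is just binary search over history, which is what git bisect automates. A rough Python sketch of the idea (the passes callback is hypothetical, standing in for whatever build/install/test loop you have):

    def first_bad(commits, passes):
        # commits are ordered oldest -> newest; commits[0] is known good and
        # commits[-1] is known bad, assuming a single good -> bad transition.
        lo, hi = 0, len(commits) - 1
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if passes(commits[mid]):   # e.g. checkout, build, run the repro
                lo = mid
            else:
                hi = mid
        return commits[hi]             # the first commit where the test fails

With n commits between the known-good and known-bad points this needs only about log2(n) build/test cycles, which is why it beats wading through endless logs and diffs whenever the turnaround is quick.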
K0balt 68 days ago [-]
Yeah, good times. I just recently had one that was a really strong misdirection; it ended up being two other simultaneous, unrelated things that conspired to make it look convincingly like my code was not doing what it was supposed to. I even wrote tests to see if I had found a corner-case compiler bug or some broken library code. I was halfway through opening an issue on the library when the real culprit became apparent. It was actually a subtle bug in the testing setup combined with me errantly defining a hardware interface on an ancient protocol as an HREG instead of an IREG, which just so happened to work fine until it created a callback loop inside the library through some kind of stack smashing or wayward pointer. I was really starting to doubt my sanity on this one.
foobarian 67 days ago [-]
> corner-case compiler bug
They say never to blame the compiler, and indeed it's pretty much never the compiler. But DNS on the other hand... :-)
recursive 67 days ago [-]
Unless you wrote the compiler
K0balt 66 days ago [-]
Yeah, it’s basically never the compiler. That’s how you know you are truly desperate… when you think you’ve eliminated everything else lol.
throwup238 68 days ago [-]
The joys of modbus PLCs, I take it?
K0balt 67 days ago [-]
Ah, yes. But a roll-your-own device with C++ on bare metal, so lots more fun.
(we’ll need a few thousand of these, and the off-the-shelf solution is around 1k vs $1.50 for RYO)
By the way, the RISC-V Espressif ESP32-C3 is a really amazing device for < $1. It’s actually cheaper to go Modbus TCP over WiFi than to actually put RS485 on the board with something like a MAX485 and the associated components. It also does Zigbee and BT, and the Espressif libraries for the radio stack are pretty good.
Color me favorably impressed with this platform.
m463 67 days ago [-]
I wonder if there is a law of coincidental successes too (if you're an old timer, you might call this some sort of Mr. Magoo law, or maybe "it seems to work for me").
taneq 67 days ago [-]
This is the root of 'pigeon religions'. Someone sees a string of events and infers a causal link between them and an outcome. Confirmation bias kicks in and they notice when this string of events occurs again, which is made more likely by the fact that the events in the string are largely the person's own actions, which they believe will produce the desired outcome. They tell their friends and soon a whole group of people believe that doing those things is necessary to produce that outcome.
That's how you get things like equipment operators insisting that you have to adjust the seat before the boot will open.
codeulike 68 days ago [-]
“I am sitting with a philosopher in the garden; he says again and again 'I know that that’s a tree', pointing to a tree that is near us. Someone else arrives and hears this, and I tell him: 'This fellow isn’t insane. We are only doing philosophy.”
― Ludwig Wittgenstein
tucnak 67 days ago [-]
I often wonder if LLMs would have made Wittgenstein cry...
pjc50 67 days ago [-]
It's remarkable how LLMs have skipped any kind of philosophical grounding for "how do we know that this output is valid?" and just gone straight to "looks good to me". Very postmodernist. Also why LLMs are going to turn us into a low-trust society.
A tool for filling the fields with papier-mache cows.
ndndjdjdn 67 days ago [-]
The scary thing is the excellent advances in all the other AI/ML needed to fake people: text to speech and back, YOLO, video generation. The illusion might become the reality. We need a few generations to die (100 years' time?) before we will shake off this need to even be human! Who is going to say no to a perfect memory implant? Now a never-get-dementia implant. And so on! Finally, what is the cow even?
LemonyOne 67 days ago [-]
Because reality is imperfect, and our perception of reality is even less perfect than that. And reality is full of "good enough" things, so if nature is "ok" with "good enough" things, why not artificial things?
ben_w 67 days ago [-]
> A tool for filling the fields with papier-mache cows.
Cargo culting as a service.
osullivj 67 days ago [-]
Suspect not, as the later Wittgenstein tells us "the meaning is the use"; don't look at the dictionary definition, look at many examples. And that's what LLMs do.
tucnak 65 days ago [-]
Tears of joy!
awanderingmind 67 days ago [-]
I don't know, but I reckon he would have been unimpressed.
tucnak 67 days ago [-]
I would argue the opposite; LLM technology is the first viable means to computing "language games" as such, and quite in line with the late W. theory.
orbisvicis 67 days ago [-]
I'm not sure I see the big deal. Justification is on a scale of 0 to 1, and at 1 you are omniscient. We live in a complicated world; no one has time to be God, so you just accept your 0.5 JTB and move on.
Or for the belief part, well, "it's not a lie if you believe it".
And as for the true bit, let's assume that there really is a cow, but before you can call someone over to verify your JTB, an alien abducts the cow and leaves a crop circle. Now all anyone sees is a paper-mache cow so you appear the fool but did have a true JTB - Schroedinger's JTB. Does it really matter unless you can convince others of that? On the flip side, even if the knowledge is wrong, if everyone agrees it is true, does it even matter?
JTBs only exist to highlight bad assumptions, like being on the wrong side of a branch predictor. If you have a 0.9 JTB but get the right answer 0.1 of the time and don't update your assumptions, then you have a problem. One statue in a field? Not a big deal! *
* Unless it's a murder investigation and you're Sherlock Holmes (a truly powerful branch predictor).
orbisvicis 67 days ago [-]
edit: Then there's the whole "what is a cow" thing. Like if you stuffed a cow carcass with a robot and no one could tell the difference, would that still be a cow? Or what if you came across a horrifying cow-horse hybrid, what would you call that? Or if the cow in question had a unique mutation possessed by no other cow - does it still fit the cow archetype? For example, what if the cow couldn't produce milk? Or was created in a lab? Which features are required to inherit cow-ness? This is an ambiguity covered by language, too. For example, "cow" is a pejorative not necessarily referring to a bovine animal.
edit: And also the whole "is knowledge finite or infinite?". Is there ever a point at which we can explain everything, science ends and we can rest on our laurels? What then? Will we spend our time explaining hypotheticals that don't exist? Pure theoretical math? Or can that end too?
pelorat 67 days ago [-]
A robot in a cow carcass is not a cow, it's a "robot in a cow carcass". Someone might believe it's a cow because they lack crucial information but that's on them, doesn't change the fact.
A cow-horse hybrid is not a cow, it's a cow-horse hybrid.
A cow with a genetic mutation is a cow with a genetic mutation.
A cow created in a lab, perhaps even grown 100% by artificial means in-vitro is of course still a cow since it has the genetic makeup of a cow.
The word cow is the word cow, its meaning can differ based on context.
Things like this are why philosophers enjoy zero respect from me and why I'm an advocate for abolishing philosophy as a subject of study and also as a profession. Anyone can sit around thinking about things all day. If you spend money on studying it at a university you're getting scammed.
Also, knowledge is finite based purely on the assumption that the universe is finite. An observer outside the universe would be able to see all information in the universe, and they would conclude that you can't pack infinite amounts of knowledge into a finite volume.
williamdclt 67 days ago [-]
While I tend to also wave away philosophers, as it always boils down to unclear definitions, I don’t think your argument answers the question at all.
From “it has the genetic makeup of a cow”, you’re saying that what makes a cow a cow is the genetic makeup. But then which part of that DNA defines the cow? What can vary, and by how much, before a cow stops being a cow?
The point is that you can give any definition of “cow”, and we can imagine a thing that fits this definition yet you’d probably not consider a cow. It’s a reflection on how language relates to reality. Whether it’s an interesting point or not is left to the reader (I personally don’t think it is)
fenomas 67 days ago [-]
I have this pet theory that Philosophy is kind of the Alternative Medicine of intellectual pursuits. In the same way that Alternative Medicine is doomed to consist of stuff that doesn't work (because anything proven to work becomes "Medicine"), Philosophy is made entirely of ideas that can't be validated through observation (because then they'd be Science), and also can't be rigorously formalized (because then they'd be Math).
So for any given claim in Philosophy, if you could find a way to either (a) compare it to the world or (b) state it in unambiguous symbolic terms, then we'd stop calling it Philosophy. As a result it seems like the discipline is doomed to consist of unresolvable debates where none of the participants even define their terms quite the same way.
Crazy idea, or no?
pegasus 67 days ago [-]
From Will Durant's The Story of Philosophy:
"Some ungentle reader will check us here by informing us that philosophy is as useless as chess, as obscure as ignorance, and as stagnant as content. “There is nothing so absurd,” said Cicero, “but that it may be found in the books of the philosophers.” Doubtless some philosophers have had all sorts of wisdom except common sense; and many a philosophic flight has been due to the elevating power of thin air. Let us resolve, on this voyage of ours, to put in only at the ports of light, to keep out of the muddy streams of metaphysics and the “many-sounding seas” of theological dispute. But is philosophy stagnant? Science seems always to advance, while philosophy seems always to lose ground. Yet this is only because philosophy accepts the hard and hazardous task of dealing with problems not yet open to the methods of science—problems like good and evil, beauty and ugliness, order and freedom, life and death; so soon as a field of inquiry yields knowledge susceptible of exact formulation it is called science. Every science begins as philosophy and ends as art; it arises in hypothesis and flows into achievement. Philosophy is a hypothetical interpretation of the unknown (as in metaphysics), or of the inexactly known (as in ethics or political philosophy); it is the front trench in the siege of truth. Science is the captured territory; and behind it are those secure regions in which knowledge and art build our imperfect and marvelous world. Philosophy seems to stand still, perplexed; but only because she leaves the fruits of victory to her daughters the sciences, and herself passes on, divinely discontent, to the uncertain and unexplored."
fenomas 67 days ago [-]
Thanks, that's a hell of a quote! Though one suspects that Alternative Medicine would describe itself in similar terms, given the chance..
grvbck 67 days ago [-]
> for any given claim in Philosophy, if you could find a way to either (a) compare it to the world or (b) state it in unambiguous symbolic terms…
Not a crazy idea – that is called logic. Which is a field of philosophy. Philosophy and math intersect more than many people think.
dogleash 67 days ago [-]
Science and Math started as part of Philosophy. They just split out and became large specializations of their own. Schools for Math and Science still graduate Doctors of Philosophy for a reason.
Even the Juris Doctor is a branch of philosophy. After all, what is justice?
fenomas 67 days ago [-]
Sure, I hoped it might go without saying that I meant Philosophy as the term is used now - post-axiomatic systems and whatnot, not as the term was used when it encompassed the two things I'm comparing it to.
grvbck 67 days ago [-]
My pet peeve is that a lot of people who have never studied an hour of philosophy think that this is what people who study philosophy do.
"Anyone can sit around thinking about things all day" is like saying "anybody can sit and press keys on a keyboard all day".
I took a semester of philosophy at uni, perhaps the best invested time during my years there and by far more demanding than most of what followed. 100 % recommend it for anyone who wants to hone their critical reasoning skills and intellectual development in general.
orbisvicis 67 days ago [-]
Oh, this is a fun Gettier, with some language ambiguities, and some ship of Theseus sprinkled in! Let's say some smart-aleck travels back in time to when the English language was being developed and replaces all cows with robot cows such that current cows remain biological. So technically the word "cow" refers only to robot cows. What then?
yldedly 67 days ago [-]
You've called J and T into question, so let's do B as well. Physicists know that QM and relativity can't both be true, so it's fair to say that they don't believe in these theories, in a naive sense at least. In general, anyone who takes Box's maxim that all models are wrong (but some are useful) to heart doesn't fully believe in any straightforward sense. But clearly we'd say physicists do have knowledge.
Maxatar 67 days ago [-]
Sure we'd say physicists have knowledge of quantum mechanics and general relativity. And we can also say physicists have knowledge of how to make predictions using quantum mechanics and general relativity. In this sense, general relativity is no more wrong than a hammer is wrong. Relativity is simply a tool that a person can use to make predictions. Strictly speaking then relativity is not itself right or wrong, rather it's the person who uses relativity to predict things who can be right or wrong. If a person uses general relativity incorrectly, which can be done by applying it to an area where it's not able to make predictions such as in the quantum domain, then it's the person who uses relativity as a tool who is wrong, not relativity itself.
As a matter of linguistic convenience, it's easier to say that relativity (or theory X) is right means that people who use relativity to make predictions make correct predictions as opposed to relativity itself being correct or incorrect.
yldedly 67 days ago [-]
My point is that QM and GR make very different claims about what exists. Perhaps it's possible to unify the descriptions. But more likely there will be a new theory with a completely different description of reality.
On small scales, GR and Newtonian mechanics make almost the same predictions, but make completely different claims about what exists in reality. In my view, if the theories made equally good predictions, but still differed so fundamentally about what exists, then that matters, and implies that at least one of the theories is wrong. This is more a realist, than an instrumentalist position, which perhaps is what you subscribe to, but tbh instrumentalism always seemed indefensible to me.
Maxatar 67 days ago [-]
If you are aware that "Maxatar's conjecture is that 1 + 1 = 5", then it's correct to say that you have knowledge about "Maxatar's conjecture", regardless of whether the conjecture is actually true or false. Your knowledge is that there is some conjecture that 1 + 1 = 5, not that it's actually true.
In that sense, it's also correct to say that physicists have knowledge of relativity and quantum mechanics. I don't think any physicist, including Einstein himself, thinks that either theory is actually true, but they do have knowledge of both theories in much the same way that one has knowledge of "Maxatar's conjecture", and in much the way that you have knowledge of what the flat Earth proposition is, despite them being false.
It seems fairly radical to believe that instrumentalism is indefensible, or at least it's not clear what's indefensible about it. Were NASA physicists indefensible to use Newtonian mechanics to send a person to the moon because Newtonian mechanics are "wrong"?
What exactly is indefensible? The observation that working physicists don't really care about whether a physical theory is "real", versus trying to come up with formal descriptions of observed phenomena to make future predictions, regardless of whether those formal descriptions are "real"?
If someone chooses to engage in science by coming up with descriptions and models that are effective at communicating observations and experimental results to other people, and whose results go on to allow for engineering advances in technology, are they doing something indefensible?
yldedly 67 days ago [-]
Yes, it's correct to say that I have knowledge of your conjecture, and in the same way that physicists have knowledge of QM and GR regardless of their truth status, but beyond just having knowledge of the theory, they also have knowledge of the reality that the theory describes.
>Were NASA physicists indefensible to use Newtonian mechanics to send a person to the moon because Newtonian mechanics are "wrong"?
No, it was defensible, and that's exactly my point. Even though they didn't believe in the content of the theory (and ignoring the fact that they know a better theory), they do have knowledge of reality through it.
I don't think instrumentalism makes sense for reasons unrelated to this discussion. A scientist can hold instrumentalist views without being a worse scientist for it, it's a philosophical position. Basically, I think it's bad metaphysics. If you refuse to believe that the objects described by a well-established theory really exist, but you don't have any concrete experiment that falsifies it or a better theory, then to me it seems like sheer refusal to accept reality. I think people find instrumentalism appealing because they expect that any theory could be replaced by a new one that could turn out very different, and then they see it as foolish to have believed the old one, so they straight up refuse to believe or care what any theory says about reality. But you always believe something, whether you are aware of it or not, and the question is whether your beliefs are supported by evidence and logic.
ninetyninenine 67 days ago [-]
Your view is more in line with the philosophy of science, which holds that nothing can ever be justified.
It’s a bit long-winded and gets into much more detail, but I will post ChatGPT’s most relevant response below:
You’re right to point out that complexity alone doesn’t necessarily rule out deduction. Deduction can, in principle, work even in highly complex systems as long as the premises are perfectly known and logically valid. So the real issue with why deduction fundamentally does not exist in reality comes down to the nature of human knowledge and the way we interact with reality itself. Here’s why deduction struggles at a more fundamental level:
1. The Problem of Incomplete Knowledge
In mathematics and formal logic, deduction works because the premises are often abstract, well-defined, and complete within a given system (e.g., “All triangles have three sides”). In contrast, human knowledge of reality is never complete. We can never be sure we have all the relevant facts, laws, or variables. Even with the most advanced observational tools, there are always things we don’t know or can’t foresee.
• In mathematics: Premises like “All even numbers are divisible by 2” are universally true within that system.
• In reality: We might observe many instances of a phenomenon and think we know the rules, but there could always be exceptions or unknown factors (as in the Black Swan problem).
Because we cannot ever have perfect, complete premises about the world, any deductions we make based on our observations are always vulnerable to being undermined by new information.
2. Reality is Unbounded and Open-Ended
Mathematical systems and formal logic operate in closed systems with clearly defined rules. Reality, on the other hand, is open-ended and continuously evolving. There’s no fixed “set” of all knowledge about the universe. What we take to be the laws of nature today might change tomorrow with new discoveries, so the premises we use for deductive reasoning in the real world are inherently uncertain.
For instance, before the discovery of quantum mechanics, classical mechanics seemed to perfectly describe the physical world. Once quantum theory emerged, the premises on which classical deductions were made had to be re-examined.
• In a closed system (like math): You can set the premises and they remain stable.
• In reality: The premises are constantly subject to change or reinterpretation as we learn more.
3. The Problem of Infinite Regress (Uncertainty in Premises)
Even if you think you have solid premises to base deduction on, those premises themselves depend on other premises, which in turn depend on further premises. This infinite regress means that you can never be certain that your foundational premises are truly, absolutely correct.
For example:
• You might reason deductively that “all swans are white” because all observed swans have been white.
• But this premise itself is based on inductive observation, which is inherently fallible.
• Even if we had observed all swans ever seen by humans, we still couldn’t deduce that “all swans are white” without being omniscient, since future discoveries could prove otherwise.
In the end, any attempt to apply deduction to reality rests on premises that ultimately cannot be guaranteed to be perfectly, universally true, leading to a breakdown in the validity of deduction in real-world scenarios.
4. The Distinction Between Reality and Abstraction
Mathematics and logic are abstract constructs—they exist independently of the physical world and follow internally consistent rules. Reality, on the other hand, is not an abstract system; it is something we experience, observe, and interact with. This creates a fundamental mismatch:
• Abstractions (like mathematics) allow us to create premises and rules that are certain, because we define them.
• Reality doesn’t conform to these strict, definable rules—it involves uncertainty, chance, and emergent properties that abstractions can’t fully capture.
Because reality is not abstract, we cannot reduce it to a system of premises and rules in the same way we can with mathematics. Any attempt to do so will always miss something essential, undermining the validity of deduction in practice.
5. Chaos and Uncertainty in Physical Systems (ChatGPT is wrong here, I deleted it… it references chaos theory which is technically still deterministic, only quantum theory says things are fundamentally unknowable so ChatGPT is right from the perspective of fundamental uncertainty but he used chaos theory wrongly here in his reasoning)
Conclusion: Fundamental Uncertainty and Incompleteness
The fundamental issue with deduction in reality is that human knowledge is inherently incomplete and uncertain. Reality is an open, evolving system where new discoveries and unforeseen events can change what we thought we knew. Deduction requires absolute certainty in its premises, but in reality, we can never have that level of certainty.
At its core, the reason deduction doesn’t fully apply to reality is because reality is far more complex, open-ended, and fundamentally uncertain than the closed, abstract systems where deduction thrives. We cannot create the perfect, unchanging premises needed for deduction, and as a result, deductions in the real world are always prone to failure when confronted with new information or complexities we hadn’t accounted for.
orbisvicis 67 days ago [-]
Does the philosophy of science theorize anything about the end or limits of science and knowledge? I find that topic fascinating.
ninetyninenine 67 days ago [-]
Yes. It says nothing can be proven in science, and therefore nothing can be proven about reality as we know it. Things can only be falsified. But proof is the domain of mathematics… not of reality.
Read the example of the black swan in the wiki link.
Maxatar 67 days ago [-]
Seems contradictory but you can clarify. If a proposition can be falsified then that is knowledge that said proposition is false, and the negation is true. If nothing can be proven or known then it must follow that nothing can be falsified.
ninetyninenine 67 days ago [-]
First you need intuition on what's going on.
"All swans are white."
This statement cannot be proven because it's not possible to observe all swans. There may be some swan in some hidden corner of the earth (or universe) that I did not see.
If I see one black swan, I have falsified that statement.
When you refer to "Not all swans are white", this statement can be proven true, but why? This is because the original statement is a universal claim and the negation is a particular claim.
The key distinction between universal claims and particular claims explains why you can "prove" the statement "Not all swans are white." Universal claims, like "All swans are white," attempt to generalize about all instances of a phenomenon. These kinds of statements can never be definitively proven true because they rely on inductive reasoning—no matter how many white swans are observed, there’s always the possibility that a counterexample (a non-white swan) will eventually be found.
In contrast, particular claims are much more specific. The statement "Not all swans are white" is a particular claim because it is based on falsification—it only takes the observation of one black swan to disprove the universal claim "All swans are white." Since black swans have been observed, we can confidently say "Not all swans are white" is true.
Popper's philosophy focuses on how universal claims can never be fully verified (proven true) through evidence, because future observations could always contradict them. However, universal claims can be falsified (proven false) with a single counterexample. Once a universal claim is falsified, it leads to a particular claim like "Not all swans are white," which can be verified by specific evidence.
In essence, universal claims cannot be proven true because they generalize across all cases, while particular claims can be proven once a falsifying counterexample is found. That's why you can "prove" the statement "Not all swans are white"—it’s based on specific evidence from reality, in contrast to the uncertain generality of universal claims.
To sum it up. When I say nothing can be proven and things can only be falsified... it is isomorphic to saying universal claims can't be proven, particular claims can.
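A toy sketch of that asymmetry (mine, not from Popper): over an open-ended stream of observations, one counterexample settles the universal claim negatively, but no finite amount of checking settles it positively.

    def check_universal(observations, predicate):
        # One failing observation falsifies the universal claim for good...
        for i, obs in enumerate(observations):
            if not predicate(obs):
                return f"falsified by observation {i}: {obs!r}"
        # ...but exhausting what we've seen so far proves nothing about what's next.
        return "consistent so far, not proven"

    swans = ["white", "white", "black", "white"]
    print(check_universal(swans, lambda colour: colour == "white"))
    # -> falsified by observation 2: 'black'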
mistermann 67 days ago [-]
Is 1=1 disputed in philosophy of science?
orbisvicis 67 days ago [-]
Probably only inasmuch as 1 is a theoretical framework. While 1*N dollars is nice to have, I'd probably have more dollars without fractional rounding.
dogleash 67 days ago [-]
> I'm not sure I see the big deal.
Tumblr is loginwalled now, so I can't find the good version of this, but I'll try and rip it:
Philosophical questions like "what is knowledge" are hard precisely because everyone has an easy and obvious explanation that is sufficient to get them through life.
But, when forced to articulate that explanation, people often find it to be incompatible with other people's versions. Upon probing, the explanations don't hold up at all. This is why some ancient Greek thought experiments can be mistaken for zen koans.
Yeah, you can get by in life without finding a rigorous answer. The vast majority of human endeavor beyond subsistence can be filed under the category "I'm not sure I see the big deal."
To say that about the question of knowledge and then vamp for 200 words is not refusing to engage. It's patching up a good-enough answer to suit a novel challenge and moving on. Which is precisely why these questions are hard, and why some people are so drawn to exploring for an answer.
ilbeeper 67 days ago [-]
Bayesian epistemology is indeed one of the developments in the field that avoids Gettier problems.
jumping_frog 67 days ago [-]
Biology makes it even more complicated. In Capgras delusion, if you see your mother you consider her to be an imposter, while if you hear your mother's voice you consider her to be real.
While individual particles remain in quantum superposition, their relative positions create a collective consensus in the entanglement network. This consensus defines the structure of macroscopic objects, making them appear well-defined to observers, including Schrödinger's cat.
nmaley 68 days ago [-]
Gettier cases tell us something interesting about truth and knowledge. This is that a factual claim should depict the event that was the effective cause of the claim being made. Depiction is a picturing relationship: a correspondence between the words and a possible event (eg a cow in a field). Knowledge is when the depicted event was the effective cause of the belief. Since the paper mache cow was the cause of the belief, not a real cow, our intuitions tell us this is not normal knowledge. Therefore, true statements must have both a causal and depictional relationship with something in the world. Put another way, true statements implicitly describe a part of their own causal history.
SuchAnonMuchWow 67 days ago [-]
Mathematicians already explored exactly what you describe: this is the difference between classical logic and intuitionistic logic:
In classical logic, statements can be true in and of themselves even if there is no proof of them, but in intuitionistic logic statements are true only if there is a proof of them: the proof is the cause for the statement to be true.
In intuitionistic logic, things are not as simple as "either there is a cow in the field, or there is none" because as you said, for the knowledge of "a cow is in the field" to be true, you need a proof of it. It brings lots of nuance, for example "there isn't no cow in the field" is a weaker knowledge than "there is a cow in the field".
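That nuance is easy to see with a proof assistant. A minimal Lean 4 sketch (my own illustration of the point): the constructive direction goes through with a direct term, while recovering P from "not not P" needs a classical axiom.

    -- Constructively fine: a proof of P refutes "there is no cow in the field".
    example (P : Prop) (hp : P) : ¬¬P :=
      fun hnp => hnp hp

    -- The converse is the classical step: it is not provable by a purely
    -- constructive term, but a classical axiom closes it.
    example (P : Prop) (hnnp : ¬¬P) : P :=
      Classical.byContradiction hnnp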
ndndjdjdn 67 days ago [-]
It is a fascinating topic. I spent a few hours on it once. I remember vaguely that the logic is very configurable and you have a lot of choices, like whether you include the law of excluded middle or not, and things like that depending on your taste or problem. I might be wrong; it was 8 years ago and I spent a couple of weeks reading about it.
Also no surprise the rabbit hole came from Haskell, where those types (huh) are attracted to this more foundational theory of computation.
Dumb counterpoint: if it’s not a true belief, is it a false negative or a false positive? Any third option I can think of starts with “true”…
QED - proof by terminological convention!
PaulDavisThe1st 68 days ago [-]
> true, because it doesn't make sense to "know" a falsehoood
That's a problem right there. Maybe that made sense to the Greeks, but it definitely doesn't make any sense in the 21st century. "Knowing" falsehoods is something we broadly acknowledge that we all do.
JadeNB 68 days ago [-]
> That's a problem right there. Maybe that made sense to the Greeks, but it definitely doesn't make any sense in the 21st century. "Knowing" falsehoods is something we broadly acknowledge that we all do.
I think the philosophical claim is that, when we think we know something, and the thing that we think we know turns out to be false, what has happened isn't that we knew something false, but rather that we didn't actually know the thing in the first place. That is, not our knowledge, but our belief that we had knowledge, was mistaken.
(Of course, one can say that we did after all know it in any conventional sense of the word, and that such a distinction is at the very best hair splitting. But philosophy is willing to split hairs however finely reason can split them ….)
PaulDavisThe1st 67 days ago [-]
The problem with the hair splitting is that it requires differentiating between different brain states over time where the only difference is the content.
On Jan 1 2024 I "know" X. Time passes. On Jan 1 2028, I "know" !X. In both cases, there is
(a) something it is like to "know" either X or !X
(b) discernible brain states that correspond to "knowing" either X or !X and that are distinct from "knowing" neither
Thus, even if you don't want to call "knowing X" actually "knowing", it is in just about every sense indistinguishable from "knowing !X".
Also, a belief that we had the knowledge that relates to X is indistinguishable from a belief that we had the knowledge that relates to !X. In both cases, we possess knowledge which may be true or false. The knowledge we have at different times alters; at all times we have a belief that we have the knowledge that justifies X or !X, and we are correct in that belief - it is only the knowledge itself that is false.
kragen 67 days ago [-]
Maybe the people who use "know" in the way you don't are talking about something other than brain states or qualia. There are lots of propositions like this; if I say, "I fathered Alston", that may be true or false for reasons that are independent of my brain state. Similarly with "I will get home tomorrow before sunset". It may be true or false; I can't actually tell. The same is true of the proposition "I know there are coins in the pocket of the fellow who will get the job", if by "know" we mean something other than a brain state, something we can't directly observe.
You evidently want to use the word "know" exclusively to describe a brain state, but many people use it to mean a different thing. Those people are the ones who are having this debate. It's true that you can render this debate, like any debate, into nonsense by redefining the terms they are using, but that in itself doesn't mean that it's inherently nonsense.
Maybe you're making the ontological claim that your beliefs about X don't actually become definitely true or false until you have a way to tell the difference? A sort of solipsistic or idealistic worldview? But you seem to reject that claim in your last sentence, saying, "it is only the knowledge itself that is false."
PaulDavisThe1st 67 days ago [-]
"I know I fathered Alston" .. the reasons it may be true or false are indeed independent of brain state. But "knowing" is not about whether it is true or false, otherwise this whole question becomes tautological.
If someone is just going to say "It is not possible to know false things", then sure, by that definition of "know" any brain state that involves a justified belief in a thing that is false is not "knowing".
But I consider that a more or less useless definition of "knowing" in context of both Gettier and TFA.
kragen 67 days ago [-]
I wasn't talking about whether it was true or false that I know I fathered Alston. I didn't say anything about knowing I fathered Alston at all. I was talking about whether it was true or false that I fathered Alston, which (I hope you'll agree) is not a question of my brain state; it's a question of Alston's genetic constitution, and my brain state is entirely irrelevant.
I think that, without using a definition of "knowing" that fits the description of definitions you are declaring useless, you won't be able to make any sense of either Gettier or TFA. So, however useful or useless you may find it in other contexts, in the context of trying to understand the debate, it's a very useful family of definitions of "knowing"; it's entirely necessary to your success in that endeavor.
mistermann 67 days ago [-]
How about "beliefs that seem to be true are not necessarily true, and the causes of those beliefs may not be valid, especially if examined more closely"?
Or, try renaming the variables and see if it still bothers you identically.
kragen 67 days ago [-]
No, I think many people use a definition of "know" that doesn't include "knowing" falsehoods. Possibly you and they have fundamentally different beliefs about the nature of reality, or possibly you are just using different definitions for the same word.
mannykannot 67 days ago [-]
I agree that it is not often helpful to avoid the issue by redefining a term in a way not originally intended (though it may be justified if the original definition is predicated on an unjustifiable (and sometimes tacit) assumption.)
Furthermore, OP’s choice of putting “know” in quotes seems to suggest that author is not using the word as conventionally understood (though, of course, orthography is not an infallible guide to intent.)
IMHO, Gettier cases are useful only in that they raise the issue of what constitutes an acceptable justification for a belief to become knowledge.
Gettier cases are specifically constructed to be about true beliefs, and so do not challenge the idea that facts are true. Instead, one option to resolve the paradox is to drop the justification requirement altogether, but that opens the question of what, if anything, we can know we know. At this point, I feel that I am just following in Hume’s footsteps…
kragen 67 days ago [-]
I think making sense of the Gettier debate does depend on using a definition of "know" that isn't just a question of what state the "knower's" brain is in. Gettier's point is not that truth isn't necessary; it's that, generally when people say "know", they are referring not only to brain states and truth, but also something else, specifically, some kind of causal connection between the two. I don't think you can construct a definition of "know" by which Gettier cases aren't "knowing" to which truth is irrelevant.
mannykannot 66 days ago [-]
Gettier cases can be readily understood as a response to the conventional starting point for epistemology: the position that having knowledge is a matter of having a justified belief in a true proposition (often abbreviated to JTB.) (This really only concerns propositional knowledge, as opposed to, for example, knowing how to ride a bicycle.)
To know something in this sense seems to require several things: firstly, that the relevant proposition is true, which is independent of one's state of mind (not everyone agrees, but that is another issue...) Secondly, it seems to require that one knows what the relevant proposition is, which is a state of mind. Thirdly, having a belief that it is true, which is also a state of mind.
If we left it at that, there's no clear way to find out which propositions are true, at least for those that are not clearly true a priori (and even then, 'clearly' is problematic except in trivial cases, but that is yet another issue...) Having a justification for our belief gives us confidence that what we believe to be true actually is (though it rarely gives us certainty.)
But what, then, is justification? If we take the truth of the proposition alone as its justification, we get stuck in an epistemic loop. I think you are right if you are suggesting that good justifications are often in the form of causal arguments, but by taking that position, we are casting justification as being something like knowledge: having a belief that an argument about causes (or anything else, for that matter) is sound, rather than a belief that a proposition states a fact - but having a justified belief in an argument involves knowing that its premises are correct...
It is beginning to look like tortoises all the way down (as in Lewis Carroll's "What the Tortoise Said to Achilles".)
This is the true analytic answer! More fundamentally, “know” is a move in whatever subtype of the English language game you’re playing at the moment, and any discussions we have about what it “really” or “truly” means should be based on those instrumental concerns.
E.g. a neurologist would likely be happy to speak of a brain knowing false information, but a psychologist would insist that that’s not the right word. And that’s not even approaching how this maps to close-but-not-quite-exact translations of the word in other languages…
throw310822 67 days ago [-]
And, do they know if their definition is the right one? And how do they know it? And, is it actually true?
naasking 67 days ago [-]
False propositions are not knowledge, only true propositions are knowledge. Therefore you cannot know something to be true that is actually false, you can only believe it to be true. Precisely describing how one moves from belief to knowledge is exactly what epistemology is about.
throw310822 67 days ago [-]
> False propositions are not knowledge, only true propositions are knowledge
From my point of view, "to know" is a subjective feeling, an assessment of the degree of faith we put in a statement. "Knowledge" instead is an abstract concept, a corpus of statements, similar to "science". People "know" false stuff all the time (for some definition of "true" and "false", which may also vary).
trashtester 67 days ago [-]
Precisely, but I think the feeling of knowing may be defined differently for the person having the feeling and from the viewpoint of others.
A flat-earther may feel they "know" the earth is flat. I feel that I "know" that their feeling isn't "true" knowledge.
This is the simple case where we all (in this forum, or at least I hope so) agree. If we consider controversial beliefs, such as the existence of God, where Covid-19 originated or whether we have free will, people will often still feel they "know" the answer.
In other words, the experience of "knowing" is not only personal but also interpersonal, and often a source of conflict. Which may be why people fight over the definition.
In reality, there are very few things (if any) that can be "known" with absolute certainty. Anyone who has studied modern Physics would "know" that our intuition is a very poor guide to fundamental knowledge.
The scientific method may be better in some ways, but even that can be compromised. Also, it's not really useful for people outside the specific scientific field. For most people, scientific findings are only "known" second hand, from seeing the scientists as authorities.
A bigger problem, though, is that a lot of people are misusing the label "scientific" to justify beliefs or propaganda that has only weak (if any) support from the use of hard science.
In the end, I don't think the word "knowledge" has any fundamental correspondence to something essential.
Instead, I see the ability to "know" something as a characteristic of the human brain. It's an ability that causes the brain to lock onto one belief and disregard all others. It appears to be a tendency we all have, which means it probably evolved because it provides some advantage.
The types of "knowledge" that we feel we "know", to the extent that we learn them from others, seem to evolve in parallel to this as memes/memeplexes (using Dawkins's original sense of "meme").
Such memes spread in part virally, by pure replication. But if they convey advantages to their hosts they may spread more effectively.
For example, after Galilei/Newton, Physics provided several types of competitive advantage to those who saw it as "knowledge". Some economic, some military (like calculating artillery trajectories). This was especially the case in a politically and religiously fragmented Europe.
The memeplex of "Science" seems to have grown out of that. Not so much because it produces absolute truths, but more because those who adopted a belief in science could reap benefits from it that allowed them to dominate their neighbours.
In other areas, religious/cultural beliefs (also seen as "knowledge" by the believers) seem to have granted similar power to the believers.
And it seems to me that this is starting to become the case again, especially in areas of the world where the government provides a welfare state to all, which prevents scientific knowledge from granting a differential survival/reproductive advantage to those who still base their knowledge on Science.
If so, Western culture may be heading for another Dark Age....
mistermann 67 days ago [-]
Many great points!
I thought this was interesting:
> Instead, I see the ability to "know" something as a characteristic of the human brain. It's an ability that causes the brain to lock onto one belief and disregard all others. It appears to be a tendency we all have, which means it probably evolved because it provides some advantage.
It is substantially hardware (the brain) and software (the culturally conditioned mind).
Rewind 100 years and consider what most people "knew" that black people were. Now, consider what most people nowadays "know" black people are not. So, definitely an improvement in my opinion, but if we can ever get our heads straight about racial matters I think we'll be well on our way to the second enlightenment.
paganel 67 days ago [-]
> False propositions are not knowledge, only true propositions are knowledge.
This is something that a lot of Greeks would have had issues with, most probably Heraclitus, and Protagoras for sure. Restricting ourselves to Aristotelian logic back in the day has been extremely limiting, so much so that a lot of modern philosophers cannot even comprehend how it is to look outside that logic.
naasking 67 days ago [-]
> Restricting ourselves to Aristotelian logic back in the day has been extremely limiting, so much so that a lot of modern philosophers cannot even comprehend how it is to look outside that logic.
That's arguably good. If you restrict yourself to something that you know is a valid method of ascertaining truth, then you have much higher confidence in the conclusion. The fact that we still struggle even with getting this restricted method right shows that restrictions are necessary and good!
Then you bootstrap your way to a more comprehensive method of discourse from that solid foundation. Like Hilbert's program, which ultimately revealed some incredibly important truths about logic and mathematics.
paganel 67 days ago [-]
That’s the thing: it only ascertains a restricted form of truth (in the case of Aristotelian logic, what would be called “Aristotelian” truth), and I’m not sure you can then make the step from “Aristotelian” truth to, let’s say, “Heraclitean” truth. First, because of the sociology of science: everything seen as not-“Aristotelian” tends to be regarded by default as suspect and intellectually untouchable (just look at the bad renown Protagoras still has after 2500 years). Second, and I’m not sure how best to put it, restricting ourselves for so long to one thing and one thing only when it comes to the foundations of truth has made us “blind” to any other options/possibilities; we can no longer take our eyes off the cave walls and turn them towards the outside world and towards the light.
And to give a concrete example related to this as a whole, people should have known that getting to know something by not knowing it more and more is a valid epistemological take; just look at Christian Orthodox Hesychasm and its doctrine about God (paraphrased, it goes like this: “the more you are aware of the fact that you don’t know God, the more you actually know/experience God”). Christian Orthodox Hesychasm is, of course, in direct connection with neo-Platonism/Plotinism, but because the neo-Platonist “doctrine” on truth has never been mathematically formalized (presuming that would even be possible), the scientific world chooses to ignore it and only focuses on its own restricted way of looking at truth and, in the end, of experiencing truth.
mistermann 67 days ago [-]
> "Knowing" falsehoods is something we broadly acknowledge that we all do.
Only in abstract discussions like this one. And in some concrete discussions on certain topics, not "knowing" seems to be essentially impossible for most non-silent participants.
n4r9 68 days ago [-]
Could you elaborate what you mean by that?
PaulDavisThe1st 67 days ago [-]
We all carry around multiple falsehoods in our heads that we are convinced are true for a variety of reasons.
To say that this is not "knowing" is (as another commenter noted) hair-splitting of the worst kind. In every sense it is a justified belief that happens to be false (we just do not know that yet).
bee_rider 67 days ago [-]
What exactly does it mean to know something then? As distinct from believing it. Just the justification, and then, I guess it doesn't have to be a very good justification if it can be wrong?
mrbombastic 67 days ago [-]
I think, like many things, “know” and “believe” are just shorthand for convenient communication that makes binary something that is really a continuum of probability. That continuum might run from loose theory to fundamental truth about the universe in our minds. Justifications and evidence move things along the continuum, such that we might assign a probability that a thing is true. Things can approach 100% probability but never get there, yet we as mortals need to operate in the world as if we know things, so anything close to 100% we say we “know”. Even though history tells us that even some things we believed to be fundamental truths can be discovered to be wrong.
PaulDavisThe1st 67 days ago [-]
I think I would say that knowing means that your belief can resist challenges (to some degree) and that it is capable of driving behavior that changes others' beliefs.
The strength of the justification is, I would suggest, largely subjective.
n4r9 67 days ago [-]
My issue with this definition is that it includes deluded charlatans, can be applied to unfalsifiable (unknowable, even) propositions, and depends on the gullibility and cognitive biases of the general populace. So for example, Jesus "knew" that he was the son of God, even though a more rational interpretation is that he was mistaken in his own belief but charismatic enough to convince many others. (Please replace Jesus for another religion's prophet if you are Christian!)
Also I don't think this definition fits with people's intuition. At least, certainly not my own. There are times where I realise I'm wrong about something I thought I knew. When I look back, I don't say "I knew this, and I was wrong". I say "I thought I knew this, but I didn't actually know it".
dahart 67 days ago [-]
> What exactly does it mean to know something then?
This is one of the best questions ever, not just for philosophers, but for all us regular plebes to ponder often. The number of things I know is very very small, and the number of things I believe dramatically outnumbers the things I know. I believe, but don’t know, that this is true for everyone. ;) It seems pretty apparent, however, that we can’t know everything we believe, or nothing would ever get done. We can’t all separately experience all things known first-hand, so we rely on stories and the beliefs they invoke in order to survive and progress as a species.
throw310822 67 days ago [-]
> In every sense it is a justified belief that happens to be false
Not to mention what does it even mean for something to be false. For the hypothetical savage the knowledge that the moon is a piece of cheese just beyond reach is as true as it is for me the knowledge that it's a celestial body 300k km away. Both statements are false for the engineer that needs to land a probe there (the distance varies and 300k km is definitely wrong).
ndndjdjdn 67 days ago [-]
The problem is the word "know" being overloaded.
First person know is belief. To some extent: this is just faith! Yes we have faith that the laws of physics won't change tomorrow, or we remember yesterday happened etc. Science tries to push that faith close to fact by verifying the fuck out of everything. But we will never know why anything...
The other "know" is some kind of concept of absolute truth and a coincidence that what someone believes matches this. Whether that coincidence is chance or astute observations or in the paper's case: both.
EasyMark 67 days ago [-]
Maybe, but it’s still a useful overload. No one wants to be pedantic every time they say they know something. It’s more of a spectrum which is really my own overloaded expression. My other one is “it’s a probabilistic matter, there’s no absolute here”
EVa5I7bHFq9mnYK 67 days ago [-]
So the statement is "I know every knowledge is a probabilistic spectrum". Sounds like a liar paradox to me.
nanna 67 days ago [-]
Maybe it shook analytic philosophy, or some subdiscipline thereof, but this really was not registered beyond that within philosophy. Analytic philosophy likes to imagine itself to be philosophy proper, which is nonsense. It's just an overconfident, aggressively territorial branch which hogs all the resources, even though the majority of students yearn for the richness and breadth of something more akin to what is today going by the moniker of Post-Kantian Philosophy (formerly Continental or Modern European philosophy).
tedheath123 67 days ago [-]
Which texts would you recommend students read to learn about Post-Kantian philosophy?
namuol 68 days ago [-]
I always come back to this saying:
“Debugging is the art of figuring out which of your assumptions are wrong.”
(Attribution unknown)
throwawayForMe2 67 days ago [-]
I always thought of what I learned in some philosophy class, that there are only two ways to generate a contradiction.
One way is to reason from a false premise, or as I would put it, something we think is true is not true.
The other way is to mix logical levels (“this sentence is false”).
I don’t think I ever encountered a bug from mixing logical levels, but the false premise was a common culprit.
hnick 67 days ago [-]
I'm not sure if it qualifies as mixing logical levels but I once tracked down a printer bug where the PDF failed to print.
The culprit was an embedded TrueType font that had what (I think) was a strange but valid glyph name with a double forward slash instead of the typical single (IIRC whatever generated the PDF just named the glyphs after characters so /a, /b and then naturally // for slash). Either way it worked fine in most viewers and printers.
The larger scale production printer on the other hand, like many, converted to postscript in the processor as one of its steps. A // is for an immediately evaluated name in postscript so when it came through unchanged, parsing this crashed the printer.
So we have a font, in a PDF, which got turned into Postscript, by software, on a certain machine which presumably advertised printing PDF but does it by converting to PS behind the scenes.
A lot of layers there and different people working on their own piece of the puzzle should have been 'encapsulated' from the others but it leaked.
motohagiography 67 days ago [-]
some possible examples:
security with cryptography is mostly about logical level problems, where each key or operation forms a layer or box. treating these as discrete states or things is also an abstraction over a sequential folding and mixing process.
debugging a service over a network has the whole stack as logical layers.
most product management is solving technical problems at a higher level of abstraction.
a sequence diagram can be a multi-layered abstraction rotated 90 degrees, etc.
PaulDavisThe1st 68 days ago [-]
As long as "your assumptions" includes "I know what I am doing", then OK.
But most people tend not to include that in the "your assumptions" list, and frequently it is the source of the bug.
recursive 67 days ago [-]
What if you never believed that in the first place?
PaulDavisThe1st 67 days ago [-]
Then you're good to ignore that as a possible source of the problem.
mannykannot 67 days ago [-]
Then, to be consistent, you should not trust either your deductions or even your choice of axioms.
In other words, it looks like a form of solipsism.
orbisvicis 67 days ago [-]
You can not know what you are doing and still trust in logic.
But what world it would be if you could flip a coin on any choice and still survive! If the world didn't follow any self-consistent logic, like a Roger Zelazny novel, that would be fantastic. Not sure that qualifies as solipsism, but still. Would society even be possible? Or even life?
Here, as long as you follow cultural norms, every choice has pretty good outcomes.
mannykannot 67 days ago [-]
Logic will only tell you what follows from your choice of axioms, not how to choose them, and only if you can trust your ability to apply it correctly. Absent that, your only option appears to be to put your trust in other people - which is, I suppose, what you are saying in your final paragraph.
mistermann 67 days ago [-]
In certain geographic regions of the planet at least.
recursive 67 days ago [-]
I trust things to varying degrees until I test them. Then I trust them more or less.
yyyfb 67 days ago [-]
Reminds me of "how did this ever work in the first place" bugs. Something that used to work stops working, you look into it, and it seems that the thing looked broken to begin with, but by some lucky miracle something was making it work.
rjrodger 67 days ago [-]
Good to have a name for this!
My favourite debugging technique is "introduce a known error".
This validates that your set of "facts" about the file you think you're editing are actually facts about the actual file you are editing.
For example: is the damn thing even compiling?
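A minimal sketch of that technique in Python, purely for illustration (the function and its body are hypothetical; only the planted failure matters):

    # Step 0 of debugging: introduce a known error. If this assertion does NOT
    # fire when you run the code, your "facts" are about the wrong file, a stale
    # build, or a different deployment -- stop and fix that before debugging.
    def parse_order(payload):
        assert False, "KNOWN ERROR: you are editing the file that is actually running"
        return {"id": payload["id"], "qty": int(payload["qty"])}

Once the known error shows up as expected, you remove it and carry on with the real debugging.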
meowface 67 days ago [-]
I rely on this one a lot. Can save a ton of time that would otherwise be wasted.
Animats 68 days ago [-]
The link to the actual paper [1] seems to be overloaded.
I enjoy this metaphor of the cow and the papier-mâché.
Presumably, there is a farmer who raised the cow, then purchased the papier-mâché, then scrounged for a palette of paints, and meticulously assembled everything in a field -- all for the purpose of entertaining distant onlookers.
That is software engineering. In Gettier's story, we're not the passive observers. We're the tricksters who thought papier-mâché was a good idea.
dsr_ 67 days ago [-]
[Among the problems with] Justifed True Beliefs being "knowledge" is that humans are very bad at accurately stating their beliefs, and when they state those beliefs, they often adopt the inaccurate statement as the actual belief.
Let's take the obscured cow example. Nobody outside the confines of a philosophy experiment believes that there is a cow in the field. They believe that they see something which looks like a cow (this is justified and true) and they also believe, based on past evidence, that what they are seeing is a cow (this is justified but not, in this special case, true.) But if you play this joke on them repeatedly, they will start to require motion, sound, or a better look at the cow shaped object before assigning a high likelihood of there being an actual cow in the field that they are observing. They will also ask you how you are arranging for the real cow to always be conveniently obscured by the fake cow.
Unsurprisingly, gaining additional evidence can change our beliefs.
The phenomenon of a human gaining object permanence is literally the repeated updating of prior probability estimates until we have a strong base estimate that things do not cease to exist when we stop observing them. It happens to all of us early on. (Bayes' Theorem is a reasonable approximation of mental processes here. Don't conclude that it accurately describes everything.)
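A minimal sketch of that updating in Python, with entirely made-up numbers (hypothesis "real cow" versus the single alternative "papier-mâché decoy"):

    def posterior(prior, p_e_given_h, p_e_given_alt):
        # One application of Bayes' theorem against a single alternative hypothesis.
        return p_e_given_h * prior / (p_e_given_h * prior + p_e_given_alt * (1 - prior))

    # Suppose the prankster is known to operate here: half the cow-shaped things are decoys.
    prior = 0.5

    # "It looks exactly like a cow" -- but so does the decoy, so the shape alone
    # carries almost no information and the posterior barely moves.
    after_shape = posterior(prior, p_e_given_h=0.98, p_e_given_alt=0.97)

    # "It moved and mooed" -- decoys rarely do that, so this evidence discriminates.
    after_moo = posterior(after_shape, p_e_given_h=0.90, p_e_given_alt=0.01)

    print(f"after shape: {after_shape:.2f}, after moo and motion: {after_moo:.2f}")
    # after shape: 0.50, after moo and motion: 0.99

Which is just the "require motion, sound, or a better look" behaviour in numbers.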
The papier-mache cow simulation is not something we normally encounter, and hypothesizing it every time is a needless multiplication of entities... until you discover that there is a philosophical jokester building cow replicas. Then it becomes a normal part of your world to have cow statues and cows in fields.
Now, software engineering:
We hold models in our brains of how the software system works (or isn't working). All models are wrong, some are useful. When your model is accurate, you can make good predictions about what is wrong or what needs to be changed in the code or build environment in order to produce a desired change in software behavior. But the model is not always accurate, because we know:
- the software system is changed by other people
- the software system has bugs (because it is non-trivial)
- even if the software system is the same as our last understanding of it, we do not hold all parts of the model in our brains at the same weights. A part that we are not currently considering can have effects on the behaviour we are trying to change.
Eventually we gain the meta-belief that whatever we are poking is not actually fixed until we have tested it thoroughly in practice... and that we may have introduced some other bug in the process.
mjburgess 67 days ago [-]
Bayes' theorem isn't a reasonable approximation, because it isn't answering the question -- it describes what you do when you have the answer.
With Bayes, you're computing P(Model|Evidence) -- but this doesn't explain where Model comes from or why Evidence is relevant to the model.
If you compute P(AllPossibleModels|AllSensoryInput) you end up never learning anything.
What's happening with animals is that we have a certain, deterministic, non-bayesian primitive model of our bodies from which we can build more complex models.
So we engage in causal reasoning, not bayesian updating: P(EvidenceCausedByMyBody| do(ActionOfMyBody)) * P(Model|Evidence)
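As an aside, the observation-versus-intervention distinction itself is easy to see in a toy simulation (Python, with a made-up structural model containing a hidden common cause; nothing here is specific to the body-model claim): conditioning on an observed action and forcing that action with do() give different answers.

    import random

    def p_outcome_given_action_on(n=100_000, intervene=None):
        # Toy structural model: a hidden common cause drives both the "action"
        # and the "outcome". Passing intervene=True forces the action regardless
        # of the hidden cause, i.e. do(action) rather than merely observing it.
        hits = trials = 0
        for _ in range(n):
            hidden = random.random() < 0.5           # unobserved common cause
            action = hidden if intervene is None else intervene
            outcome = hidden                         # outcome depends only on the cause
            if action:
                trials += 1
                hits += outcome
        return hits / trials

    print(p_outcome_given_action_on())                # ~1.0: P(outcome | action observed on)
    print(p_outcome_given_action_on(intervene=True))  # ~0.5: P(outcome | do(action on))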
bumby 67 days ago [-]
I'm not sure I'm understanding your stance fully, so please forgive any poor interpretation.
>certain, deterministic, non-bayesian primitive model of our bodies
What makes you certain the model of our body is non-Bayesian? Does this imply we have an innate model of our body and how it operates in space? I could be convinced that babies don't inherently have a model of their bodies (or that they control their bodies) and it is a learned skill. Possibly learned through some pseudo Bayesian process. Heck, the unathletic among us adults may still be updating our Bayesian priors with our body model, given how often it betrays our intentions :-)
mjburgess 67 days ago [-]
Because bayesian conditioning doesn't resolve the direction of causation, and gives no way of getting 'certain' data, which is an assumption of the method (as is assuming relevance).
In bayesian approaches it's assumed we have some implicit metatheory which gives us how the data relates to the model, so really all bayesian formulae should have an implicit 'Theory' condition which provides, eg., the actual probability value:
P(Model|Evidence, Theory(Model, Evidence))
The problem is there's no way of building such a theory using bayesianism, it ends in a kind of obvious regress: P(P(P(M|E, T1)|T2)|T3,...)
What theory provides the meaning of 'the most basic data'? ie., how it relates to the model? (and eg., how we compute such a probability).
The answer to all these problems is: the body. The body resolves the direction of causation, it also bootstraps reasoning.
In order to compute P(ShapeOfCup|GraspOnCup, Theory(Grasp, Shape)), I first (in early childhood) build such a theory by computing P(ShapeSensation|do(GraspMovement), BasicTheory(BasicMotorActions, BasicSensations)).
Where 'do' is non-bayesian conditioning, ie., it denotes the probability distribution which arises specifically from causal intervention. And "BasicTheory" has to be in-built.
In philosophical terms, the "BasicTheory" is something like Kant's synthetic a priori -- though there's many views on it. Most philosophers have realised, long before contemporary stats, that you cannot resolve the under-determination of theory by evidence without a prior theory.
bumby 67 days ago [-]
Is the extension of your position that we are born with a theory of the body, irrespective of experience? How does that relate to the psychological literature where babies seem to lack a coherent sense of self? I.e., they can't differentiate what is "body" and what is "not body"?
If it's an ability that later develops independent of experience with the exterior world, it seems untestable. I.e., how can you test the theory without a baby being in the world in the first place?
mjburgess 67 days ago [-]
It might be that it's vastly more minimal than it appears I'm stating. I already agree with the high adaptability of the motor system -- indeed, that's a core part of my point, since it's this system which does the heavy lifting of thinking.
Eg., it might be that the kind of "theory" which exists is un/pre-conscious. So that it takes a long time, comparatively, for the baby to become aware of it. Until the baby has a self-conception it cannot consciously form the thought "I am grasping" -- however, consciousness imv is a derivative-abstracting process over-and-above the sensory motor system.
So the P(Shape|do(Grasp), BasicTheory(Grasp, Shape)) actually describes something like a sensory-motor 'structure' (eg., a distribution of shapes associated with sensory-motor actions). The proposition that "I am grasping" which allows expressing a propositional confidence requires (self-)consciousness: P(Shape|"I have grasped", Theory(Grasp, Shape)) -- bayesianism only makes sense when the arguments of probability are propositions (since it's about beliefs).
What's the relationship between the bayesian P(Shape|"I have...") and the causal P(Shape|do(Grasp)) ? The baby requires a conscious bridge from the 'latent structural space' of the sensory-motor system to the intentional belief-space of consciousness.
So P(Shape|do(Grasp)) "consciously entails" P(Shape| "I have..") iff the baby has developed a theory, Theory(MyGrasping|Me)
But, perhaps counter-intuitively, it is not this theory which allows the baby to reliably compute the shape based on knowing "it's their action". It's only the sensory-motor system which needs to "know" (metaphorically) that the grasping is of the shape.
Maybe a better way of putting it, then, is that the baby requires a procedural mechanism which (nearly) guarantees that its actions are causally associated with its sensations, such that its sensations and actions are in a reliable coupling. This 'reliable coupling' has to provide a theory, in a minimal sense, of how likely/relevant/salient/etc. the experiences are given the actions.
It is this sort of coupling which allows the baby, eventually, to develop an explicit conscious account of its own existence.
bumby 67 days ago [-]
I think that makes sense as a philosophical thought, but do you think it's testable to actually tell us anything about the human condition?
E.g., If motor movement and causal inference are coupled, would you expect a baby born with locked in syndrome to have a limited notion of self?
mjburgess 67 days ago [-]
Probably one of the most important muscles is in the eye. If all the muscles of the body are paralysed from birth, yes, no concepts would develop.
This is not only testable, but central to neuroscience and, I'd claim, to any actual science of intelligence -- rather than the self-aggrandising CS mumbo jumbo.
On the testing side, you can lesion various parts of the sensory-motor system of mice, run them in various maze-solving experiments under various conditions (etc.) and observe their lack of ability to adapt to novel environments.
cfiggers 67 days ago [-]
> If you compute P(AllPossibleModels|AllSensoryInput) you end up never learning anything.
...whoa. That makes complete sense.
So you're saying that there must be some form of meta-rationality that gives cues to our attempts at Bayesian reasoning, directing those attempts how to make selections from each set (the set of all possible models and the set of all sensory inputs) in order to produce results that constitute actual learning.
And you're suggesting that in animals and humans at least, the feedback loop of our embodied experience is at least some part of that meta-rationality.
That's an incredible one-liner.
mjburgess 67 days ago [-]
In order to think, we move.
rhelz 68 days ago [-]
The impossibility of solving the Gettier problem meshes nicely with the recent trend toward Bayesianism and Pragmatism. Instead of holding out for justified true beliefs and "Bang-Bang" labeling them either True or False, give them degrees of belief which are most useful for prediction and control.
ars 67 days ago [-]
I don't understand the Gettier problem. The example of the cow for example: You do not have a justified belief there is a cow there, all you can justify is that there is the likeness of a cow there.
To be able to claim there is a cow there requires additional evidence.
rhelz 67 days ago [-]
The cow example is a confusing example; I like the clock example much better. You are in a school building, with hundreds of classrooms, and there is a clock on each wall. All of the clocks are working perfectly, except for one classroom where the clock is stuck at 2:02.
Every other time you've been in that school building, the clocks have shown you the right time, so you feel very confident that the clocks on the wall are accurate.
But this time, you happen to be in the room with the non-functioning clock. It says "2:02" but by great good fortune, it actually happens to be 2:02.
So your belief is:
1. True. It actually is 2:02.
2. Justified. The vast majority of the time, if you see a clock on a wall in that building, it is working fine.
But should we say that you know the time is 2:02? Can you get knowledge of the time from a broken clock? Of course not. You just got lucky.
In order to count as knowledge, it has to be justified in the right way, which, alas, nobody has been able to specify exactly what that way should be. So far, nobody has come up with criteria that we can't break in a similar way.
// all you can justify is that there is the likeness of a cow there //
If you see something which looks real, you are justified in believing it is real. If you see your friend walking into the room, sure, you've seen your friend's likeness in the room. But you are justified in believing your friend is in the room.
So if you see something that looks like a cow in a field, you are justified in believing there is a cow in a field, even though looks may be deceiving.
ars 66 days ago [-]
> In order to count as knowledge, it has to be justified in the right way, which, alas, nobody has been able to specify exactly what that way should be.
First of all, you have to be able to test your knowledge: you would test that the clock is correct for every minute of the day. If you missed any minutes then your knowledge is incomplete; you instead have probable knowledge (using the same methods that physics uses to decide whether an experimental result is real, you can assign a probability that the clock is correct).
Also, since when is knowledge absolute? You can never be completely certain about any knowledge, you can only assign (or try to assign) a probability that you know something, and testing your belief greatly increases the probability.
(PS. Thank you for the reply.)
mistermann 67 days ago [-]
> You do not have a justified belief there is a cow there, all you can justify is that there is the likeness of a cow there.
Is this assertion not self-refuting though?
skybrian 67 days ago [-]
This seems to essentially be saying that coincidences will happen and if you’re fooled by them, sometimes it’s not your fault - they are “justified.” But they may be caused by enemy action: who put that decoy cow there? I guess they even made it move a little?
How careful do you have to be to never be fooled? For most people, a non-zero error rate is acceptable. Their level of caution will be adjusted based on their previous error rate. (Seen in this sense, perfect knowledge in a philosophical sense is a quest for a zero error rate.)
In discussions of how to detect causality, one example is flipping a light switch to see if it makes the light go on and off. How many flips do you need in order to be sure it’s not coincidence?
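A back-of-the-envelope answer under a deliberately dumb null hypothesis (Python; it assumes the light ignores the switch entirely and is independently on or off with probability 0.5 each time you look):

    # Probability that a light which ignores the switch happens to match the
    # switch position on every one of n flips, purely by coincidence.
    for n in (1, 3, 5, 10, 20):
        print(f"{n:>2} matching flips -> coincidence probability {0.5 ** n:.6f}")

At ten matching flips the coincidence explanation is already below one in a thousand, which is roughly where most people stop flipping.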
bhickey 67 days ago [-]
> How careful do you have to be to never be fooled?
This is where Contextualism comes into play. Briefly, your epistemic demands are determined by your circumstances.
> “I started poking around, only to discover that I couldn't seem to get the correct behavior back. No matter what code I changed, which lines I commented out”
Hello darkness my old friend…
pugworthy 67 days ago [-]
When wondering if there was some dumb programming joke about "settiers" to go along with "gettiers", it dawned on me that there's a certain gettier nature to Get/Set encapsulation.
Suppose you've got a class library with no source, and the documentation defines a get method for some calculated value. But suppose that what the get method actually does is return an incorrectly calculated value. You're not getting the right calculated value, but you're getting a calculated value nonetheless. Then finally suppose that the same class also contains the correct calculation, but only in unreachable code or an undocumented method.
On the one hand, you have a justified true belief that "the getter returns a calculated value": (1) you believe the getter returns a value; (2) that belief didn't come from nowhere, but is justified by you getting values back that look exactly like calculated values; (3) and the class does, in fact, have code in it to return a correctly calculated value.
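A hypothetical sketch of that setup in Python (the Invoice class and its methods are invented for illustration):

    class Invoice:
        def __init__(self, items):
            self.items = items  # list of (price, quantity) pairs

        def get_total(self):
            # The documented getter: it genuinely returns a calculated value,
            # but the calculation is wrong (quantity is ignored).
            return sum(price for price, _qty in self.items)

        def _correct_total(self):
            # The correct calculation does exist in the class, but only in this
            # undocumented method that nothing ever calls.
            return sum(price * qty for price, qty in self.items)

    inv = Invoice([(10.0, 3), (2.5, 4)])
    print(inv.get_total())       # 12.5 -- a calculated value, just not the right one
    print(inv._correct_total())  # 40.0 -- the value the documentation led you to expect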
recursive 67 days ago [-]
Seems somehow related to "parallel" construction of evidence.
verisimi 67 days ago [-]
It absolutely is. A fake cow or whatever, provides the evidence for justified belief.
w10-1 68 days ago [-]
The "programmer's model" is their mental model of what's happening. You're senior and useful when you not only understand the system, but can diagnose based on a symptom/bug what aspect of the system is implicated.
But you're staff and above when you can understand when your programming model is broken, and how to experiment to find out what it really is. That almost always goes beyond the specified and tested behaviors (which might be incidentally correct) to how the system should behave in untested and unanticipated situations.
Not surprisingly, problems here typically stem from gaps in the programming model between developers or between departments, who have their own perspective on the elephant, and their incidence in production is an inverse function of how well people work together.
hibikir 68 days ago [-]
You are defining valid steps in understanding of software, but attaching them to job titles is just going to lead to very deceptive perspectives. If your labeling were accurate, every organization I've ever worked at would have at least triple the number of staff engineers it does.
kreyenborgi 67 days ago [-]
The non-bovine examples are in a way more complex (but also more common), since they involve multiple possible causes for an event. In software engineering, bugs and outages and so on are not just caused by lack of testing, but also by lack of failsafes, lack of fallbacks, coding on a Monday morning, cosmic background radiation, too many/few meetings, etc. etc. And it's hard to pinpoint "the cause" (but perhaps we shed some light on a graph of causes, some of which may be "blocked" by other causes standing in the way).
Are there any good examples of gettiers in software engineering that don't rely on understanding causality, where we're just talking about "what's there" not explaining "how it got there"?
yak90 67 days ago [-]
I would even say the two real-life examples given in the blog are not Gettier cases at all.
Gettier is about the cause of the "knowledge," not about knowledge of the cause.
For the autofocus example, if the statement in question was "my patch broke the autofocus," it would not be Gettier because it is not true (the unrelated pushed changes did);
if the statement in question was "my PR broke the autofocus," it would not be Gettier because it is JTB, and the justification (it was working before the PR, but not after) is correct, i.e., the cause of the belief, the perception, and deduction, are correct;
Same if the statement in question was "the autofocus is broken."
It would be Gettier if the person reporting the bug was using an old (intact) version of the app but was using Firefox with a website open in another window on another screen, which was sending alerts stealing the focus.
The most common example of true Gettier cases in software dev is probably the following:
A user reports a bug but is using an old version, and while the new version should have the bug fixed, it's still there.
The statement is "the current version has the bug."
The reporter has Justified Belief because they see the bug and recently updated, but the reporter cannot know, as they are not on the newest version.
williamdclt 67 days ago [-]
Physics has kinda-solved what it means to know something.
- JTB is not enough, for something to be “true” it needs _testability_. In other words, make a prediction from your knowledge-under-test which would be novel information (for example, “we’ll find fresh cow dung in the field”).
- nothing is really ever considered “true”, there’s only theories that describe reality increasingly correctly
In fact, physics did away with the J: it doesn’t matter that your belief is justified if it’s tested. You could make up a theory with zero justification (which doesn’t contradict existing knowledge ofc), make predictions and if they’re tested, that’s still knowledge. The J is just the way that beliefs are formed (inference)
PaulRobinson 67 days ago [-]
"Testability" to me sounds like a type of "justified" - I can be justified for many reasons, and testability is just one of those. But there are reasons where I might be justified but where testability is impossible.
For example, if I toss a coin and it comes up heads, put the coin in my pocket and then go about my day, and later on say to somebody "I tossed a coin earlier, and it came up heads", that is a JTB, but it's not testable. You might assume I'm lying, but we're not talking about whether you have a JTB in whether I tossed a heads or not, we're talking about if I have one.
There are many areas of human experience where JTB is about as good as we are going to get, and testability is off-limits. If somebody tells me they saw an alien climb out of a UFO last night, I have lots of reasons to not believe them, but if this a very trustworthy individual who has never lied to me about anything in my decades of experience of knowing them, I might have a JTB that they think this is true, even if it isn't. But none of it is testable.
Physics - the scientific method as a whole - is a superb way to think about and understand huge swathes of the World, but it has mathematically proven limits, and that's fine, but let's not assume that just because something isn't testable it can't be true.
diggan 67 days ago [-]
Why did the physicist stop hanging out with philosophers?
Because every time they said, "I've found the absolute truth," the philosophers just replied, "Only in your frame of reference!"
versteegen 67 days ago [-]
Testability as you describe it seems to give you more than just knowledge, but also some amount of understanding: understanding of consequences (not necessarily understanding of causes) — you mentioned the ability to make predictions about the consequences of actions (e.g. 'tests'). (Aside: it seems that you can say you know something, it's a narrow enough concept to be sharp, while understanding something can only ever be true to a degree: it's broad without limit!)
But you may have conflated 'testability' and 'tested'. Can I know there is a cow in the field if I don't check? Seeing it was already evidence, testing just collects more evidence, so how can that matter? Should we set a certainty threshold on knowledge? Could be reasonable.
Maybe prediction-making is too strong to be necessary for 'knowing', if we allow knowing some fact in a domain of knowledge of which you're otherwise clueless. Although very reasonable to not call this knowledge. Suppose I learn of a mathematical theorem in a field that's so unfamiliar that I can't collect evidence to independently gain confidence in it.
mistermann 67 days ago [-]
> JTB is not enough, for something to be “true” it needs _testability_
How could something become true in the first place such that it could be tested to discover that it is true, if the test precedes and is a condition for truth?
Do you have tests I can run on each of your many assertions here that prove their truth?
jayd16 68 days ago [-]
Hmm, are there better cases that disprove JTB? Couldn't one argue that the reliance on a view that can't tell papermache from a cow is simply not a justified belief?
Is the crux of the argument that justification is an arbitrary line and ultimately insufficient?
These are correct but contrived and unrealistic, so later examples are more plausible (e.g. being misled by a mislabelled television program from a station with a strong track record of accuracy).
The point is not disproving justified true belief so much as showing the inadequacy of any one formal definition: at some point we have to elevate evidence to assumption and there's not a one-size-fits-all way to do that correctly. And, similarly to the software engineering problems, a common theme is the ways you can get bitten by looking at simple and seemingly true "slices" of a problem which don't see a complex whole.
It is worth noting that Gettier himself was cynical and dismissive of this paper, claiming he only wrote it to get tenure, and he never wrote anything else on the topic. I suspect he didn't find this stuff very interesting, though it was fashionable.
abeppu 68 days ago [-]
I like the example of seeing a clock as you walk past. It says it's 2:30. You believe that the time is 2:30. That seems like a perfectly reasonable level of justification -- you looked at a clock and read the time. If unbeknownst to you, that clock is broken and stuck at 2:30, but you also just happened to walk by and read it at 2:30, then do you "know" that it's 2:30?
I think a case can't so much "disprove" JTB, so much as illustrate that adopting a definition of knowledge is more complex than you might naively believe.
dherls 68 days ago [-]
I was thinking that one solution might be to specify that the "justification" also has to be a justified true belief. In this case, the justification that you see a cow isn't true, so it isn't a JTB.
Of course that devolves rapidly into trying to find the "base case" of knowledge that is inherent
These things happen very often in programming because programming has feedback loops.
If there's a bug - things on other levels will adapt to that bug, creating a "gettier" waiting to happen.
Another feedback-related concept is false independence. Imagine a guy driving a car over a hilly road, keeping it at the 90 mph speed limit. The speed of his car is not correlated with the position of his foot on the gas pedal (it's always 90 mph). On the other hand, the position of the gas pedal and the angle of the road are correlated.
This example is popular in macroeconomics (to explain why central bank interest rates and inflation might seem to be independent).
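A small simulation of the driver example in Python, with made-up numbers: the driver compensates for every hill to hold 90 mph, so the pedal tracks the slope almost perfectly while telling you nothing about the speed.

    import math, random

    def corr(xs, ys):
        # Plain Pearson correlation, no libraries needed.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    slopes, pedals, speeds = [], [], []
    for _ in range(1000):
        slope = random.uniform(-5, 5)                # road grade, degrees
        pedal = 50 + 8 * slope + random.gauss(0, 1)  # driver compensates for the hill
        speed = 90 + random.gauss(0, 0.2)            # ...so speed barely moves
        slopes.append(slope); pedals.append(pedal); speeds.append(speed)

    print("corr(pedal, speed):", round(corr(pedals, speeds), 2))  # ~0.0
    print("corr(pedal, slope):", round(corr(pedals, slopes), 2))  # ~1.0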
ern 67 days ago [-]
Would a merge instead of a rebase have made it easier to find the bug? (Serious question)
salomonk_mur 67 days ago [-]
Yes, most likely. Rebasing hides the fact that the two changes happened separately; a merge would make it much easier to see the different avenues that may lead to the bug.
We purposefully try not to do rebases in my team for this reason.
IAmLiterallyAB 67 days ago [-]
I'm not sure I follow, do you have an example where merging makes things more obvious?
Izkata 67 days ago [-]
In this case it would not, you can see the commits between tag A and tag B either way. He simply didn't bother checking, either that there was no bug after rebasing or that the commits he'd rebased onto hadn't been released yet.
onursurme 65 days ago [-]
Today, I encountered an issue where the modal's close button appeared to be unresponsive. Upon reviewing the modal and the button, everything seemed to be set up correctly, yet clicking the button didn’t close the modal. I later realized that multiple instances of the same modal had been opened. When I clicked the close button, only the topmost modal closed, leaving the others still visible underneath.
JohnMakin 68 days ago [-]
I wasn’t aware there was a term for this or that this was not common knowledge - for me I refer to them as “if I fix this, it will break EVERYTHING” cases that come up in my particular line of work frequently, and my peers generally tend to understand as well. Cause/effect in complex systems is of course itself complex, which is why the first thing I typically do in any environment is set up metrics and monitoring. If you have no idea what is going on at a granular level, you’ll quickly jump to bad conclusions and waste a lot of time aka $.
JackFr 68 days ago [-]
I’ve come across (possibly written) code that upon close examination seems to only work accidentally — that there are real failures which are somehow hidden by behavior of other systems.
The classic and oft heard “How did this ever work?”
JohnMakin 68 days ago [-]
I think this stuff is really funny when I find it and I have a whole list of funniest bugs like this I have found. Particularly when I get into dealing with proxies and response/error handling between backend systems and frontend clients - sometimes the middle layer has been silently handling errors forever, in a way no one understood, or the client code has adapted to them in a way where fixing it will break things badly - big systems naturally evolve in this way and can take a long time to ever come to a head. When it does, that’s when I try to provide consulting, lol.
macintux 67 days ago [-]
Many years ago I was grading my students’ C programs when I found a program that worked without global variables or passing parameters. Instead, every function had the same variables declared in the same order.
QuercusMax 68 days ago [-]
In at least a few cases I can think of, the answer was almost definitely "it actually never did work, we just didn't notice how it was broken in this case".
K0balt 68 days ago [-]
This is horrifying, and needs a trigger warning lol. It gave me a sense of panic to read it. It’s always bad when you get so lost in the codebase that it’s just a dark forest of hidden horrors.
When this kind of thing tries to surface, it’s a warning that you need to 10x your understanding of the problem space you are adjacent to.
JohnMakin 68 days ago [-]
I guess I’ve worked in a lot of ancient legacy systems that develop over multiple decades - there’s always haunted forests and swaths of arcane or forgotten knowledge. One time I inherited a kubernetes cluster in an account no one knew how to access and when finally hacking into it discovered troves of crypto mining malware shit. It had been serving prod traffic quietly untouched for years. This kind of thing is crazy common, I find disentangling these types of projects to be fun, personally, depending on how much agency I have. But I’m not really a software developer.
namuol 68 days ago [-]
The surest way to get yourself into a mess like this is to assume that a sufficiently complex codebase can be deeply understood in the first place.
By all means you can gain a lot by making things easier to understand, but only in service of shortcuts while developing or debugging. But this kind of understanding is not the foundation your application can safely stand on. You need detailed visibility into what the system is genuinely doing, and our mushy brains do a poor job of emulating any codebase, no matter how elegant.
K0balt 64 days ago [-]
Yeah, if you get over a thousand lines of code you need to be building and documenting it in a way that makes it intelligible in a modular way.
FP can be good for that but I often find that people get so carried away with the pure notion of functional code that they forget to make it obvious in its design. Way, way too much “clever” functional code out there.
The data structures are the key for many things, but a lot of software is all about handling side effects, where basically everything you touch is an input or an output with real world, interrelated global state.
That’s where correctly compartmentalising those state relationships and ample asserts or fail-soft/safe code practices become key. And properly descriptive variable names and naming conventions, with sparse but deep comments where it wasn’t possible to write the code to be self documented by its obvious nature.
oersted 67 days ago [-]
It's likely that I'm wrong, I need to look deeper into it.
But isn't the paper-mache cow case solved by simply adding that the evidence for the justification also needs to be true?
The definition already requires the belief to be true (that's a whole other rabbit hole), but assuming that's valid, it's rather obvious that if your justification is based on false evidence then it is not justified; if it's true by dumb luck, of course it doesn't count as knowing it.
EDIT: Okay I see how it gets complicated... The evidence in this case is "I see something that looks like a cow", which I guess is not false evidence? Should your interpretation of the evidence be correct? Should we include into the definition that the justification cannot be based on false assumptions (existing false beliefs)? I can see how this would lead to more papers.
EDIT: I have read the paper and it didn't really change my view of the problem. I think Gettier is just using a sense of "justified" that is somewhat colloquial and ill defined. To me a proposition is not justified if it is derived from false propositions. This kind of solves the whole issue, doesn't it?
To Gettier it is more fuzzy, something like having reasonably sufficient evidence, even if it is false in the end. More like "we wouldn't blame him for being wrong about that, from his point of view it was reasonable to believe that".
I understand that making claims of the absolute truthfulness of things makes the definition rather useless, we always operate on incomplete evidence, then we can never know that we know anything (ah deja vu). But Gettier is not disputing the part of the definition that claims that the belief needs to be true to be known.
EDIT: Maybe the only useful definition is that know = believe, but in speech you tend to use "he knows P" to hint that you also believe P. No matter the justification or truthfulness.
EDIT: I guess that's the whole point that Gettier was trying to make: that all accepted definitions at the time were ill-defined, incomplete and rather meaningless, and that we should look at it closer. It's all quite a basic discussion on semantics. The paper is more flamebait (I did bite) than a breakthrough, but it is a valid point.
kijin 67 days ago [-]
Indeed, there needs to be some sort of connection between the truth and the justification, not just "true && justified".
The problem is that when you're working at such a low level as trying to define what it means to know something, even simple inferences become hellishly complicated. It's like trying to bootstrap a web app in assembly.
csours 67 days ago [-]
Puttiers: When a junior engineer fixes something, but a different error is returned, so they cannot tell if progress was made or not.
***
justified: in the sense of deriving from evidence
true: because it doesn't make sense to "know" a falsehood
belief: i.e., a proposition in your head
***
Justified: there is an error message
true: there is an error condition
belief: the engineer observes the message and condition
I believe that Schrödinger's cat also applies to software bugs. Every time I go looking, I find bugs that I don't believe existed until I observed them.
FigurativeVoid 67 days ago [-]
I have a similar belief, but only when it comes to bugs that make me look foolish.
The more likely a bug is to make me look dumb, the more likely it is to appear only once I ask for help.
orbisvicis 67 days ago [-]
The bugs that always disappear when I try to demonstrate them to others - my favorite. Reminds me of the "Tom Knight and the Lisp Machine" koan. This is largely true, but I remember a failing piece of authentication hardware that I didn't understand but was convinced was failing. Every time I called someone over it would work, so I couldn't get it replaced. Eventually the failure rate got so high that everyone agreed the damn thing had failed, "dead as a brick". But until then I was SoL. What are the chances that something 99% of the way towards "dead as a brick" would always work in front of a superior?
cratermoon 67 days ago [-]
A question I like to ask my colleagues: suppose you have a program that passes all the tests. Suppose also that in that program there is a piece of code that performs an operation incorrectly. The result of that operation is used in another part of the code that also performs an operation incorrectly, but in such a way that the tested outcome is correct.
Does the code have 0 defects, 1 defect, or 2 defects?
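A deliberately contrived Python sketch of the two-wrong-steps-cancel situation (all names invented for illustration):

    def last_index(items):
        # Defect 1: returns the length, not the index of the last element (off by one).
        return len(items)

    def element_at(items, i):
        # Defect 2: meant to return items[i], actually returns items[i - 1].
        return items[i - 1]

    def test_last_element():
        # The two off-by-one errors cancel, so the tested outcome is correct.
        assert element_at(["a", "b", "c"], last_index(["a", "b", "c"])) == "c"

    test_last_element()
    print("all tests pass")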
userbinator 67 days ago [-]
I can understand 0 and 2, but what's 1? The "I don't know how to count" answer?
Izkata 67 days ago [-]
The second part could instead be more like "detects an edge case and handles it, accidentally working around the first defect".
cratermoon 67 days ago [-]
It could be counted as a single defect because both pieces of code must be corrected to keep the test happy.
mihaic 67 days ago [-]
After Godel published his landmark incompleteness proof, that a logical system can't be complete and also without any internal inconsistencies, I would have expected this to trickle into philosophical arguments of this type.
I see no practical usefulness in all of these examples, except as instances of the rule that you can get correct results from incorrect reasoning.
dist-epoch 67 days ago [-]
Philosophy is quite far away from pure math for Godel's argument to really matter.
mihaic 67 days ago [-]
Why though? You lose quite a bit of credibility when you say that theorems that apply to any system of logic don't apply to you in any way.
rnhmjoj 67 days ago [-]
Note that Gödel's incompleteness theorems do not apply to just any system of logic: they are about particular formal systems that can prove certain facts about the arithmetics of integers. So, for them to fail, it doesn't even take a non-mathematical formal system, just something that has nothing to do with natural numbers, for example, Euclidean geometry, which happens to be fully decidable.
mjburgess 67 days ago [-]
Godel's theorem is irrelevant to systems of concepts, conceptual analysis, or theorising in general. It's narrowly about technical issues in logic.
It has been "thematically appropriated" by a certain sort of pop-philosophy, but it says nothing relevant.
Philosophy isn't the activity of trying to construct logical embeddings in deductive proofs. If anyone ever thought so, then there's some thin sort of relevance, but no one ever has.
dist-epoch 67 days ago [-]
If philosophy were just about logic, it would be called math, wouldn't it?
But it's also about fuzzy stuff which doesn't follow the A or not A logic.
thewileyone 66 days ago [-]
Recently experienced this. Business users were reporting an issue in a system and were demanding a bug fix. After reviewing their observations and stepping through the issue together, it turned out the root cause was something completely unexpected that was running in the background.
Saved the tech team from wasting time chasing a wild goose.
oersted 67 days ago [-]
The link to the paper seems to be down, here's an alternative one.
This makes me think of the "Person unimpressed by Place | person amazed by Place (Japan)" meme. In this case it feels like "Person unimpressed by basic concept | person amazed by basic concept (Philosophy)".
lifeisstillgood 67 days ago [-]
Oh, this is a much better and more useful idea than some other common ones like “yak shaving” or DRY
Love it
akoboldfrying 67 days ago [-]
I think those other two are also very useful. I've actually had a lot of traction with introducing "yak shaving" in everyday life situations to non-programmers -- it applies to all kinds of things.
EDIT: Deleted paragraph on DRY that wasn't quite right.
wslh 68 days ago [-]
You can also download the paper from [1] since the link on the article seems unavailable.
From the examples he mentions, aren't these just "race conditions"?
conformist 67 days ago [-]
This is very common in finance. Knowing whether finance research that made correct predictions with good justifications falls into the "Gettier category" is extremely hard.
tshaddox 67 days ago [-]
Gettier cases are fun, although the infinite regress argument is a much clearer way to show that JTB is a false epistemology.
barrystaes 68 days ago [-]
Well this is a roundabout way of justified thinking about a belief that just happens to align with some actual facts..
barrystaes 68 days ago [-]
On a more serious note: populist politicians seem to like making Gettier claims; they cost a lot of time to refute and are free publicity. Aka the worst kind of fake news.
mistermann 68 days ago [-]
A rather ambitious claim considering the context!
lapphi 67 days ago [-]
Politicians all across the globe know and utilize this concept
Meh, these are just examples of the inability to correctly root-cause issues. There is a good lesson in here about the real cause being lack of testing (the teammate’s DOM change should never have been merged) and lack of monitoring (the upstream mail provider failure should have been setting off alerts a long time ago).
The changes were merely adjacent to the causes, and that’s super common in any system that has a few core pieces of functionality.
I think the core lesson here is that if you can’t fully explain the root cause, you haven’t found the real reason, even if it seems related.
DJBunnies 68 days ago [-]
Yeah why did he rebase unreleased code?
And the “right” RC only has to be right enough to solve the issue.
abathologist 67 days ago [-]
The hubbub around the Gettier paper is surely among philosophy's most shameful moments.
therein 67 days ago [-]
I just called it a red herring or a false signal and everyone understood what I meant.
nighthawk454 67 days ago [-]
Seems like the terminology of calling it a ‘true’ belief led to some confusion. Of course there is a huge difference between evidence and proof. Correlation is not causation, Gödel’s incompleteness theorem, all abstractions are leaky, etc.
Desperation to ‘know’ something for certain can be misleading when coincidence is a lot more common than proof.
Worse yet is extending the feeling of ‘justified’ to somehow ‘lessen’ any wrongness, perhaps instead of a more informative takeaway.
silent_cal 67 days ago [-]
One way out is to just admit that "justified true belief" is not a satisfactory definition of knowledge. I know that's not really the point of this article, I'm just saying.
d--b 67 days ago [-]
The original name of Genius was RapExegesis, which only confirms the main point of the article: that Genius' founders were mostly pedantic Yale grads. /s
It's a dull and uninteresting question whether it's knowledge. What do you want to use the fact of it being knowledge or not for? Will you trust stuff that you determine to be knowledge and not other things? Or is it about deciding legal court cases? Because then it's better to cut out the middleman and directly try to determine whether it's good to punish something or not, without reference to terms like "having knowledge".
The issue that Gettier & friends are pointing to is that there are no examples where there is enough evidence. So under the formal definition it isn't possible to have a JTB. If you've seen enough evidence to believe something, maybe you've misinterpreted the evidence but still came to the correct conclusion. That scenario can play out at any evidence threshold. All else failing, maybe you're having an episode of insanity and all the information your senses are reporting is wild hallucination, but some of the things you imagine happening are, nonetheless, happening.
I could just as easily construct a problem in which I quietly turn off your background, which would mean your Zoom partner does possess knowledge while you do not, even though now it is you who thinks he does.
But none of that is actually true. Especially the part where it will have some sort of meaningful impact if we can just nail it down, let alone whether it would be beneficial or not.
There are many definitions of knowledge. Take the perspective where you only know something if you are 100% sure of it and also abstractly "correct" (I say "abstract" because the whole problem in the first place is that we all lack access to an oracle that will tell us whether or not we are correct about a fact like "is there a cow in the field?", so making this concrete is not possible). There we end up in a very Cartesian place where just about all you "know" is that you exist. There are some interesting things to be said about this definition, and it's an important one philosophically and historically, but... it also taps out pretty quickly. You can only build on "I exist" so far before running out of consequences; you need more to feed your logic.
From another perspective, if we take a probabilistic view of "knowledge", it becomes possible to say "I see a cow in the field, I 'know' there's a cow there, by which I mean, I have good inductive reasons to believe that what I see is in fact a cow and not a papier-mâché construct of a cow, because inductively the probability that someone has set up a papier-mâché version of a cow in the field is quite low." Such knowledge can be wrong. It isn't just a theoretical philosophy question either; I've seen things set up in fields as a joke, scarecrows good enough to fool me on a first glance, lawn ornamentation meant to look like people as a joke that fooled me at a distance, etc. It's a real question. But you can still operate under a definition of knowledge where I still had "knowledge" that a person was there, even though the oracle of truth would have told me that was wrong. We can in fact build on a concept of "knowledge" that "limits" to the truth, but doesn't necessarily ever reach there. It's more complicated, but also a lot more useful.
And I'm hardly exhausting all the possible interesting and useful definitions of knowledge in those two examples. And the latter is a class of definitions, not one I nailed down entirely in a single paragraph.
Again, I wouldn't accuse the most-trained philosophers of this in general, but the masses of philosophers also tend to spend a lot of time spinning on "I lack access to an oracle of absolute truth". Yup. It's something you need to deal with, like "I think, therefore I am, but what else can I absolutely 100% rigidly conclude?", but it's not very productive to spin on it over and over, in manifestation after manifestation. You don't have one. Incorporate that fact and move on. You can't define one into existence. You can't wish one into existence. You can't splat ink on a page until you've twisted logic into a pretzel and declared that it is somehow necessary. If God does exist, which I personally go with "Yes" on, but either way, He clearly is not just some database to be queried whenever we wonder "Hey, is that a cow out there?" If you can't move on from that, no matter how much verbiage you throw at the problem, you're going to end up stuck in a very small playground. Maybe that's all they want or are willing to do, but it's still going to be a small playground.
Also: how did you come to know all the things you claim to in your comment (and I suspect in a few others in your history)?
Whether you agree or disagree is a separate matter and something you can discuss or ponder for 5 minutes. The article is about taking a somewhat interesting concept from philosophy and applying it to a routine software development scenario.
For example, it feels like we have free will to many people, but the meaning is hard to pin down, and there are all sorts of arguments for and against that experience of being able to freely choose. And what that implies for things like punishment and responsibility. It's not simply an argument over words, it's an argument over something important to the human experience.
There's been some progress science must have missed out on then:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8207024/
That is one organization; many others claim they've also achieved the impossible.
Out of curiosity, do you realize I am arguing from a much more advantageous position, in that I only have to find one exception to your popular "scientific organizations don't claim" meme (which I (and also you) can directly query on Google, and find numerous instances from numerous organizations), whereas you would have had to undertake a massive review of all communications (and many forms of phrasing) from these groups, something we both know you have not done?
A portion of the (I doubt intentional or malicious) behavior is described here:
https://en.m.wikipedia.org/wiki/Motte-and-bailey_fallacy
I believe the flaw in scientists (and their fan base) behavior is mainly (but not entirely) a manifestation of a defect in our culture, which is encoded within our minds, which drives our behavior. Is this controversial from an abstract perspective?
It is possible to dig even deeper in our analysis here to make it even more inescapable (though not undeniable) what is going on here, with a simple series of binary questions ("Is it possible that...") that expand the context. I'd be surprised if you don't regularly utilize this form of thinking when it comes to debugging computer systems.
Heck, I'm not even saying this is necessarily bad policy, sometimes deceit is literally beneficial, and this seems like a prime scenario for it. If I was in power, I wouldn't be surprised if I too would take the easy way out, at least in the short term.
I think there are plenty of philosophical problems that emerge from our desire to describe things in centralized ways. Consciousness, understanding and intelligence are three of them. I prefer "search" because it is decentralized and covers personal, interpersonal and social domains. Search defines a search space, unlike consciousness, which is silent about the environment and other people when we talk about it. Search does what consciousness, understanding and intelligence are for. All mental faculties: attention, memory, imagination, planning - are forms of search. Learning is search for representation. Science is search, markets are search, even DNA evolution and protein folding are search. It is universal and more scientific. Search removes a lot of the mystery and doesn't make the mistake of centralizing itself in a single human.
I try a lot of obvious things when debugging to ascertain the truth. Like, does undoing my entire change fix the bug?
They say never to blame the compiler, and indeed it's pretty much never the compiler. But DNS on the other hand... :-)
(we’ll need a few thousand of these, and the off-the-shelf solution is around 1k vs $1.50 for RYO)
By the way, the RISC-V Espressif ESP32-C3 is a really amazing device for < $1. It’s actually cheaper to go Modbus-TCP over WiFi than to put RS485 on the board with a MAX485 and the associated components. It also does ZIGBEE and BT, and the Espressif libraries for the radio stack are pretty good.
Color me favorably impressed with this platform.
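For illustration, a minimal host-side sketch of "going Modbus-TCP over WiFi", assuming the third-party pymodbus library; the IP address, register layout, and slave id are invented, and keyword names differ a bit between pymodbus versions.

    # Sketch only: poll four holding registers from a WiFi-attached device.
    # Assumes pymodbus 3.x; older releases use `unit=` instead of `slave=`.
    from pymodbus.client import ModbusTcpClient

    client = ModbusTcpClient("192.168.4.20", port=502)  # illustrative device IP
    if client.connect():
        rr = client.read_holding_registers(address=0, count=4, slave=1)
        if not rr.isError():
            print("registers:", rr.registers)
        client.close()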
That's how you get things like equipment operators insisting that you have to adjust the seat before the boot will open.
― Ludwig Wittgenstein
A tool for filling the fields with papier-mache cows.
Cargo culting as a service.
Or for the belief part, well, "it's not a lie if you believe it".
And as for the true bit, let's assume that there really is a cow, but before you can call someone over to verify your JTB, an alien abducts the cow and leaves a crop circle. Now all anyone sees is a paper-mache cow so you appear the fool but did have a true JTB - Schroedinger's JTB. Does it really matter unless you can convince others of that? On the flip side, even if the knowledge is wrong, if everyone agrees it is true, does it even matter?
JTBs only exist to highlight bad assumptions, like being on the wrong side of a branch predictor. If you have a 0.9 JTB but only get the right answer 0.1 of the time and don't update your assumptions, then you have a problem. One statue in a field? Not a big deal! *
* Unless it's a murder investigation and you're Sherlock Holmes (a truly powerful branch predictor).
edit: And also the whole "is knowledge finite or infinite?". Is there ever a point at which we can explain everything, science ends and we can rest on our laurels? What then? Will we spend our time explaining hypotheticals that don't exist? Pure theoretical math? Or can that end too?
A cow-horse hybrid is not a cow, it's a cow-horse hybrid.
A cow with a genetic mutation is a cow with a genetic mutation.
A cow created in a lab, perhaps even grown 100% by artificial means in-vitro is of course still a cow since it has the genetic makeup of a cow.
The word cow is the word cow, its meaning can differ based on context.
Things like this is why philosophers enjoy zero respect from me and why I'm an advocate for abolishing philosophy as a subject of study and also as a profession. Anyone can sit around thinking about things all day. If you spend money on studying it at a university you're getting scammed.
Also, knowledge is finite based purely on the assumption that the universe is finite. An observer outside the universe would be able to see all information in the universe and they would conclude: you can't pack infinite amounts of knowledge into a finite volume.
From “it has the genetic makeup of a cow”, you’re saying that what makes a cow a cow is the genetic makeup. But then what part of that DNA defines the cow? What can vary, and by how much, before a cow stops being a cow?
The point is that you can give any definition of “cow”, and we can imagine a thing that fits this definition yet you’d probably not consider a cow. It’s a reflection on how language relates to reality. Whether it’s an interesting point or not is left to the reader (I personally don’t think it is)
So for any given claim in Philosophy, if you could find a way to either (a) compare it to the world or (b) state it in unambiguous symbolic terms, then we'd stop calling it Philosophy. As a result it seems like the discipline is doomed to consist of unresolvable debates where none of the participants even define their terms quite the same way.
Crazy idea, or no?
"Some ungentle reader will check us here by informing us that philosophy is as useless as chess, as obscure as ignorance, and as stagnant as content. “There is nothing so absurd,” said Cicero, “but that it may be found in the books of the philosophers.” Doubtless some philosophers have had all sorts of wisdom except common sense; and many a philosophic flight has been due to the elevating power of thin air. Let us resolve, on this voyage of ours, to put in only at the ports of light, to keep out of the muddy streams of metaphysics and the “many-sounding seas” of theological dispute. But is philosophy stagnant? Science seems always to advance, while philosophy seems always to lose ground. Yet this is only because philosophy accepts the hard and hazardous task of dealing with problems not yet open to the methods of science—problems like good and evil, beauty and ugliness, order and freedom, life and death; so soon as a field of inquiry yields knowledge susceptible of exact formulation it is called science. Every science begins as philosophy and ends as art; it arises in hypothesis and flows into achievement. Philosophy is a hypothetical interpretation of the unknown (as in metaphysics), or of the inexactly known (as in ethics or political philosophy); it is the front trench in the siege of truth. Science is the captured territory; and behind it are those secure regions in which knowledge and art build our imperfect and marvelous world. Philosophy seems to stand still, perplexed; but only because she leaves the fruits of victory to her daughters the sciences, and herself passes on, divinely discontent, to the uncertain and unexplored."
Not a crazy idea – that is called logic. Which is a field of philosophy. Philosophy and math intersect more than many people think.
Even the Juris Doctor is a branch of philosophy. After all, what is justice?
"Anyone can sit around thinking about things all day" is like saying "anybody can sit and press keys on a keyboard all day".
I took a semester of philosophy at uni, perhaps the best-invested time during my years there and by far more demanding than most of what followed. 100% recommend it for anyone who wants to hone their critical reasoning skills and intellectual development in general.
As a matter of linguistic convenience, it's easier to say that "relativity (or theory X) is right" means that people who use relativity to make predictions make correct predictions, as opposed to relativity itself being correct or incorrect.
On small scales, GR and Newtonian mechanics make almost the same predictions, but make completely different claims about what exists in reality. In my view, if the theories made equally good predictions, but still differed so fundamentally about what exists, then that matters, and implies that at least one of the theories is wrong. This is more a realist, than an instrumentalist position, which perhaps is what you subscribe to, but tbh instrumentalism always seemed indefensible to me.
In that sense, it's also correct to say that physicists have knowledge of relativity and quantum mechanics. I don't think any physicist, including Einstein himself, thought that either theory is actually true, but they do have knowledge of both theories in much the same way that one has knowledge of "Maxatar's conjecture" and in much the way that you have knowledge of what the flat Earth proposition is, despite them being false.
It seems fairly radical to believe that instrumentalism is indefensible, or at least it's not clear what's indefensible about it. Were NASA physicists indefensible to use Newtonian mechanics to send a person to the moon because Newtonian mechanics are "wrong"?
What exactly is indefensible? The observation that working physicists don't really care about whether a physical theory is "real" versus trying to come up with formal descriptions of observed phenomena to make future predictions, regardless of whether those formal descriptions are "real"?
If someone chooses to engage in science by coming up with descriptions and models that are effective at communicating observations and experimental results to other people, and whose results go on to allow for engineering advances in technology, are they doing something indefensible?
>Were NASA physicists indefensible to use Newtonian mechanics to send a person to the moon because Newtonian mechanics are "wrong"?
No, it was defensible, and that's exactly my point. Even though they didn't believe in the content of the theory (and ignoring the fact that they know a better theory), they do have knowledge of reality through it.
I don't think instrumentalism makes sense for reasons unrelated to this discussion. A scientist can hold instrumentalist views without being a worse scientist for it, it's a philosophical position. Basically, I think it's bad metaphysics. If you refuse to believe that the objects described by a well-established theory really exist, but you don't have any concrete experiment that falsifies it or a better theory, then to me it seems like sheer refusal to accept reality. I think people find instrumentalism appealing because they expect that any theory could be replaced by a new one that could turn out very different, and then they see it as foolish to have believed the old one, so they straight up refuse to believe or care what any theory says about reality. But you always believe something, whether you are aware of it or not, and the question is whether your beliefs are supported by evidence and logic.
https://www.wikiwand.com/en/articles/Karl_Popper
read The problem of induction and demarcation: https://www.wikiwand.com/en/articles/Falsifiability
Basically, to sum it all up: because we aren't "omniscient", nothing can in actuality ever be known.
Read this dialogue with ChatGPT to see why:
https://chatgpt.com/share/670e7f9e-d1d0-8001-b1ef-3f4cbc85b9...
It’s a bit long winded and gets into much more detail but I will post ChatGPT’s most relevant response below:
Read the example of the black swan in the wiki link.
"All swans are white."
This statement cannot be proven because it's not possible to observe all swans. There may be some swan in some hidden corner of the earth (or universe) that I did not see.
If I see one black swan, I have falsified that statement.
When you refer to "Not all swans are white": this statement can be proven true, but why? Because the original statement is a universal claim and the negation is a particular claim.
The key distinction between universal claims and particular claims explains why you can "prove" the statement "Not all swans are white." Universal claims, like "All swans are white," attempt to generalize about all instances of a phenomenon. These kinds of statements can never be definitively proven true because they rely on inductive reasoning—no matter how many white swans are observed, there’s always the possibility that a counterexample (a non-white swan) will eventually be found.
In contrast, particular claims are much more specific. The statement "Not all swans are white" is a particular claim because it is based on falsification—it only takes the observation of one black swan to disprove the universal claim "All swans are white." Since black swans have been observed, we can confidently say "Not all swans are white" is true.
Popper's philosophy focuses on how universal claims can never be fully verified (proven true) through evidence, because future observations could always contradict them. However, universal claims can be falsified (proven false) with a single counterexample. Once a universal claim is falsified, it leads to a particular claim like "Not all swans are white," which can be verified by specific evidence.
In essence, universal claims cannot be proven true because they generalize across all cases, while particular claims can be proven once a falsifying counterexample is found. That's why you can "prove" the statement "Not all swans are white"—it’s based on specific evidence from reality, in contrast to the uncertain generality of universal claims.
To sum it up: when I say nothing can be proven and things can only be falsified, it is isomorphic to saying universal claims can't be proven while particular claims can.
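A toy illustration (the observations are invented) of the asymmetry described above: a single counterexample settles the particular claim, while no finite list of white swans could settle the universal one.

    observed_swans = ["white", "white", "black", "white"]

    # Universal claim: falsified by the one black swan in the sample.
    all_swans_are_white = all(colour == "white" for colour in observed_swans)

    # Particular claim: verified by that same single observation.
    not_all_swans_are_white = any(colour != "white" for colour in observed_swans)

    print(all_swans_are_white)      # False
    print(not_all_swans_are_white)  # True

Of course the code can only speak about the swans it has actually seen; had every observed swan been white, the universal claim would merely remain unfalsified, not proven.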
Tumblr is loginwalled now, so I can't find the good version of this, but I'll try and rip it:
Philosophical questions like "what is knowledge" are hard precisely because everyone has an easy and obvious explanation that is sufficient to get them through life.
But, when forced to articulate that explanation, people often find it to be incompatible with other people's versions. Upon probing, the explanations don't hold up at all. This is why some ancient Greek thought experiments can be mistaken for zen koans.
Yeah, you can get by in life without finding a rigorous answer. The vast majority of human endeavor beyond subsistence can be filed under the category "I'm not sure I see the big deal."
To say that about the question of knowledge and then vamp for 200 words is not refusing to engage. It's patching up a good-enough answer to suit a novel challenge and moving on. Which is precisely why these questions are hard, and why some people are so drawn to exploring for an answer.
Ramachandran Capgras Delusion Case
https://www.youtube.com/watch?v=3xczrDAGfT4
> On the flip side, even if the knowledge is wrong, if everyone agrees it is true, does it even matter?
This is a case of consensus reality (an intuition pump I borrowed from somewhere). Consensus reality is also respected in the quantum realm.
https://youtu.be/vSnq5Hs3_wI?t=753
while individual particles remain in quantum superposition, their relative positions create a collective consensus in the entanglement network. This consensus defines the structure of macroscopic objects, making them appear well-defined to observers, including Schrödinger's cat.
In classical logic, statements can be true in and of themselves even if there is no proof of them, but in intuitionistic logic statements are true only if there is a proof of them: the proof is the cause for the statement to be true.
In intuitionistic logic, things are not as simple as "either there is a cow in the field, or there is none" because as you said, for the knowledge of "a cow is in the field" to be true, you need a proof of it. It brings lots of nuance, for example "there isn't no cow in the field" is a weaker knowledge than "there is a cow in the field".
Also, no surprise the rabbit hole came from Haskell, where those types (huh) are attracted to this more foundational theory of computation.
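A small Lean 4 sketch of the asymmetry mentioned above, with the proposition treated as a placeholder: producing a witness constructively gives you "there isn't no cow", while the reverse direction needs a classical axiom.

    -- Constructively fine: a proof of P yields a proof of ¬¬P.
    theorem cow_implies_not_not_cow (P : Prop) (h : P) : ¬¬P :=
      fun hn => hn h

    -- The converse is double-negation elimination; intuitionistic logic
    -- does not provide it, so here we reach for a classical principle.
    theorem not_not_cow_implies_cow (P : Prop) (h : ¬¬P) : P :=
      Classical.byContradiction h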
QED - proof by terminological convention!
That's a problem right there. Maybe that made sense to the Greeks, but it definitely doesn't make any sense in the 21st century. "Knowing" falsehoods is something we broadly acknowledge that we all do.
I think the philosophical claim is that, when we think we know something, and the thing we think we know turns out to be false, what has happened isn't that we knew something false, but rather that we didn't actually know the thing in the first place. That is, not our knowledge, but our belief that we had knowledge, was mistaken.
(Of course, one can say that we did after all know it in any conventional sense of the word, and that such a distinction is at the very best hair splitting. But philosophy is willing to split hairs however finely reason can split them ….)
On Jan 1 2024 I "know" X. Time passes. On Jan 1 2028, I "know" !X. In both cases, there is
(a) something it is like to "know" either X or !X
(b) discernible brain states that correspond to "knowing" either X or !X and that are distinct from "knowing" neither
Thus, even if you don't want to call "knowing X" actually "knowing", it is in just about every sense indistinguishable from "knowing !X".
Also, a belief that we had the knowledge that relates to X is indistinguishable from a belief that we had the knowledge that relates to !X. In both cases, we possess knowledge which may be true or false. The knowledge we have at different times alters; at all times we have a belief that we have the knowledge that justifies X or !X, and we are correct in that belief - it is only the knowledge itself that is false.
You evidently want to use the word "know" exclusively to describe a brain state, but many people use it to mean a different thing. Those people are the ones who are having this debate. It's true that you can render this debate, like any debate, into nonsense by redefining the terms they are using, but that in itself doesn't mean that it's inherently nonsense.
Maybe you're making the ontological claim that your beliefs about X don't actually become definitely true or false until you have a way to tell the difference? A sort of solipsistic or idealistic worldview? But you seem to reject that claim in your last sentence, saying, "it is only the knowledge itself that is false."
If someone is just going to say "It is not possible to know false things", then sure, by that definition of "know" any brain state that involves a justified belief in a thing that is false is not "knowing".
But I consider that a more or less useless definition of "knowing" in context of both Gettier and TFA.
I think that, without using a definition of "knowing" that fits the description of definitions you are declaring useless, you won't be able to make any sense of either Gettier or TFA. So, however useful or useless you may find it in other contexts, in the context of trying to understand the debate, it's a very useful family of definitions of "knowing"; it's entirely necessary to your success in that endeavor.
Or, try renaming the variables and see if it still bothers you identically.
Furthermore, OP’s choice of putting “know” in quotes seems to suggest that the author is not using the word as conventionally understood (though, of course, orthography is not an infallible guide to intent.)
IMHO, Gettier cases are useful only in that they raise the issue of what constitutes an acceptable justification for a belief to become knowledge.
Gettier cases are specifically constructed to be about true beliefs, and so do not challenge the idea that facts are true. Instead, one option to resolve the paradox is to drop the justification requirement altogether, but that opens the question of what, if anything, we can know we know. At this point, I feel that I am just following Hume’s footsteps…
To know something in this sense seems to require several things: firstly, that the relevant proposition is true, which is independent of one's state of mind (not everyone agrees, but that is another issue...) Secondly, it seems to require that one knows what the relevant proposition is, which is a state of mind. Thirdly, having a belief that it is true, which is also a state of mind.
If we left it at that, there's no clear way to find out which propositions are true, at least for those that are not clearly true a priori (and even then, 'clearly' is problematic except in trivial cases, but that is yet another issue...) Having a justification for our belief gives us confidence that what we believe to be true actually is (though it rarely gives us certainty.)
But what, then, is justification? If we take the truth of the proposition alone as its justification, we get stuck in an epistemic loop. I think you are right if you are suggesting that good justifications are often in the form of causal arguments, but by taking that position, we are casting justification as being something like knowledge: having a belief that an argument about causes (or anything else, for that matter) is sound, rather than a belief that a proposition states a fact - but having a justified belief in an argument involves knowing that its premises are correct...
It is beginning to look like tortoises all the way down (as in Lewis Carroll's "What the Tortoise Said to Achilles".)
https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achi...
E.g. a neurologist would likely be happy to speak of a brain knowing false information, but a psychologist would insist that that’s not the right word. And that’s not even approaching how this maps to close-but-not-quite-exact translations of the word in other languages…
From my point of view, "to know" is a subjective feeling, an assessment on the degree of faith we put on a statement. "Knowledge" instead is an abstract concept, a corpus of statements, similar to "science". People "know" false stuff all the time (for some definition of "true" and "false", which may also vary).
A flat-earther may feel they "know" the earth is flat. I feel that i "know" that their feeling isn't "true" knowledge.
This is the simple case where we all (in this forum, or at least I hope so) agree. If we consider controversial beliefs, such as the existence of God, where Covid-19 originated or whether we have free will, people will often still feel they "know" the answer.
In other words, the experience of "knowing" is not only personal, but also interpersonal, and often a source of conflicts. Which may be why people fight over the definition.
In reality, there are very few things (if any) that can be "known" with absolute certainty. Anyone who has studied modern Physics would "know" that our intuition is a very poor guide to fundamental knowledge.
The scientific method may be better in some ways, but even that can be compromised. Also, it's not really useful for people outside the specific scientific field. For most people, scientific findings are only "known" second hand from seeing the scientists as authorities.
A bigger problem, though, is that a lot of people are misusing the label "scientific" to justify beliefs or propaganda that has only weak (if any) support from the use of hard science.
In the end, I don't think the word "knowledge" has any fundamental correspondence to something essential.
Instead, I see the ability to "know" something as a characteristic of the human brain. It's an ability that causes the brain to lock onto one belief and disregard all others. It appears to be a tendency we all have, which means it probably evolved due to providing some evolutionary advantage.
The types of "knowledge" that we feel we "know", to the extent that we learn them from others, seem to evolve in parallel to this as memes/memeplexes (using Dawkins' original use of "meme").
Such memes spread in part virally by pure replication. But if they convey advantages to the hosts they may spread more effectively.
For example, after Galilei/Newton, Physics provided several types of competitive advantage to those who saw it as "knowledge". Some economic, some military (like calculating artillery trajectories). This was especially the case in a politically and religiously fragmented Europe.
The memeplex of "Science" seems to have grown out of that. Not so much because it produces absolute truths, but more because those who adopted a belief in science could reap benefits from it that allowed them to dominate their neighbours.
In other areas, religious/cultural beliefs (also seen as "knowledge" by the believers) seem to have granted similar power to the believers.
And it seems to me that this is starting to become the case again, especially in areas of the world where the government provides a welfare state to all, which prevents scientific knowledge from granting a differential survival/reproductive advantage to those who still base their knowledge on Science.
If so, Western culture may be heading for another Dark Age....
I thought this was interesting:
> Instead, I see the ability to "know" something as a characteristic of the human brain. It's an ability that causes the brain to lock onto one belief and disregard all others. It appears to be a tendency we all have, which means it probably evolved due to providing some evolutionary advantage.
It is substantially hardware (the brain) and software (the culturally conditioned mind).
Rewind 100 years and consider what most people "knew" that black people were. Now, consider what most people nowadays "know" black people are not. So, definitely an improvement in my opinion, but if we can ever get our heads straight about racial matters I think we'll be well on our way to the second enlightenment.
This is something that a lot of Greeks would have had issues with, most probably Heraclitus, and Protagoras for sure. Restricting ourselves to Aristotelian logic back in the day has been extremely limiting, so much so that a lot of modern philosophers cannot even comprehend what it is to look outside that logic.
That's arguably good. If you restrict yourself to something that you know is a valid method of ascertaining truth, then you have much higher confidence in the conclusion. The fact that we still struggle even with getting this restricted method right shows that restrictions are necessary and good!
Then you bootstrap your way to a more comprehensive method of discourse from that solid foundation. Like Hilbert's program, which ultimately revealed some incredibly important truths about logic and mathematics.
And to give a concrete example related to this as a whole, people should have known that getting to know something by not knowing it more and more is a valid epistemological take; just look at Christian Orthodox Hesychasm and its doctrine about God (paraphrased it goes like this: "the more you are aware of the fact that you don't know God, the more you actually know/experience God"). Christian Orthodox Hesychasm is, of course, in direct connection with neo-Platonism/Plotinism, but because the neo-Platonist "doctrine" on truth has never been mathematically formalized (presuming that that would even be possible), the scientific world chooses to ignore it and only focuses on its own restricted way of looking at truth and, in the end, of experiencing truth.
Only in abstract discussions like this one. And in some concrete discussions on certain topics, not "knowing" seems to be essentially impossible for most non-silent participants.
To say that this is not "knowing" is (as another commenter noted) hair-splitting of the worst kind. In every sense it is a justified belief that happens to be false (we just do not know that yet).
The strength of the justification is, I would suggest, largely subjective.
Also I don't think this definition fits with people's intuition. At least, certainly not my own. There are times where I realise I'm wrong about something I thought I knew. When I look back, I don't say "I knew this, and I was wrong". I say "I thought I knew this, but I didn't actually know it".
This is one of the best questions ever, not just for philosophers, but for all us regular plebes to ponder often. The number of things I know is very very small, and the number of things I believe dramatically outnumbers the things I know. I believe, but don’t know, that this is true for everyone. ;) It seems pretty apparent, however, that we can’t know everything we believe, or nothing would ever get done. We can’t all separately experience all things known first-hand, so we rely on stories and the beliefs they invoke in order to survive and progress as a species.
Not to mention, what does it even mean for something to be false? For the hypothetical savage, the knowledge that the moon is a piece of cheese just beyond reach is as true as the knowledge that it's a celestial body 300k km away is for me. Both statements are false for the engineer who needs to land a probe there (the distance varies and 300k km is definitely wrong).
First-person "know" is belief. To some extent this is just faith! Yes, we have faith that the laws of physics won't change tomorrow, or that we remember yesterday happened, etc. Science tries to push that faith close to fact by verifying the fuck out of everything. But we will never know why anything...
The other "know" is some kind of concept of absolute truth and a coincidence that what someone believes matches this. Whether that coincidence is chance or astute observations or, in the paper's case, both.
“Debugging is the art of figuring out which of your assumptions are wrong.”
(Attribution unknown)
One way is to reason from a false premise, or as I would put it, something we think is true is not true.
The other way is to mix logical levels (“this sentence is false”).
I don’t think I ever encountered a bug from mixing logical levels, but the false premise was a common culprit.
The culprit was an embedded TrueType font that had what (I think) was a strange but valid glyph name with a double forward slash instead of the typical single (IIRC whatever generated the PDF just named the glyphs after characters so /a, /b and then naturally // for slash). Either way it worked fine in most viewers and printers.
The larger scale production printer on the other hand, like many, converted to postscript in the processor as one of its steps. A // is for an immediately evaluated name in postscript so when it came through unchanged, parsing this crashed the printer.
So we have a font, in a PDF, which got turned into Postscript, by software, on a certain machine which presumably advertised printing PDF but does it by converting to PS behind the scenes.
A lot of layers there and different people working on their own piece of the puzzle should have been 'encapsulated' from the others but it leaked.
security with cryptography is mostly about logical level problems, where each key or operation forms a layer or box. treating these as discrete states or things is also an abstraction over a sequential folding and mixing process.
debugging a service over a network has the whole stack as logical layers.
most product management is solving technical problems at a higher level of abstraction.
a sequence diagram can be a multi-layered abstraction rotated 90 degrees, etc.
But most people tend not to include that in the "your assumptions" list, and frequently it is the source of the bug.
In other words, it looks like a form of solipsism.
But what a world it would be if you could flip a coin on any choice and still survive! If the world didn't follow any self-consistent logic, like a Roger Zelazny novel, that would be fantastic. Not sure that qualifies as solipsism, but still. Would society even be possible? Or even life?
Here, as long as you follow cultural norms, every choice has pretty good outcomes.
My favourite debugging technique is "introduce a known error".
This validates that your set of "facts" about the file you think you're editing are actually facts about the actual file you are editing.
For example: is the damn thing even compiling?
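A throwaway sketch of the idea, assuming a Python codebase: drop an unmistakable failure into the file you believe you are editing, and if nothing blows up you have learned something important before touching the real bug.

    # Temporary "known error": delete after the sanity check.
    # If this never fires, the file being executed is not the file being
    # edited (wrong virtualenv, stale build, wrong deploy target, ...).
    raise RuntimeError("sanity check: edits to this file ARE being picked up")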
[1] http://www-bcf.usc.edu/~kleinsch/Gettier.pdf
Presumably, there is a farmer who raised the cow, then purchased the papier-mâché, then scrounged for a palette of paints, and meticulously assembled everything in a field -- all for the purpose of entertaining distant onlookers.
That is software engineering. In Gettier's story, we're not the passive observers. We're the tricksters who thought papier-mâché was a good idea.
Let's take the obscured cow example. Nobody outside the confines of a philosophy experiment believes that there is a cow in the field. They believe that they see something which looks like a cow (this is justified and true) and they also believe, based on past evidence, that what they are seeing is a cow (this is justified but not, in this special case, true.) But if you play this joke on them repeatedly, they will start to require motion, sound, or a better look at the cow shaped object before assigning a high likelihood of there being an actual cow in the field that they are observing. They will also ask you how you are arranging for the real cow to always be conveniently obscured by the fake cow.
Unsurprisingly, gaining additional evidence can change our beliefs.
The phenomenon of a human gaining object permanence is literally the repeated updating of prior probability estimations until we have a strong base estimation that things do not cease to exist when we stop observing them. It happens to all of us early on. (Bayes' Theorem is a reasonable approximation of mental processes here. Don't conclude that it accurately describes everything.)
The papier-mache cow simulation is not something we normally encounter, and hypothesizing it every time is a needless multiplication of entities... until you discover that there is a philosophical jokester building cow replicas. Then it becomes a normal part of your world to have cow statues and cows in fields.
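A toy Bayesian update (all numbers invented) of the "is that a real cow?" belief as the jokester keeps planting replicas, roughly in the spirit of the comment above:

    def posterior_real(prior, p_looks_real_if_real, p_looks_real_if_fake):
        # P(real | it looks like a cow), by Bayes' theorem.
        numerator = prior * p_looks_real_if_real
        return numerator / (numerator + (1 - prior) * p_looks_real_if_fake)

    prior = 0.99                 # before the pranks: fake cows are rare
    p_looks_real_if_real = 0.95  # real cows usually look like cows
    p_looks_real_if_fake = 0.90  # a good replica also looks like a cow

    for fooled_times in range(4):
        p = posterior_real(prior, p_looks_real_if_real, p_looks_real_if_fake)
        print(f"fooled {fooled_times} times -> P(real | looks like a cow) = {p:.2f}")
        prior *= 0.5             # crude stand-in for "trust the prior less each time"

Once replicas become common, "it looks like a cow" stops doing much evidential work, which is why the observer starts demanding motion, sound, or a closer look.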
Now, software engineering:
We hold models in our brains of how the software system works (or isn't working). All models are wrong, some are useful. When your model is accurate, you can make good predictions about what is wrong or what needs to be changed in the code or build environment in order to produce a desired change in software behavior. But the model is not always accurate, because we know:
- the software system is changed by other people
- the software system has bugs (because it is non-trivial)
- even if the software system is the same as our last understanding of it, we do not hold all parts of the model in our brains at the same weights. A part that we are not currently considering can have effects on the behaviour we are trying to change.
Eventually we gain the meta-belief that whatever we are poking is not actually fixed until we have tested it thoroughly in practice... and that we may have introduced some other bug in the process.
With Bayes, you're computing P(Model|Evidence) -- but this doesn't explain where the Model comes from or why the Evidence is relevant to the model.
If you compute P(AllPossibleModels|AllSensoryInput) you end up never learning anything.
What's happening with animals is that we have a certain, deterministic, non-bayesian primitive model of our bodies from which we can build more complex models.
So we engage in causal reasoning, not bayesian updating: P(EvidenceCausedByMyBody| do(ActionOfMyBody)) * P(Model|Evidence)
>certain, deterministic, non-bayesian primitive model of our bodies
What makes you certain the model of our body is non-Bayesian? Does this imply we have an innate model of our body and how it operates in space? I could be convinced that babies don't inherently have a model of their bodies (or that they control their bodies) and it is a learned skill. Possibly learned through some pseudo Bayesian process. Heck, the unathletic among us adults may still be updating our Bayesian priors with our body model, given how often it betrays our intentions :-)
In bayesian approaches it's assumed we have some implicit metatheory which gives us how the data relates to the model, so really all bayesian formulae should have an implicit 'Theory' condition which provides, eg., the actual probability value:
P(Model|Evidence, Theory(Model, Evidence))
The problem is there's no way of building such a theory using bayesianism, it ends in a kind of obvious regress: P(P(P(M|E, T1)|T2)|T3,...)
What theory provides the meaning of 'the most basic data'? ie., how it relates to the model? (and eg., how we compute such a probability).
The answer to all these problems is: the body. The body resolves the direction of causation, it also bootstraps reasoning.
In order to compute P(ShapeOfCup|GraspOnCup, Theory(Grasp, Shape)), I first (in early childhood) build such a theory by computing P(ShapeSensation|do(GraspMovement), BasicTheory(BasicMotorActions, BasicSensations)).
Where 'do' is non-bayesian conditioning, ie., it denotes the probability distribution which arises specifically from causal intervention. And "BasicTheory" has to be in-built.
In philosophical terms, the "BasicTheory" is something like Kant's synthetic a priori -- though there's many views on it. Most philosophers have realised, long before contemporary stats, that you cannot resolve the under-determination of theory by evidence without a prior theory.
If it's an ability that later develops independent of experience with the exterior world, it seems untestable. I.e., how can you test the theory without a baby being in the world in the first place?
Eg., it might be that the kind of "theory" which exists is un/pre-conscious. So that it takes a long time, comparatively, for the baby to become aware of it. Until the baby has a self-conception it cannot consciously form the thought "I am grasping" -- however, consciousness imv is a derivative-abstracting process over-and-above the sensory motor system.
So the P(Shape|do(Grasp), BasicTheory(Grasp, Shape)) actually describes something like a sensory-motor 'structure' (eg., a distribution of shapes associated with sensory-motor actions). The proposition that "I am grasping", which allows expressing a propositional confidence, requires (self-)consciousness: P(Shape|"I have grasped", Theory(Grasp, Shape)) -- bayesianism only makes sense when the arguments of probability are propositions (since it's about beliefs).
What's the relationship between the bayesian P(Shape|"I have...") and the causal P(Shape|do(Grasp)) ? The baby requires a conscious bridge from the 'latent structural space' of the sensory-motor system to the intentional belief-space of consciousness.
So P(Shape|do(Grasp)) "consciously entails" P(Shape|"I have..") iff the baby has developed a theory, Theory(MyGrasping|Me)
But, perhaps counter-intuitively, it is not this theory which allows the baby to reliably compute the shape based on knowing "it's their action". It's only the sensory-motor system which needs to "know" (metaphorically) that the grasping is of the shape.
Maybe a better way of putting it then is that the baby requires a procedural mechanism which (nearly) guarantees that its actions are causally associated with its sensations such that its sensations and actions are in a reliable coupling. This 'reliable coupling' has to provide a theory, in a minimal sense, of how likely/relevant/salient/etc. the experiences are given the actions
It is this sort of coupling which allows the baby, eventually, to develop an explicit conscious account of its own existence.
E.g., If motor movement and causal inference are coupled, would you expect a baby born with locked in syndrome to have a limited notion of self?
This is not only testable, but central to neuroscience, and I'd claim, to any actual science of intelligence -- rather than the self-aggrandising csci mumbo jumbo.
On the testing side, you can lesion various parts of the sensory-motor system of mice, run them in various maze-solving experiments under various conditions (etc.) and observe their lack of ability to adapt to novel environments.
...whoa. That makes complete sense.
So you're saying that there must be some form of meta-rationality that gives cues to our attempts at Bayesian reasoning, directing those attempts how to make selections from each set (the set of all possible models and the set of all sensory inputs) in order to produce results that constitute actual learning.
And you're suggesting that in animals and humans at least, the feedback loop of our embodied experience is at least some part of that meta-rationality.
That's an incredible one-liner.
To be able to claim there is a cow there requires additional evidence.
Every other time you've been in that school building, the clocks have shown you the right time, so you feel very confident that the clocks on the wall are accurate.
But this time, you happen to be in the room with the non-functioning clock. It says "2:02" but by great good fortune, it actually happens to be 2:02.
So your belief is:
1. True. It actually is 2:02.
2. Justified. The vast majority of the time, if you see a clock on a wall in that building, it is working fine.
But should we say that you know the time is 2:02? Can you get knowledge of the time from a broken clock? Of course not. You just got lucky.
In order to count as knowledge, it has to be justified in the right way; alas, nobody has been able to specify exactly what that way should be. So far, nobody has come up with criteria that we can't break in a similar way.
// all you can justify is that there is the likeness of a cow there //
If you see something which looks real, you are justified in believing it is real. If you see your friend walking into the room, sure, you've seen your friend's likeness in the room. But you are justified in believing your friend is in the room.
So if you see something that looks like a cow in a field, you are justified in believing there is a cow in a field, even though looks may be deceiving.
First of all, you have to be able to test your knowledge: you would test that the clock is correct for every minute of the day. If you missed any minutes, then your knowledge is incomplete; you instead have probable knowledge (using the same methods that physics uses to decide whether an experimental result is real, you can assign a probability that the clock is correct).
Also, since when is knowledge absolute? You can never be completely certain about any knowledge; you can only assign (or try to assign) a probability that you know something, and testing your belief greatly increases the probability.
(PS. Thank you for the reply.)
Is this assertion not self-refuting though?
How careful do you have to be to never be fooled? For most people, a non-zero error rate is acceptable. Their level of caution will be adjusted based on their previous error rate. (Seen this way, perfect knowledge in the philosophical sense is a quest for a zero error rate.)
In discussions of how to detect causality, one example is flipping a light switch to see if it makes the light go on and off. How many flips do you need in order to be sure it’s not coincidence?
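A back-of-the-envelope sketch of that question: if the light were just flickering at random (a 50% chance of matching by luck is an assumption, not data), the probability that it matches your switch n times in a row shrinks geometrically.

    def chance_of_coincidence(n_flips, p_match_by_luck=0.5):
        # Probability that a randomly behaving light matches every flip.
        return p_match_by_luck ** n_flips

    for n in (1, 5, 10, 20):
        print(f"{n:>2} matching flips -> {chance_of_coincidence(n):.6f}")

After about ten flips the coincidence story is below one in a thousand, which is roughly why a handful of flips feels convincing in everyday life.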
This is where Contextualism comes into play. Briefly, your epistemic demands are determined by your circumstances.
https://plato.stanford.edu/entries/contextualism-epistemolog...
Hello darkness my old friend…
Suppose you've got a class library with no source, and the documentation defines a get method for some calculated value. But suppose that what the get method actually does is return an incorrectly calculated value. You're not getting the right calculated value, but you're getting a calculated value nonetheless. But then finally suppose that the right calculation exists in the same class, in unreachable code or an undocumented method.
On the one hand, you have a justified true belief that "the getter returns a calculated value": (1) you believe the getter returns a value; (2) that belief didn't come from nowhere, but is justified by you getting values back that look exactly like calculated values; (3) and the class does, in fact, have code in it to return a correctly calculated value.
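A minimal, hypothetical rendering of that scenario (the class and method names are invented for illustration):

    # The documented getter returns a wrongly calculated value, while the
    # correct calculation exists in the same class but is never reachable
    # through the documented API.

    class ReportingPeriod:
        """Docs claim: get_total() returns the calculated total for the period."""

        def __init__(self, amounts):
            self.amounts = amounts

        def get_total(self):
            # Returns *a* calculated value, but the calculation is wrong:
            # it silently drops the last entry.
            return sum(self.amounts[:-1])

        def _correct_total(self):
            # The right calculation lives here, undocumented and never called.
            return sum(self.amounts)

    p = ReportingPeriod([10, 20, 30])
    print(p.get_total())  # 30 -- looks exactly like a calculated total, so the
                          # belief "the getter returns a calculated value" is
                          # justified, and the class really does contain the
                          # correct calculation, yet the number you relied on
                          # was never produced by it.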
But you're staff and above when you can understand when your programming model is broken, and how to experiment to find out what it really is. That almost always goes beyond the specified and tested behaviors (which might be incidentally correct) to how the system should behave in untested and unanticipated situations.
Not surprisingly, problems here typically stem from gaps in the programming model between developers or between departments, who have their own perspective on the elephant, and their incidence in production is an inverse function of how well people work together.
Are there any good examples of gettiers in software engineering that don't rely on understanding causality, where we're just talking about "what's there" not explaining "how it got there"?
For the autofocus example: if the statement in question was "my patch broke the autofocus," it would not be Gettier, because it is not true (the unrelated pushed changes broke it). If the statement was "my PR broke the autofocus," it would not be Gettier, because it is a JTB and the justification (it was working before the PR, but not after) is correct, i.e., the cause of the belief, the perception, and the deduction are all correct. The same goes if the statement was "the autofocus is broken."
It would be Gettier if the person reporting the bug was using an old (intact) version of the app but was using Firefox with a website open in another window on another screen, which was sending alerts stealing the focus.
The most common example of true Gettier cases in software dev is probably the following: A user reports a bug but is using an old version, and while the new version should have the bug fixed, it's still there.
The statement is "the current version has the bug." The reporter has Justified Belief because they see the bug and recently updated, but the reporter cannot know, as they are not on the newest version.
- JTB is not enough: for something to be “true” it needs _testability_. In other words, make a prediction from your knowledge-under-test that would be novel information (for example, “we’ll find fresh cow dung in the field”).
- Nothing is ever really considered “true”; there are only theories that describe reality increasingly well.
In fact, physics did away with the J: it doesn’t matter whether your belief is justified if it’s tested. You could make up a theory with zero justification (one that doesn’t contradict existing knowledge, of course), make predictions, and if they hold up when tested, that’s still knowledge. The J is just the way that beliefs are formed (inference).
For example, if I toss a coin and it comes up heads, put the coin in my pocket and then go about my day, and later on say to somebody "I tossed a coin earlier, and it came up heads", that is a JTB, but it's not testable. You might assume I'm lying, but we're not talking about whether you have a JTB in whether I tossed a heads or not, we're talking about if I have one.
There are many areas of human experience where JTB is about as good as we are going to get, and testability is off-limits. If somebody tells me they saw an alien climb out of a UFO last night, I have lots of reasons not to believe them, but if this is a very trustworthy individual who has never lied to me about anything in my decades of experience of knowing them, I might have a JTB that they think this is true, even if it isn't. But none of it is testable.
Physics - the scientific method as a whole - is a superb way to think about and understand huge swathes of the world, but it has mathematically proven limits, and that's fine. Let's just not assume that because something isn't testable it can't be true.
Because every time they said, "I've found the absolute truth," the philosophers just replied, "Only in your frame of reference!"
But you may have conflated 'testability' and 'tested'. Can I know there is a cow in the field if I don't check? Seeing it was already evidence, testing just collects more evidence, so how can that matter? Should we set a certainty threshold on knowledge? Could be reasonable.
Maybe prediction-making is too strong a requirement for 'knowing', if we allow knowing some fact in a domain you're otherwise clueless about, though it would be very reasonable not to call that knowledge. Suppose I learn of a mathematical theorem in a field so unfamiliar to me that I can't collect evidence to independently gain confidence in it.
How could something become true in the first place such that it could be tested to discover that it is true, if the test precedes and is a condition for truth?
Do you have tests I can run on each of your many assertions here that prove their truth?
Is the crux of the argument that justification is an arbitrary line and ultimately insufficient?
These are correct but contrived and unrealistic, so later examples are more plausible (e.g. being misled by a mislabelled television program from a station with a strong track record of accuracy).
The point is not disproving justified true belief so much as showing the inadequacy of any one formal definition: at some point we have to elevate evidence to assumption and there's not a one-size-fits-all way to do that correctly. And, similarly to the software engineering problems, a common theme is the ways you can get bitten by looking at simple and seemingly true "slices" of a problem which don't see a complex whole.
It is worth noting that Gettier himself was cynical and dismissive of this paper, claiming he only wrote it to get tenure, and he never wrote anything else on the topic. I suspect he didn't find this stuff very interesting, though it was fashionable.
I think a case can't so much "disprove" JTB as illustrate that adopting a definition of knowledge is more complex than you might naively believe.
Of course, that devolves rapidly into trying to find the "base cases" of knowledge that are inherent.
If there's a bug, things at other levels will adapt to that bug, creating a "Gettier" waiting to happen.
Another feedback-related concept is false independence. Imagine a guy driving a car over a hilly road with a 90 mph speed limit. The speed of his car is not correlated with the position of his foot on the gas pedal (it's always 90 mph). On the other hand, the position of the gas pedal and the angle of the road are correlated.
This example is popular in macroeconomics (to explain why central bank interest rates and inflation might seem to be independent).
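A quick, made-up simulation shows the effect: with a controller pinning the speed at 90, pedal position ends up strongly correlated with the slope and almost uncorrelated with the speed.

    # Toy simulation of "false independence" (all parameters invented).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    slope = rng.normal(0, 1, n)                          # hilliness of the road
    pedal = 0.5 + 0.3 * slope + rng.normal(0, 0.01, n)   # controller compensates for slope
    speed = 90 + rng.normal(0, 0.05, n)                  # speed stays pinned near the limit

    print(np.corrcoef(pedal, slope)[0, 1])  # close to 1
    print(np.corrcoef(pedal, speed)[0, 1])  # close to 0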
We purposefully try not to do rebases in my team for this reason.
The classic and oft heard “How did this ever work?”
When this kind of thing tries to surface, it’s a warning that you need to 10x your understanding of the problem space you are adjacent to.
By all means, you can gain a lot by making things easier to understand, but only as a shortcut while developing or debugging. That kind of understanding is not a foundation your application can safely stand on. You need detailed visibility into what the system is genuinely doing, and our mushy brains do a poor job of emulating any codebase, no matter how elegant.
FP can be good for that but I often find that people get so carried away with the pure notion of functional code that they forget to make it obvious in its design. Way, way too much “clever” functional code out there.
The data structures are the key for many things, but a lot of software is all about handling side effects, where basically everything you touch is an input or an output with real world, interrelated global state.
That’s where correctly compartmentalising those state relationships and ample asserts or fail-soft/fail-safe code practices become key. And properly descriptive variable names and naming conventions, with sparse but deep comments where it wasn’t possible to write the code to be self-documenting by its obvious nature.
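One small, invented illustration of the asserts/fail-soft point:

    # State the assumed relationships in the code rather than in someone's head,
    # and fail soft instead of propagating nonsense downstream.

    def apply_discount(order_total, discount_pct):
        assert order_total >= 0, f"order_total went negative: {order_total}"
        assert 0 <= discount_pct <= 100, f"discount_pct out of range: {discount_pct}"

        discounted = order_total * (1 - discount_pct / 100)
        return max(discounted, 0.0)   # fail-soft: never return a negative charge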
But isn't the paper-mache cow case solved by simply adding that the evidence for the justification also needs to be true?
The definition already requires the belief to be true (that's a whole other rabbit hole), but assuming that's valid, it's rather obvious that if your justification is based on false evidence then it is not justified; and if it's true by dumb luck, of course it doesn't count as knowing it.
EDIT: Okay I see how it gets complicated... The evidence in this case is "I see something that looks like a cow", which I guess is not false evidence? Should your interpretation of the evidence be correct? Should we include into the definition that the justification cannot be based on false assumptions (existing false beliefs)? I can see how this would lead to more papers.
EDIT: I have read the paper and it didn't really change my view of the problem. I think Gettier is just using a sense of "justified" that is somewhat colloquial and ill defined. To me a proposition is not justified if it is derived from false propositions. This kind of solves the whole issue, doesn't it?
To Gettier it is more fuzzy, something like having reasonably sufficient evidence, even if it is false in the end. More like "we wouldn't blame him for being wrong about that, from his point of view it was reasonable to believe that".
I understand that making claims of the absolute truthfulness of things makes the definition rather useless, we always operate on incomplete evidence, then we can never know that we know anything (ah deja vu). But Gettier is not disputing the part of the definition that claims that the belief needs to be true to be known.
EDIT: Maybe the only useful definition is that know = believe, but in speech you tend to use "he knows P" to hint that you also believe P. No matter the justification or truthfulness.
EDIT: I guess that's the whole point that Gettier was trying to make: that all accepted definitions at the time were ill-defined, incomplete and rather meaningless, and that we should look at it closer. It's all quite a basic discussion on semantics. The paper is more flamebait (I did bite) than a breakthrough, but it is a valid point.
The problem is that when you're working at such a low level as trying to define what it means to know something, even simple inferences become hellishly complicated. It's like trying to bootstrap a web app in assembly.
***
justified: in the sense of deriving from evidence
true: because it doesn't make sense to "know" a falsehood
belief: i.e., a proposition in your head
***
justified: there is an error message
true: there is an error condition
belief: the engineer observes the message and condition
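A contrived sketch of how that mapping can go Gettier-shaped (everything below is invented): the error condition is real and the error message is real, but the message the engineer saw was produced by an unrelated code path.

    import time

    LAST_DB_PING = 0.0   # stale since startup; nobody updates it

    def handle_request(payload):
        # The real error condition: a zero divisor blows up here, unlogged.
        return 100 / payload["divisor"]

    def health_check():
        # The message the engineer actually sees -- triggered by the stale
        # ping above, not by the failing requests.
        if time.time() - LAST_DB_PING > 60:
            print("ERROR: service unhealthy")

    health_check()  # justified (there is a message) and true (requests really
                    # do fail), but the evidence didn't come from that failure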
---
Where's my cow?
Are you my cow? [0]
0: https://www.amazon.com/Wheres-My-Cow-Terry-Pratchett/dp/0060...
The more likely a bug is to make me look dumb, the more certain it is to appear only once I've asked for help.
Does the code have 0 defects, 1 defect, or 2 defects?
I see no practical usefulness in all of these examples, except as instances of the rule that you can get correct results from incorrect reasoning.
It has been "thematically appropriated" by a certain sort of pop-philosophy, but it says nothing relevant.
Philosophy isn't the activity of trying to construct logical embeddings in deductive proofs. If anyone ever thought so, then there's some thin sort of relevance, but no one ever has.
But it's also about fuzzy stuff which doesn't follow the A or not A logic.
Saved the tech team from a wild goose chase.
https://fitelson.org/proseminar/gettier.pdf
It's really worth a read, it's remarkably short and is written in very plain language. It takes less than 10 mins to go through it.
A three-page paper that shook philosophy, with lessons for software engineers - https://news.ycombinator.com/item?id=18898646 - Jan 2019 (179 comments)
Love it
EDIT: Deleted paragraph on DRY that wasn't quite right.
[1] https://fitelson.org/proseminar/gettier.pdf
The changes were only adjacent to the causes, and that’s super common in any system that has a few core pieces of functionality.
I think the core lesson here is that if you can’t fully explain the root cause, you haven’t found the real reason, even if it seems related.
And the “right” RC only has to be right enough to solve the issue.
Desperation to ‘know’ something for certain can be misleading when coincidence is a lot more common than proof.
Worse yet is extending the feeling of being ‘justified’ into somehow ‘lessening’ the wrongness, instead of drawing a more informative takeaway.