Does this mean that artificial computer systems, wired appropriately, can be conscious? Not necessarily, Koch says. This might one day be possible with the advent of new technology, but we are not there yet. He writes: “The high connectivity [in a human brain] is very different from that found in the central processing unit of any digital computer, where one transistor typically connects to a handful of other transistors.” For the foreseeable future, AI systems will remain unconscious despite appearances to the contrary.
One would have to be unaware that simple Turing-equivalent machines can simulate devices of much greater complexity than themselves in order to find this argument persuasive. We see here that Koch is a fan of IIT; can he show one machine with a high IIT score that cannot be simulated by a simple (though sufficiently sized) Turing-equivalent computer, possibly abetted by a source of entropy?
slibhb 89 days ago [-]
Simulating something isn't the same thing as being that thing.
You can write a computer program that simulates pain. Perhaps even to the point that it convinces people that it feels pain. But, actually, it doesn't have a nervous system and can't feel pain.
Maybe in the future we'll build machines that can feel (and are conscious) but currently we don't know how. It would be very strange if we build those machines accidentally while training LLMs.
mannykannot 89 days ago [-]
While it is true in general that simulating a thing is not that thing itself, a simulation may be the same as the thing being simulated in all aspects that matter. In particular, information-manipulating systems are amenable to simulation by other information-manipulating systems: one can, for example, make a simulation of a cryptographic machine that encrypts and decrypts messages.
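To make that cryptographic example concrete, here is a toy sketch (a hypothetical rotating-offset cipher, not any real machine): a stateful "hardware-style" device and a closed-form software simulation of it agree on every input, so in the one aspect that matters here (the plaintext-to-ciphertext mapping) the simulation is the machine.

```python
class GearCipher:
    """Stands in for a physical cipher machine: a gear that steps after
    each character, shifting each character by the offset plus its position."""
    def __init__(self, offset: int):
        self.offset = offset
        self.step = 0

    def encrypt_char(self, ch: str) -> str:
        c = chr((ord(ch) + self.offset + self.step) % 128)
        self.step += 1
        return c

def simulate_gear_cipher(text: str, offset: int) -> str:
    """Pure-software simulation of the 'machine' above."""
    return "".join(chr((ord(c) + offset + i) % 128)
                   for i, c in enumerate(text))

machine = GearCipher(5)
hw = "".join(machine.encrypt_char(c) for c in "hello world")
sw = simulate_gear_cipher("hello world", 5)
assert hw == sw  # functionally identical in the aspect that matters
```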
I agree that we don't know how to build machines that can feel (and are conscious); all I am saying here is that Koch's argument (at least as presented in this article) against a suitably-programmed transistor-based, von Neumann architecture digital computer becoming a conscious machine does not hold up.
slibhb 89 days ago [-]
> While it is true in general that simulating a thing is not that thing itself, a simulation may be the same as the thing being simulated in all aspects that matter.
What does matter? Does consciousness? It may matter less than is widely assumed since LLMs can do lots of cool stuff and aren't conscious.
> I agree that we don't know how to build machines that can feel (and are conscious); all I am saying here is that Koch's argument (at least as presented in this article) against a suitably-programmed transistor-based, von Neumann architecture digital computer becoming a conscious machine does not hold up.
I agree that his argument isn't very good. Yet I'm sure it's the case that there is no possible program that can run on any computer that exists today that can be conscious.
Hardware matters. Computers need multiple cores to do parallel computations. Similarly, they'd need some kind of hardware to be conscious, hardware that we don't know how to build. If we want to build it, we'll almost certainly need to understand what makes humans or other animals conscious. The odds of just stumbling across the solution seems very low.
mannykannot 89 days ago [-]
> What does matter? Does consciousness?
Personally, I think it does, and I do not think it can be dismissed as either epiphenomenal or some sort of illusion. I don't think LLMs are evidence against this: while they are uncannily like humans in some respects, they seem to lack some pretty important things (such as self-recognition, a theory of mind, or, for that matter, any concept of language as referring to an external world that goes by rules independently of what is said about it).
> Hardware matters. Computers need multiple cores to do parallel computations. Similarly, they'd need some kind of hardware to be conscious, hardware that we don't know how to build.
This is not a valid argument - the conclusion ('[computers would] need some kind of hardware to be conscious, hardware that we don't know how to build') does not ineluctably follow from the premise ('Computers need multiple cores to do parallel computations' or even just 'Hardware matters'.)
slibhb 88 days ago [-]
I don't think consciousness will end up mattering very much. It matters a lot to us because we're conscious and it's not totally clear that moral considerations apply to things that aren't conscious. But I think it will turn out that non-conscious computer programs will be able to do more or less everything we can do.
> This is not a valid argument - the conclusion ('[computers would] need some kind of hardware to be conscious, hardware that we don't know how to build') does not ineluctably follow from the premise ('Computers need multiple cores to do parallel computations' or even just 'Hardware matters'.)
I'm not claiming that my argument proves anything. The point I'm making is that simulating something in software isn't magic. You can't write a computer program that executes code in parallel and run it on a single core CPU and observe parallelism. You need to have hardware that supports parallelism.
I think it's very likely that this is also the case for consciousness. You can simulate consciousness. But to actually achieve conscious experience, you need a certain kind of hardware. This is just my opinion, there's no proof here.
mannykannot 87 days ago [-]
I am curious as to what one would be simulating if one were simulating consciousness without the simulation having achieved conscious experience. By simulating consciousness, do you mean just simulating the behavior of an agent presumed to be conscious, such as a person? We should be careful not to equivocate between the two, and to be clear, wherever I have mentioned or referred to the simulation of a conscious agent, I mean the former, not merely the latter.
jhickok 89 days ago [-]
>Simulating something isn't the same thing as being that thing.
That's a controversial, though probably majority, position among people who study consciousness. Functional equivalence, in the minds of many hard-nosed, very important folks in the field, *does* entail feature equivalence with respect to consciousness.
goatlover 89 days ago [-]
Problem is functional equivalence doesn't deal with the hard problem, since the hard problem isn't functional, it's a felt experiential matter. No degree of functional equivalence can tell us whether a system is experiencing something. It only tells us that it's functioning in an equivalent manner.
Fundamentally, this is a dichotomy between objective and subjective. Functional is an objective measurement.
mannykannot 89 days ago [-]
> No degree of functional equivalence can tell us whether a system is experiencing something.
What makes you sure of that? I have read all the usual arguments, and I don't find them sufficient to justify the certainty that it is beyond the realm of possibility of doing so, at least up to the standard of evidence by which we assume other people are experiencing something.
Framing the physical-mental problem in terms of equivalence is essentially begging the question anyway, tacitly presupposing that the mental is not a physical phenomenon. No one asks what, physically, is equivalent to a hurricane (to use just one example of a complex, and once mysterious, phenomenon).
jhickok 89 days ago [-]
That view of the hard problem begs the question against functionalist theories of consciousness, though. Many physicalists deny that the hard problem (or backing arguments like the zombie argument) entails that there is such a dichotomy.
mannykannot 89 days ago [-]
Why would these viewpoints be regarded as mutually exclusive? What's wrong with X and Y being functionally equivalent in every respect, yet different in implementation? See the last paragraph of my reply to goatlover for why I feel that identity is a red herring.
jhickok 89 days ago [-]
Well, for the functionalist I am describing, the view is that identity *doesn't* matter, and that consciousness is multiply realizable by any appropriately arranged substrate. That's why the OP's statement, "simulating something isn't the same as being that thing," is wrong to a large group of people who work in this field, who claim that there is nothing more to being a thing than its functional arrangement, whether it's a brain or a neural network, etc.
mannykannot 89 days ago [-]
Indeed - substrate independence is behind my example of a computer simulation of a cryptographic machine. I chose to use the term 'the same' strictly, by which the simulation is not strictly the same as the machine (the substrate is different), yet in the ways that matter (i.e. functionally), they are the same. I find that putting it this way makes it less easy (or likely) to be summarily dismissed by people who feel that "a simulation is not the same as the thing it is simulating" is a decisive argument against the possibility of conscious computers.
slowmovintarget 88 days ago [-]
In the real world, identity tends to matter, especially for complex systems.
Genetic algorithms, for example, given enough time, will optimize differently on different hardware, or even in a different environment. What's amazing is how resilient organic creatures are, humans in particular, to changes in "identity."
bbor 89 days ago [-]
As someone who spends a lot of time on this subject, and disagrees with this person passionately: great article, and a fascinating addition to the field! The critic (who triple majored?!) has her site + newsletter at the bottom, I'm certainly giving it a chance.
I was going to quote-dunk a bunch of it, but I think that's beside the point -- no one's going to be convinced of the scientific validity of "consciousness" in this thread alone. I will say that I recommend Patricia Churchland's Neurophilosophy and Noam Chomsky's [Language and The Mind](https://www.ugr.es/~fmanjon/Language%20and%20Mind.pdf) for scientifically-minded views that don't share this person's goals.
keybored 89 days ago [-]
Interesting that you mention Chomsky. Is my feeling correct that Skinner and people like him represented the modern feeling—and at that time too apparently—that we only needed to find mechanisms of association and then would be able to mechanically simulate the interesting parts of the human mind?
Btw from the Chomsky book you linked:
> The technological advances of the 1940s simply reinforced the general euphoria. Computers were on the horizon, and their imminent availability reinforced the belief that it would suffice to gain a theoretical understanding of only the simplest and most superficially obvious of phenomena – everything else would merely prove to be “more of the same,” an apparent complexity that would be disentangled by the electronic marvels
foobarian 89 days ago [-]
One simple question I'm dying to know the answer to: does the consciousness found in biological brains rely on some hitherto unknown physical mechanism (e.g. some quantum effect), or can it be recreated purely through the interconnectivity of a data structure/algorithm?
bbor 88 days ago [-]
It’s a bigger question than one can get into in a forum comment (I recommend typing some key words into https://plato.stanford.edu for more!), but to share what I feel are the closest to empirical truths we have on the matter:
There is, as of yet, no scientific justification for thinking that humans are different from computers in some fundamental physical sense. We can talk about differences in architectonic structure like this article does (densely connected neurons v sparsely connected transistors), but ultimately each neuron is a machine, and at this point they’re machines we understand pretty darn well. Like all other machines bigger than 0.0001nm or whatever, there’s not really a clear mechanism by which they could be meaningfully “controlled” or “influenced” by quantum interactions.
In this light, the answer is simple: human brains are incredibly complex machines. Some of the main counter arguments are:
1. It “feels” like “something” to be “you”, which is a whole field science lacks the terminology/framework to study. This is the position taken by this author, more or less
2. All conscious beings so far (animals included) have been biological, so simple parsimony could argue that we need a counterexample first.
3. Humans are able to communicate with each other telepathically by tapping into an unknown physical field. Obviously won’t win a lot of fans on this forum, but it’s an interesting piece of trivia that this final point was the only one Alan Turing found plausible in his famous 1950 paper Computing Machinery and Intelligence. See page 17: https://courses.cs.umbc.edu/471/papers/turing.pdf
foobarian 88 days ago [-]
> There is, as of yet, no scientific justification for thinking that humans are different from computers in some fundamental physical sense
This is similar to the question of, can we simulate a brain? Even if we can't build one using a LLM-type construct, it seems at the very least we could model atomic interactions in detail and brute-force a simulation that way. However I think this requires that there is a lower limit to the level of detail of physical reality; otherwise the non-determinism such as what you see with the uncertainty principle becomes intractable. As a result it's not clear to me at all that humans are not different from computers in a physical sense.
poikroequ 89 days ago [-]
It depends who you ask, but I personally lean strongly towards the "unknown physical mechanism". It seems insane to me to believe that simulating neural activity in software will magically result in real consciousness.
seba_dos1 89 days ago [-]
Why?
poikroequ 88 days ago [-]
I use the word "magic" deliberately. Really stop to consider how a computer works: multiple sticks of RAM, registers, L1/L2/L3 cache, an LRU cache, memory swapped to disk, GPU memory, virtual memory, encrypted/compressed memory, x86/ARM instructions, Unicode; a computation can even span multiple computers.
It seems insane to me to believe that consciousness can emerge from this mess simply because "you performed the right computations," regardless of CPU architecture or operating system or programming language or whatever. If consciousness really could emerge from all this (interpreting the CPU instruction set, parsing virtual memory tables, dereferencing pointers, decrypting and uncompressing memory, parsing Unicode text, assembling all of the necessary information scattered about the system), then consciousness is magic, a miracle.
Or, we could simply assume there's some unknown physical mechanism/process/activity in the brain that leads to conscious experience. Almost everything else in the universe is the result of a physical process, magnetic fields, a nuclear bomb exploding, even quantum entanglement. Why should consciousness be an exception?
The5thElephant 88 days ago [-]
I'm struggling to understand this argument. Our brain is just hardware. Even if there is a quantum effect we haven't discovered that is necessary for it, that effect is still running on regular old atoms and molecules. There is no inherent reason we couldn't just add that effect to our metal and silicon computers.
Those atoms and molecules may just be bits of information themselves in some higher order computer. But there is nothing inherently "magical" about consciousness other than its uniqueness.
Heck, there are fairly convincing arguments for pan-consciousness, where consciousness is a fundamental part of any set of information and is simply as complex as that information system. If you have a highly complex, self-referential information system like our brains, then the complexity of that consciousness is equivalent to our experience. The Chinese room would have its LLM-like consciousness, which we would not recognize as our own, but which could still be a qualitative experience born from objective information states.
Think more about your last paragraph, it undermines your argument from the previous two. If everything is the result of a physical process, then how is that an argument for consciousness being somehow fundamentally different or exceptional in our ability to recreate or simulate it?
poikroequ 88 days ago [-]
> There is no inherent reason we couldn't just add that effect to our metal and silicon computers.
Yes, I agree with that much. Hopefully I understand you, but I do believe we could create a "consciousness chip", so to speak, that performs the proper physical process to create real conscious experience, not just simulated. But it's unlikely to happen with existing computer hardware. (by "create", I'm not saying that consciousness emerges from nothing. Rather, there's this idea of "activating" consciousness, "turning the lights on".)
> Those atoms and molecules may just be bits of information themselves in some higher order computer. But there is nothing inherently "magical" about consciousness other than its uniqueness.
I don't buy the idea that the universe is a computer nor that we're living in a simulation. I do believe there is an objective reality.
> Heck there are fairly convincing arguments for pan-consciousness where it is a fundamental part of any set of information and is simply as complex as that information system. If you have a highly complex, self referential information system like our brains, then the complexity of consciousness is equivalent to our experience. The chinese box would have its LLM-like consciousness, which we would not recognize as our own, but could still a qualitative experience born from objective information states.
It's an idea that comes out of sci-fi. It's been used as a plot device in some episodes of Star Trek. But it's just that, science fiction.
> Think more about your last paragraph, it undermines your argument from the previous two. If everything is the result of a physical process, then how is that an argument for consciousness being somehow fundamentally different or exceptional in our ability to recreate or simulate it?
That's not what I said. What I'm saying is computers are not magically conscious. Could we recreate consciousness with the right hardware? Sure, I don't see why not, like what I said with a "consciousness chip" . But is consciousness magically going to emerge purely from a simulation without any special hardware? No, of course not.
The5thElephant 88 days ago [-]
> It's an idea that comes out of sci-fi. It's been used as a plot device in some episodes of Star Trek. But it's just that, science fiction.
I'm not talking about hand-wavy sci-fi or spiritual "the universe is a sentient being" stuff. I'm talking about one of the few solutions to what the nature of qualia is and why we aren't philosophical zombies. Why would the universe have some mechanism of consciousness available to it prior to those structures even existing? What is more convincing about a physical process being the literal mechanism for consciousness and not an informational process that has an abstracted physical basis? Could the literal mechanism not just be the information system considering you can create the same behaviors in a simulation as in reality?
Consciousness being some thing that just appears in the universe all of a sudden, because evolution happened upon this "neat trick" in physics, seems more "magical" than it being an emergent property of increasingly complex systems, a property that has always been there. We see the same emergent behavior appear in all sorts of systems built from different parts and/or simulated, as long as they follow the same rules. Why wouldn't that also hold true for the emergent behavior of consciousness?
The argument for it being this unique thing that only can happen via a strict physical process strikes me as dualist. Why is it a unique thing?
poikroequ 88 days ago [-]
As I said before, you can simulate a nuclear explosion, but that simulation will never manifest itself into a real nuclear explosion.
There's a physical basis for most things. A magnetic field forms because you have a bunch of particles with the same quantum spin. A stove burner glows red when you run electricity through it.
I'm not saying the process in the brain is unique. I'm not saying there's only one way to "activate" consciousness. Heck, for all I know, maybe a bolt of lightning experiences consciousness, even if just for a fraction of a second.
Allow me to give you a "physical" example. The brain operates near a critical threshold, that is the edge of chaos. The brain is teetering on the edge of chaos without ever going chaotic. Well, not usually, because when the brain does enter chaos, you get a seizure. But there is growing evidence that this may be a prerequisite for consciousness; when the brain enters a less chaotic state, a person may lose consciousness.
Now let's talk about information processing. Most of what our brains do is subconscious. Only a small subset of what our brains do is actually conscious. What makes some information processing more special than others that some of it is conscious and the rest is not? Isn't it more likely that the way the information is processed is what makes the difference? I.e. different at a physical level.
The5thElephant 85 days ago [-]
The nuclear explosion analogy is a bit off since for anything living in the simulation the nuclear explosion would be quite real. The explosion isn't real for us in the physical world because it cannot interact with us, but a simulated mind can interact with us in the real world.
> Only a small subset of what our brains do is actually conscious. What makes some information processing more special than others that some of it is conscious and the rest is not?
This is an excellent way of approaching the question, but I just as easily can say isn't it more likely that the difference is the pattern of the information and not the strict physical structure that makes it? Look at how many different physical structures and mechanisms we have for seeing, hearing, breathing, touching, etc across nature. Many of them are fundamentally different from each other, but end up in the same result of a sense.
Isn't it more likely that conscious thinking is like other senses in that it's a kind of information processing, rather than a specific mechanism of processing?
This also makes it easier to answer your question of why some mental processes are conscious while the majority are not: it would seem far more likely that the brain's neuronal structures (most of which are the same basic cell throughout the brain, just in different types of structures) discover different patterns rather than fundamentally different physical processes.
seba_dos1 88 days ago [-]
I don't see why implementing consciousness in terms of RAM sticks, registers, cache and CPU instruction sets would be any more surprising or unbelievable than, say, implementing all those I mentioned inside a Game of Life field - which is perfectly possible, even if impractical.
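For concreteness, here is a minimal sketch of one step of Conway's Game of Life, the substrate in which people have in fact built working registers, logic gates, and entire computers (the point being that "RAM sticks and registers" can themselves be implemented on top of something far stranger):

```python
from collections import Counter

def life_step(cells: set) -> set:
    """One step of Conway's Game of Life on a set of live (x, y) cells.
    A cell is alive next step if it has exactly 3 live neighbors, or
    has 2 live neighbors and is already alive."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

blinker = {(0, 0), (1, 0), (2, 0)}                       # horizontal bar
assert life_step(blinker) == {(1, -1), (1, 0), (1, 1)}   # flips vertical
assert life_step(life_step(blinker)) == blinker          # period-2 oscillator
```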
poikroequ 87 days ago [-]
Perhaps you misunderstand. I'm saying that a computer (RAM sticks, registers, etc) is extremely unlikely to have actual conscious experience. At best, a computer can be a philosophical zombie. The point is, consciousness doesn't emerge purely from computation alone. You need the right physical process/activity/properties/whatever, which the brain has and computers do not.
You can construct a computer using purely mechanical parts - gears, cams, springs, etc. The first calculators were purely mechanical.
So imagine you built an absolutely massive computer, made purely out of mechanical parts, that was capable of performing matrix multiplication and therefore running LLMs. It's technically computation, but should we seriously expect real consciousness to emerge from a purely mechanical computer just because it performed the right computations? Of course not, that would literally be a miracle.
seba_dos1 87 days ago [-]
> I'm saying that a computer (RAM sticks, registers, etc) is extremely unlikely to have actual conscious experience.
And yet a bag of proteins, mitochondria, water etc. somehow isn't?
> The point is, consciousness doesn't emerge purely from computation alone.
[citation needed]
> It's technically computation, but should we seriously expect real consciousness to emerge from a purely mechanical computer just because it performed the right computations?
Of course yes. I don't see why consciousness couldn't theoretically emerge even from computations made by a really bored human placing stones on an endless beach, or in an anthill with any single ant being completely oblivious of what's going on. Such consciousness will only be conscious of data that's provided to it as its input, but conscious nevertheless - just like we are only conscious of a specific representation of the real world as perceived by our flawed senses. I don't expect any meaningful difference there, other than our minds being at least a few steps ahead of anything we can build at this moment, which is hardly surprising given how little we know about brains and bodies even today.
poikroequ 87 days ago [-]
Consciousness is magic. Got it.
seba_dos1 86 days ago [-]
Yes, you're trying to argue for it in the whole thread, but I'm not as convinced as you are :)
Computation isn't magic, even when we barely understand it.
keybored 89 days ago [-]
> Unraveling how consciousness arises out of particular configurations of organic matter is a quest that has absorbed scientists and philosophers for ages. Now, with AI systems behaving in strikingly conscious-looking ways, it is more important than ever to get a handle on who and what is capable of experiencing life on a conscious level.
Lord give me strength.
> Koch suggests that exercise, meditation, and the occasional guided psychedelic might be beneficial to many people. Substances such as psilocybin can enhance one’s feeling of well-being and facilitate pure presence. It is rare to have a close brush with death, and mystical transformations typically come unbidden; in contrast, psychoactive substances—though not entirely predictable—are more consistently available and can be managed safely. Koch’s book implies that there’s immense psychological benefit of entering "the flow" through chemistry that might outweigh the small risks involved.
This is cool. I like that he considers states of consciousness for the everyman and is not talking about philosophical zombies (Angels on a Pinhead for the modern age).
Workaccount2 89 days ago [-]
IIT is like the string theory of Neuroscience.
Lots of people putting weight behind it, but also with a degree of hubris similar to the string theory guys of the late 90's. But just like string theory, it also has a lot of fair criticism and is mostly still in the hypothesis phase.
axblount 89 days ago [-]
The key similarity between IIT (integrated information theory, for the uninitiated) and string theory is that both are unfalsifiable. Hubris can be forgiven. Unfalsifiability can not.
swayvil 89 days ago [-]
In articles like this our study of consciousness is strongly colored by our preference for thinking and writing. And that's arguably unavoidable, being thinkers in a society and such, but there you go.
A "clearer" study would require starting from some kind of zero. But nobody wants to go live in a cave and whoever did might not be inclined to write an article about it.
ben_w 89 days ago [-]
I think starting from zero would not work, as we all did that from wherever it is in our life cycle that consciousness emerges, be that "a cell at conception" or later. It would be another anecdote of one example of consciousness.
What I think will work, is to examine all the different forms of consciousness — including not just animals, but also those humans we find unfathomable, alien, and impossible to empathise with.
swayvil 89 days ago [-]
That would be putting the cart before the horse. Better to try for zero first, then theorize second.
anthk 89 days ago [-]
A few days ago I was reading a pop science magazine from Spain (this month's was a special issue about the brain/consciousness) and Koch was mentioned. Someone call Peat/Jung.
JK, network effect at work, for sure.
On consciousness, I'm interested in integrated information theory.
waldrews 88 days ago [-]
How are people still publishing books without audiobook versions? What, are we supposed to read with our eyes, like animals?
ypeterholmes 89 days ago [-]
“The high connectivity [in a human brain] is very different from that found in the central processing unit of any digital computer, where one transistor typically connects to a handful of other transistors.”
Has he been living under a rock? Modern AI models already outpace the connectivity of the human brain, and are only getting bigger.
j_bum 89 days ago [-]
Assuming that GPT4 has 1T+ parameters, you’re incorrect.
Modern estimates are that there are 100T neuronal connections in the adult human brain [0]. And that’s neuronal connections alone.
Astrocytes also make direct connections with neurons and can modify and induce neuronal activity [1]. There are 100B neurons [0] and ~20B astrocytes in the adult human brain [2, 3].
So this 100T connections estimate is only a small slice of the picture of human brain activity.
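A quick back-of-envelope comparison using the figures cited above (the GPT-4 parameter count is an assumption repeated from the comment, not a published number):

```python
# Figures from the comment above; the GPT-4 count is an unconfirmed assumption.
neurons = 100e9       # ~100B neurons [0]
connections = 100e12  # ~100T neuronal connections [0]
astrocytes = 20e9     # ~20B astrocytes [2, 3]
gpt4_params = 1e12    # assumed 1T+ parameters

per_neuron = connections / neurons   # average connections per neuron
gap = connections / gpt4_params      # brain-to-model connection ratio

print(per_neuron, gap)  # 1000.0 100.0
```

On these (rough) numbers, each neuron averages ~1000 connections and the brain carries about two orders of magnitude more connections than the model, before even counting astrocytes.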
"In human brains there are about 100 billion neurons (nerve cells), each capable of producing an electrical pulse up to perhaps a thousand times a second."
So maybe that's wrong. Thanks for the links. Even so, if we look 10, 100, or 200 years out, that level of complexity will be greatly surpassed by AI.
j_bum 89 days ago [-]
Indeed, the computing power of cells is nothing to sneeze at.
It is astonishing that we can mimic human reasoning so well with these LLMs, but the essence of cognition seems to still be missing.
I agree with your belief that human processing will be surpassed in the future. We do live in some exciting times :)
passion__desire 89 days ago [-]
Speed relative to environment is important in determining if something is intelligent. If it takes N flops to produce one token but I carry those out in 10 years, that is not intelligence.
ypeterholmes 89 days ago [-]
The machines will be much faster than us, and soon. So what's your point?
passion__desire 89 days ago [-]
Consciousness, or at least intelligent behaviour, depends on how fast a system is relative to its environment.
poikroequ 89 days ago [-]
AI is just software. He's talking about the physical transistors in a computer.
ypeterholmes 89 days ago [-]
That's precisely my point. The connections in a software neural network are the analogue of the brain's connectivity, not the transistors. What am I missing?
poikroequ 89 days ago [-]
The author is clearly talking about PHYSICAL connectivity in the brain. Thus why he says AI is unlikely to become conscious without the "advent of new technology". The brain is a physical neural network. AI is a software simulated neural network. Nowhere in the article does the author confuse the two.
ryandvm 89 days ago [-]
The physical number of transistors is irrelevant. If in software you were able to perfectly simulate the 100T physical neuronal connections in a real brain, then you would perfectly recreate that brain's conscious experience.
Granted, you would be doing it with a frame rate limited by the processing power of the computer, but that just means that a thought that takes a human 1 second to arrive at might take much longer for the AI (for now).
But at this point, the TYPE of computation performed by neurons, which is unlike what modern computers do (for example, brains do not appear to have addressable memory units separate from compute units), does seem to differ enough to perhaps explain some of the gaps between computers and minds. Some even think neural computation is a different category of computation altogether: https://pubmed.ncbi.nlm.nih.gov/23126542/
poikroequ 89 days ago [-]
What? No. Consciousness isn't magic. Consciousness doesn't emerge purely from computation, that's just nonsense. Do you realize computers can have encrypted memory? How the heck can you possibly hope to perfectly recreate conscious experience with encrypted memory? How about virtual memory? Is consciousness seriously parsing the virtual memory table to reassemble the memory of a process, dereferencing pointers, parsing UTF8 bytes, interpreting computer code? Be real.
dekhn 89 days ago [-]
It's not correct to say modern AI models outpace the connectivity of the human brain.
ypeterholmes 89 days ago [-]
No? The human brain has ~100 billion neurons and ~1 trillion connection weights. Google’s PaLM uses ~540 Billion nodes with ~100 trillion connection weights.
And the key point is this: these models are the worst they will ever be, and are gaining size at pace. So even if we grant the argument that our brains are still a bit more complex, hopefully we can agree that will not be the case in 5 years. Heck, how about 20 years, or 100? Let's be real.
pocketsand 89 days ago [-]
If you took the "no it won't" side of every argument about "how in X number of years, AI is sure to Y", you'd be way ahead.
In any event, raw parameter/weight count seems to me a very primitive way to judge "complexity" in comparison to the human brain. Looked at most ways, our brains are far more efficient at doing the incredible things they do than LLMs are. Consider how little language young children are exposed to, compared with LLMs, given how well they figure out how to produce language.
If the brain doesn't work like an LLM, you can expand the size and "complexity" of these models to the moon and they won't outperform the brain. Current models can write impressively well, but they can barely do math. It's clear they don't reason as we do.
dekhn 89 days ago [-]
nodes and weights are different from neurons and connections. Neurons are also not the only components in the brain which contribute to intelligence.
Google recently scanned a 1 mm cube of human brain, which came to 1.5 petabytes of raw data. The AI hardware that Google trains on is multiple racks.
I think a better analogy would be between an entire google datacenter (including all the networking, storage, sensors, processors, and memory) and a human body although even then it's a stretch.
https://www.101computing.net/enigma-machine-emulator/
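A software rotor cipher encrypts and decrypts exactly as its electromechanical counterpart would, which is the sense in which a simulation can share every property that matters. A toy single-rotor sketch in that spirit (the rotor permutation below is invented for illustration, not real Enigma wiring):

```python
import string

ALPHA = string.ascii_uppercase
# One hypothetical rotor: a fixed permutation of the alphabet.
# (Invented for illustration; historical Enigma rotors had specific wirings.)
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"

def encrypt(msg, offset=0):
    out = []
    for ch in msg:
        i = (ALPHA.index(ch) + offset) % 26  # rotor advances one step per letter
        out.append(ROTOR[i])
        offset += 1
    return "".join(out)

def decrypt(cipher, offset=0):
    out = []
    for ch in cipher:
        i = ROTOR.index(ch)                  # invert the rotor mapping
        out.append(ALPHA[(i - offset) % 26])
        offset += 1
    return "".join(out)

print(decrypt(encrypt("HELLO")))  # HELLO
```

The round trip holds for the software version just as it would for brass and wire, which is all the functional-equivalence argument needs.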
What does matter? Does consciousness? It may matter less than is widely assumed since LLMs can do lots of cool stuff and aren't conscious.
> I agree that we don't know how to build machines that can feel (and are conscious); all I am saying here is that Koch's argument (at least as presented in this article) against a suitably-programmed transistor-based, von Neumann architecture digital computer becoming a conscious machine does not hold up.
I agree that his argument isn't very good. Yet I'm sure it's the case that there is no possible program that can run on any computer that exists today that can be conscious.
Hardware matters. Computers need multiple cores to do parallel computations. Similarly, they'd need some kind of hardware to be conscious, hardware that we don't know how to build. If we want to build it, we'll almost certainly need to understand what makes humans or other animals conscious. The odds of just stumbling across the solution seems very low.
Personally, I think it does, and I do not think it can be dismissed as either epiphenomenal or some sort of illusion. I don't think LLMs are evidence against this, as, while they are uncannily like humans in some respects, they seem to lack some pretty important things (such as self-recognition or a theory of mind, or, for that matter, any concept of language as referring to an external world that goes by rules independently of what is said about it.)
> Hardware matters. Computers need multiple cores to do parallel computations. Similarly, they'd need some kind of hardware to be conscious, hardware that we don't know how to build.
This is not a valid argument - the conclusion ('[computers would] need some kind of hardware to be conscious, hardware that we don't know how to build') does not ineluctably follow from the premise ('Computers need multiple cores to do parallel computations' or even just 'Hardware matters'.)
> This is not a valid argument - the conclusion ('[computers would] need some kind of hardware to be conscious, hardware that we don't know how to build') does not ineluctably follow from the premise ('Computers need multiple cores to do parallel computations' or even just 'Hardware matters'.)
I'm not claiming that my argument proves anything. The point I'm making is that simulating something in software isn't magic. You can't write a computer program that executes code in parallel and run it on a single core CPU and observe parallelism. You need to have hardware that supports parallelism.
I think it's very likely that this is also the case for consciousness. You can simulate consciousness. But to actually achieve conscious experience, you need a certain kind of hardware. This is just my opinion, there's no proof here.
That's a controversial, though probably majority, position among people who study consciousness. In the minds of many hard-nosed, very important folks in the field, functional equivalence *does* entail feature equivalence with respect to consciousness.
Fundamentally, this is a dichotomy between objective and subjective. Functional is an objective measurement.
What makes you sure of that? I have read all the usual arguments, and I don't find them sufficient to justify the certainty that it is beyond the realm of possibility of doing so, at least up to the standard of evidence by which we assume other people are experiencing something.
Framing the physical-mental problem in terms of equivalence is essentially begging the question anyway, tacitly presupposing that the mental is not a physical phenomenon. No-one asks what, physically, is equivalent to a hurricane (to use just one example of a complex and what was once a mysterious phenomenon.)
Genetic algorithms, for example, given enough time, will optimize differently on different hardware, or even in a different environment. What's amazing is how resilient organic creatures are, humans in particular, to changes in "identity."
I was going to quote-dunk a bunch of it, but I think it's beside the point -- no one's going to be convinced of the scientific validity of "consciousness" in this thread alone. I will say that I recommend Patricia Churchland's Neurophilosophy and Noam Chomsky's [Language and Mind](https://www.ugr.es/~fmanjon/Language%20and%20Mind.pdf) for scientifically-minded views that don't share this person's goal.
Btw from the Chomsky book you linked:
> > The technological advances of the 1940s simply reinforced the general euphoria. Computers were on the horizon, and their imminent availability reinforced the belief that it would suffice to gain a theoretical understanding of only the simplest and most superficially obvious of phenomena – everything else would merely prove to be “more of the same,” an apparent complexity that would be disentangled by the electronic marvels
There is, as of yet, no scientific justification for thinking that humans are different from computers in some fundamental physical sense. We can talk about differences in architectonic structure like this article does (densely connected neurons v sparsely connected transistors), but ultimately each neuron is a machine, and at this point they’re machines we understand pretty darn well. Like all other machines bigger than 0.0001nm or whatever, there’s not really a clear mechanism by which they could be meaningfully “controlled” or “influenced” by quantum interactions.
In this light, the answer is simple: human brains are incredibly complex machines. Some of the main counter arguments are:
1. It “feels” like “something” to be “you”, which is a whole field science lacks the terminology/framework to study. This is the position taken by this author, more or less
2. All conscious beings so far (animals included) have been biological, so simple parsimony could argue that we need a counterexample first.
3. Humans are able to communicate with each other telepathically by tapping into an unknown physical field. Obviously won’t win a lot of fans on this forum, but it’s an interesting piece of trivia that this final point was the only one Alan Turing found plausible in his famous 1950 paper Computing Machinery and Intelligence. See page 17: https://courses.cs.umbc.edu/471/papers/turing.pdf
This is similar to the question of, can we simulate a brain? Even if we can't build one using a LLM-type construct, it seems at the very least we could model atomic interactions in detail and brute-force a simulation that way. However I think this requires that there is a lower limit to the level of detail of physical reality; otherwise the non-determinism such as what you see with the uncertainty principle becomes intractable. As a result it's not clear to me at all that humans are not different from computers in a physical sense.
It seems insane to me to believe that consciousness can emerge from this mess, simply because "you performed the right computations", regardless of cpu architecture or operating system or programming language or whatever. If consciousness really could emerge from all this, interpret the CPU instruction set, parse virtual memory tables, dereference pointers, decrypt and uncompress memory, parse Unicode text, assemble all of the necessary information scattered all about in the system, then consciousness is magic, a miracle.
Or, we could simply assume there's some unknown physical mechanism/process/activity in the brain that leads to conscious experience. Almost everything else in the universe is the result of a physical process, magnetic fields, a nuclear bomb exploding, even quantum entanglement. Why should consciousness be an exception?
Those atoms and molecules may just be bits of information themselves in some higher order computer. But there is nothing inherently "magical" about consciousness other than its uniqueness.
Heck, there are fairly convincing arguments for pan-consciousness, where consciousness is a fundamental part of any set of information and is simply as complex as that information system. If you have a highly complex, self-referential information system like our brains, then the complexity of consciousness is equivalent to our experience. The Chinese room would have its LLM-like consciousness, which we would not recognize as our own, but it could still be a qualitative experience born from objective information states.
Think more about your last paragraph, it undermines your argument from the previous two. If everything is the result of a physical process, then how is that an argument for consciousness being somehow fundamentally different or exceptional in our ability to recreate or simulate it?
Yes, I agree with that much. Hopefully I understand you, but I do believe we could create a "consciousness chip", so to speak, that performs the proper physical process to create real conscious experience, not just simulated. But it's unlikely to happen with existing computer hardware. (by "create", I'm not saying that consciousness emerges from nothing. Rather, there's this idea of "activating" consciousness, "turning the lights on".)
> Those atoms and molecules may just be bits of information themselves in some higher order computer. But there is nothing inherently "magical" about consciousness other than its uniqueness.
I don't buy the idea that the universe is a computer nor that we're living in a simulation. I do believe there is an objective reality.
> Heck there are fairly convincing arguments for pan-consciousness where it is a fundamental part of any set of information and is simply as complex as that information system. If you have a highly complex, self referential information system like our brains, then the complexity of consciousness is equivalent to our experience. The chinese box would have its LLM-like consciousness, which we would not recognize as our own, but could still a qualitative experience born from objective information states.
It's an idea that comes out of sci-fi. It's been used as a plot device in some episodes of Star Trek. But it's just that, science fiction.
> Think more about your last paragraph, it undermines your argument from the previous two. If everything is the result of a physical process, then how is that an argument for consciousness being somehow fundamentally different or exceptional in our ability to recreate or simulate it?
That's not what I said. What I'm saying is that computers are not magically conscious. Could we recreate consciousness with the right hardware? Sure, I don't see why not, as with the "consciousness chip" I described. But is consciousness magically going to emerge purely from a simulation without any special hardware? No, of course not.
I'm not talking about hand-wavy sci-fi or spiritual "the universe is a sentient being" stuff. I'm talking about one of the few solutions to what the nature of qualia is and why we aren't philosophical zombies. Why would the universe have some mechanism of consciousness available to it prior to those structures even existing? What is more convincing about a physical process being the literal mechanism for consciousness and not an informational process that has an abstracted physical basis? Could the literal mechanism not just be the information system considering you can create the same behaviors in a simulation as in reality?
Consciousness being some thing that just appears in the universe all of a sudden because evolution happened on this "neat trick" in physics seems more "magical" than it being an emergent property of increasingly complex systems, but a property that has always been there. We see the same emergent behavior appear in all sorts of systems built using different parts and/or simulated, as long as they are in the same context of rules. Why wouldn't that also hold true for the emergent behavior of consciousness?
The argument for it being this unique thing that only can happen via a strict physical process strikes me as dualist. Why is it a unique thing?
There's a physical basis for most things. A magnetic field forms because you have a bunch of particles with the same quantum spin. A stove burner glows red when you run electricity through it.
I'm not saying the process in the brain is unique. I'm not saying there's only one way to "activate" consciousness. Heck, for all I know, maybe a bolt of lightning experiences consciousness, even if just for a fraction of a second.
Allow me to give you a "physical" example. The brain operates near a critical threshold, that is, at the edge of chaos. The brain teeters on the edge of chaos without ever going chaotic. Well, not usually, because when the brain does enter chaos, you get a seizure. But there is growing evidence that this may be a prerequisite for consciousness; when the brain enters a less chaotic state, a person may lose consciousness.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8851554/
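The "edge of chaos" idea has a standard toy illustration: the logistic map, which flips from orderly to chaotic behavior as a single parameter crosses a critical value near r ≈ 3.57. This is only a cartoon of the criticality claim in the linked paper, not a model of any neural process:

```python
def logistic_orbit(r, x0=0.2, burn=500, keep=8):
    """Iterate the logistic map x -> r*x*(1-x) and return
    the post-transient orbit, rounded for readability."""
    x = x0
    for _ in range(burn):      # discard the transient
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(3.2))    # settles into a period-2 cycle: ordered
print(logistic_orbit(3.9))    # wanders without repeating: chaotic
```

The interesting regime, as with the criticality hypothesis, is the narrow band just below the chaotic transition, where the system is neither frozen nor seizing.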
Now let's talk about information processing. Most of what our brains do is subconscious. Only a small subset of what our brains do is actually conscious. What makes some information processing more special than others that some of it is conscious and the rest is not? Isn't it more likely that the way the information is processed is what makes the difference? I.e. different at a physical level.
> Only a small subset of what our brains do is actually conscious. What makes some information processing more special than others that some of it is conscious and the rest is not?
This is an excellent way of approaching the question, but I just as easily can say isn't it more likely that the difference is the pattern of the information and not the strict physical structure that makes it? Look at how many different physical structures and mechanisms we have for seeing, hearing, breathing, touching, etc across nature. Many of them are fundamentally different from each other, but end up in the same result of a sense.
Isn't it more likely that conscious thinking is like other senses in that it's a kind of information processing, rather than a specific mechanism of processing?
This also make it more likely to answer your question of why are some mental processes conscious and the majority are not, it would seem far more likely that the brain's neuronal structures (most of which are the same basic cell throughout the brain, just in different types of structures) discover different patterns rather than fundamentally different physical processes.
You can construct a computer using purely mechanical parts - gears, cams, springs, etc. The first calculators were purely mechanical.
So imagine you built an absolutely massive computer, made purely out of mechanical parts, that was capable of performing matrix multiplication and therefore running LLMs. It's technically computation, but should we seriously expect real consciousness to emerge from a purely mechanical computer just because it performed the right computations? Of course not, that would literally be a miracle.
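The thought experiment is well-posed because matrix multiplication really does reduce to operations gears and cams can perform. A sketch in which the only arithmetic primitive is integer addition, so every step could in principle be a mechanical counter advancing:

```python
def add_mul(a, b):
    """Multiply two non-negative integers using only repeated addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

def matmul(A, B):
    """Matrix product built from add_mul: the only primitive is addition."""
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                C[i][j] += add_mul(A[i][k], B[k][j])
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The computation carries over to brass unchanged; whether a barn-sized version of it running an LLM would be conscious is precisely the point under dispute.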
And yet a bag of proteins, mitochondria, water etc. somehow isn't?
> The point is, consciousness doesn't emerge purely from computation alone.
[citation needed]
> It's technically computation, but should we seriously expect real consciousness to emerge from a purely mechanical computer just because it performed the right computations?
Of course yes. I don't see why consciousness couldn't theoretically emerge even from computations made by a really bored human placing stones on an endless beach, or in an anthill with any single ant being completely oblivious of what's going on. Such consciousness will only be conscious of data that's provided to it as its input, but conscious nevertheless - just like we are only conscious of a specific representation of the real world as perceived by our flawed senses. I don't expect any meaningful difference there, other than our minds being at least a few steps ahead of anything we can build at this moment, which is hardly surprising given how little we know about brains and bodies even today.
Computation isn't magic, even when we barely understand it.
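The stones-on-a-beach intuition is backed by how little machinery universal computation needs. Rule 110, a one-dimensional cellular automaton whose entire update rule is an 8-entry table, is known to be Turing-complete, and each cell consults only itself and its two neighbors, as oblivious as any single ant:

```python
# Rule 110: the next state of a cell depends only on its 3-cell neighborhood.
# This 8-entry lookup table is the automaton's entire "program".
RULE = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
        (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells):
    """Apply one generation of Rule 110 on a ring of cells."""
    n = len(cells)
    return [RULE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 15 + [1]   # a single live cell on a ring of 16
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Anything a von Neumann machine computes can, in principle, be computed by enough generations of a row like this; whether such a computation could ever be conscious is the question being argued here.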
Lord give me strength.
> Koch suggests that exercise, meditation, and the occasional guided psychedelic might be beneficial to many people. Substances such as psilocybin can enhance one’s feeling of well-being and facilitate pure presence. It is rare to have a close brush with death, and mystical transformations typically come unbidden; in contrast, psychoactive substances—though not entirely predictable—are more consistently available and can be managed safely. Koch’s book implies that there’s immense psychological benefit of entering "the flow" through chemistry that might outweigh the small risks involved.
This is cool. I like that he considers states of consciousness for the everyman and is not talking about philosophical zombies (Angels on a Pinhead for the modern age).
Lots of people putting weight behind it, but also with a degree of hubris similar to the string theory guys of the late 90's. But just like string theory, it also has a lot of fair criticism and is mostly still in the hypothesis phase.
A "clearer" study would require starting from some kind of zero. But nobody wants to go live in a cave and whoever did might not be inclined to write an article about it.
What I think will work, is to examine all the different forms of consciousness — including not just animals, but also those humans we find unfathomable, alien, and impossible to empathise with.
On consciousness, I'm interested in integrated information theory.
Has he been living under a rock? Modern AI models already outpace the connectivity of the human brain, and are only getting bigger.
Modern estimates are that there are 100T neuronal connections in the adult human brain [0]. And that’s neuronal connections alone.
Astrocytes also make direct connections with neurons and can modify and induce neuronal activity [1]. There are 100B neurons [0] and ~20B astrocytes in the adult human brain [2, 3].
So this 100T connections estimate is only a small slice of the picture of human brain activity.
[0] https://medicine.yale.edu/lab/colon_ramos/overview/#:~:text=....
[1] https://neuraldevelopment.biomedcentral.com/articles/10.1186....
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/
[3] https://link.springer.com/article/10.1007/s00429-017-1383-5
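Put side by side with a frontier model, the counts in this comment are easy to lay out; note the ~1-trillion-parameter model size below is a round assumption for scale, not a figure from the article or this thread:

```python
# Brain figures are the estimates cited in the comment above;
# the model size is a round hypothetical for scale.
neurons = 100e9      # ~100 billion neurons
astrocytes = 20e9    # ~20 billion astrocytes
synapses = 100e12    # ~100 trillion neuronal connections

llm_params = 1e12    # hypothetical ~1-trillion-parameter model

# A weight is closer to a synapse than to a neuron, so the honest
# comparison is weights against synapses:
print(f"synapses per model parameter: {synapses / llm_params:.0f}")  # 100
```

And per the comment, even that 100x gap understates things, since astrocyte-neuron connections are not counted in the synapse estimate at all.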
"In human brains there are about 100 billion neurons (nerve cells), each capable of producing an electrical pulse up to perhaps a thousand times a second."
So maybe that's wrong. Thanks for the links. Even so, if we look out 10, 100, or 200 years from now, that level of complexity will be greatly surpassed by AI.
It is astonishing that we can mimic human reasoning so well with these LLMs, but the essence of cognition seems to still be missing.