The short-to-medium-term concerns mostly come down to human problems. AI and robotics have a multiplicative effect like other technologies, but the problems still originate in the primate nature of humans.
The long term issue that many people don't seem willing to mention out loud is that we will eventually make humanity obsolete and robots will literally take control.
The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture. The first challenge is for people to see that as a worthy goal.
Sometimes I think most of our problems come down to not being on the same page. And I wonder if somehow in the future we gradually become a tiny bit like The Borg.
So maybe we are headed towards a "meta-system transition" where we have some kind of direct links between groups of AI and humans that combine to form a more intelligent and effective organism in some way.
I guess I just came up with a Black Mirror episode concept.
semi-extrinsic 113 days ago [-]
I've said it before, and I'll say it again: there is absolutely zero evidence indicating that we are any closer to AGI than James Watt was to achieving nuclear fusion.
There are extremely well researched worms with only a few hundred neurons which we cannot yet simulate with anything resembling accuracy. How can that statement be true, if LLMs are close to delivering superhuman intelligence?
Jabrov 113 days ago [-]
Why do we have to simulate real neurons to achieve intelligence? We don't simulate a bird's bones and muscles to achieve flight.
LLMs are a huge step forward. Sure, they might not be the thing to ultimately deliver superhuman intelligence. But it's unfair to say that we're not any closer at all.
a_wild_dandan 113 days ago [-]
We don't. That person has no idea what they're talking about. Virtually no AI researcher would require "neuron simulations" for intelligence, nor agree with their AGI claims. (See: recent machine learning expert surveys, for example.)
Use HN comments for comic relief. When you see one worth taking seriously, you'll know it. Otherwise we're just giving oxygen to contrarians.
semi-extrinsic 113 days ago [-]
Happy to be providing some comic relief!
Just to clarify, I'm definitely not saying "neuron simulation" is required in any way. I'm just asking, how can we be very close to "solving" a significant part of the most complex brains, yet miles away from solving the simplest brains?
You should be able to answer that question (or a steelmanned version of it), not just ridicule strawmen.
ToValueFunfetti 113 days ago [-]
We built machines that could do calculations far faster than humans could long before we had any idea what neural configuration humans were using to do them. We gave those machines short and long term memory without understanding how human brains do the same. Then we wrote software on those machines that could outperform the best humans in chess without even the slightest inclination on how human brains do that. And then we started making software that vastly exceeded those early chess bots when we didn't even understand how that software performed its calculations (ie. neural networks). And now we have software that can read and write that we understand even less than we understood earlier NNs.
Empirically, it does not seem necessary to understand one version of a thing to produce a superior version. Why should the remaining unsolved cognitive tasks break that pattern?
kwhitefoot 113 days ago [-]
AI in practice doesn't have anything to do with simulation, nor with 'solving a significant part of the most complex brains'.
The immediate threat is that humans will use the leverage that LLMs give to replace and to influence other humans, in other words to gain power over us.
Whether this is AGI or not is beside the point.
semi-extrinsic 113 days ago [-]
Ah, but the replacing humans issue is a completely different "threat", and one we've overcome many many times since the Luddites and even before. Every time we make something more efficient we face this exact problem, and we eventually solve it.
As for the influencing part, what specific actions to gain power over us can be achieved now with LLMs, that could not be achieved before using a few tens of thousands of paid humans?
kwhitefoot 110 days ago [-]
> what specific actions to gain power over us can be achieved now with LLMs, that could not be achieved before using a few tens of thousands of paid humans?
It's being able to do it without having to employ the tens of thousands of humans that makes it different. With the LLM you are able to react much faster and pay fewer people more money.
gambiting 112 days ago [-]
>>Why do we have to simulate real neurons to achieve intelligence? We don't simulate a bird's bones and muscles to achieve flight.
Is this... a reference to Greg Egan's Permutation City? Because that was the exact argument some characters used against real AGI (which, let's assume, they had; it's a sci-fi book). Basically it went along the lines of "even though we can simulate digestion at the molecular level, nothing is actually digested. Why should simulating neuron activity create actual thoughts and intelligence?"
blastro 113 days ago [-]
Not trying to be pedantic, but:
> We don't simulate a bird's bones and muscles
Isn't really true, is it?
We simulate the bones with an airframe, and we simulate the muscles with a prop/jet. The wings are similarly simulated.
r2_pilot 113 days ago [-]
A more suitable word would be that we "emulate" the bones, etc., rather than simulate, which to me implies testing against a replica of a thing. But I don't really think emulate works here either, because they're not especially analogous; these are fundamentally different approaches to flight, so any form following the function would be incidental at best.
By your definition of simulate, I think artificial neural networks are absolutely on their way to simulating intelligence.
suriya-ganesh 113 days ago [-]
The type of propulsion is not, though.
Nothing in nature has 360° rotary motion.
An airplane is not an exact simulation of a bird, only an approximation.
yes, the wings flap. the airframes are made of bone.
splwjs 113 days ago [-]
We've had Markov chain generators for a while; having enough computing power to let them regurgitate Wikipedia, Reddit, and Stack Overflow content is not "a huge step towards AGI".
Jabrov 113 days ago [-]
I disagree.
It's true that Markov chain generators have existed for years. But historically their output was usually just a cute thing that gave you a chuckle; they were seldom as generally useful as LLMs currently are. I think the increase you mention in compute power and data is itself a huge step forward.
But also transformers have been super important. Transformer-based LLMs are orders of magnitude more powerful, smarter, trained on more data, etc. than previous types of models because of how they can scale. The attention mechanism also allows them to pay attention to way more of the input, not just the few preceding tokens.
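For a concrete contrast, here is a minimal sketch of the kind of bigram Markov generator being discussed (a toy illustration; the corpus and names are made up, not taken from any of the models above). It conditions each next word on only the single preceding word, which is exactly the limitation the attention mechanism removes by letting the model condition on the whole context window:

    import random
    from collections import defaultdict

    def train_bigram(text):
        """Map each word to the list of words observed to follow it."""
        words = text.split()
        table = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            table[prev].append(nxt)
        return table

    def generate(table, start, length=10):
        """Sample each next word using only the single preceding word."""
        out = [start]
        for _ in range(length):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    model = train_bigram("the cat sat on the mat and the dog sat on the rug")
    print(generate(model, "the"))

A transformer still predicts one token at a time, but each prediction can attend to every earlier token in the window, which is what makes the scaling-up meaningful.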
dijit 113 days ago [-]
I think you missed OPs point.
If you want something useful, then we're getting closer.
AGI is something specific: as a prerequisite, it must understand what is being asked. What we have now is a puppet show that makes us humans think the machine is thinking, similar to Markov chains.
There is absolutely some utility in this, but it's about as close to AGI as the horse-cart is to commercial aircraft.
Some AI hype people are really uncomfortable with that fact; I'm sorry, but that reality will hit you sooner rather than later.
It does not mean what we have is perfect, cannot be improved in the short term, or that it has no practical applications already.
EDIT: downvoting me won't change this; go study the field of academic AI properly, please
drdeca 113 days ago [-]
AGI is something fairly specific, yes, but depending on what you mean by “understand”, I don’t think it necessarily needs to “understand”? To behave (for all practical purposes) as if it “understands” is good enough. For some senses of “understand” this may be the same thing as for it to “understand”, so for those senses of “understand”, yes it needs to “understand”.
It seems clear to me that, if we could programmatically sample from a satisfactory conditional probability distribution, that this would be sufficient for it to, for all practical purposes, behave as if it “understands”, and moreover for it to count as AGI. (For it to do so at a fast enough rate would make it both AGI and practically relevant.)
So, the question as I see it, is whether the developments with ANNs trained as they have been, is progress towards producing something that can sample from a conditional probability distribution in a way that would be satisfactory for AGI.
I don’t see much reason to conclude that they are not?
I suppose your claim is that the conditional probability distributions are not getting closer to being such that they are practically as if they exhibit understanding?
I guess this might be true…
It does seem like some things would be better served by having variables with a fixed identity but a changing value, rather than just producing more variables? I guess that’s kind of like the “pure functional programming vs not-that” distinction, and of course as pure functional programming shows, one can still compute whatever one wants while only using immutable values, but one still usually uses something that is as if a value is changing.
And of course, for transformer models, tasks that take more than O(N^2) or whatever (… maybe O(N^3) because on N tokens, each is processed in ways depending on each pair of the results of processing previous ones?) can’t be done in producing a single output token, so that’s a limitation there..
I suppose that the thing that is supposed to make transformers faster to train, by making it so that the predictions for each of the tokens in a sequence can be done in parallel, kinda only makes sense if you have a ground truth sequence of tokens… though there is the RLHF (and similar) where the fine-tuning is done based on estimation of a score on the final output… which I suppose possibly neither is great at getting behavior sufficiently similar to reasoning?
(Note: when I say “satisfactory probability distribution” I don’t mean to imply that we have a nice specification of a conditional probability distribution which we merely need to produce a method that can sample from it. But there should exist (in the abstract (non-constructive) mathematical sense) probability distributions which would be satisfactory.)
adrian_b 113 days ago [-]
I do not consider "understanding", which cannot be quantified, as a feature of AGI.
In order for something to qualify as AGI, answering in a seemingly intelligent way is not enough. An AGI must be able to do the following things, which a competent human would do: given the task of accomplishing something that nobody has done before, conceive a detailed plan for how to achieve it, step by step. Then, after doing the first steps and discovering that they were much more difficult or much easier than expected, adjust the plan based on the accumulated experience, in order to increase the probability of reaching the target successfully.
Or else, one may realize that it is possible to reformulate the goal, replacing it with a related goal which does not change the usefulness of reaching it much, but which can be reached by a modified plan with much better chances of success. Or else, recognize that at this time it will be impossible to reach the initial goal, but there is another, simpler-to-reach goal that is still desirable, even if it does not provide the full benefits of the initial goal. Then, establish a new plan of action to reach the modified goal.
For now this kind of activity is completely outside the abilities of any AI. Despite the impressive progress demonstrated by LLMs, nothing done by them has brought a computer any closer to having intelligence in the sense described above.
It is true however, that there are a lot of human managers who would be equally clueless with an LLM, on how to perform such activities.
dboreham 113 days ago [-]
Very long time AI skeptic (since 1980) but I believe you're wrong. It turns out we don't need to figure out exactly how a biological brain works. It's just nature's solution for implementing matrix operations (greatly simplified take, obviously). Having hit on approaches that produce similar results to biological systems, we now don't need to focus so much on emulating them.
I do suspect however that there's something to the biological experience of "being the life support system" for the brain, that significantly affects the training process. It might be challenging to simulate that.
andyjohnson0 113 days ago [-]
I disagree. Machine AGI does not have to be based on the same foundations as biological intelligence.
Jabrov 113 days ago [-]
Very relevant quote from the article:
> In my view comparing AI's cognitive, creative or intellectual powers to those of the human brain is not especially helpful. Think of the car. Humans can't run as fast as horses. But we can build machines that far outpace them. We do not achieve this by imitation. We don't engineer mechanical legs and hooves of the kind that took evolution 34 million years of tinkering and modification from eohippus to the present day. We go a completely different way and we come up with something that doesn't at all exist in nature: the wheel. And instead of a mechanical heart and mechanical muscles Karl Benz offers us the internal combustion engine and crankshaft. Ditto with flying, and travelling across or under the waves. The commonly held idea that the best engineering mimics nature is largely misguided. Yes, we sometimes look to the natural world for inspiration but in the big things, structurally, we go our own way. And as a result we can fly higher and faster than birds, move over land quicker than a cheetah, swim over and under the water faster and further than a salmon or a whale and so on.
semi-extrinsic 113 days ago [-]
Sure, that's OK. But taking a step back, how can we be close to solving one system, yet far away from solving another system that is a million times simpler by every metric?
gizmo 112 days ago [-]
We cannot perfectly replicate how real-life worms behave because we don't have the necessary training data. We can create AI worms based on fake behavioral data, of course, but that will not result in perfect AI replicas of real-world worms. We do have untold terabytes of the written word, and that's sufficient to bootstrap intelligence with. It turns out that going to space, splitting the atom, and creating general superintelligence is easier than understanding insect behavior or replicating exactly how a bumblebee flies. Our intuitions about the relative difficulty of unsolved scientific questions are pretty much useless. Some seemingly easy things are in fact impossibly difficult, and the inverse is true as well.
eikenberry 113 days ago [-]
I agree that they do not, but our current AIs are based on neural nets/brain structure, i.e. biological intelligence, and so the comparison seems valid.
marjancek 111 days ago [-]
Viruses are not super intelligent, yet one could kill us all.
The biggest danger I see is a widespread AI with a set of badly defined goals, not a particularly smart and evil one.
jodrellblank 113 days ago [-]
The human brain needs 80 billion neurons to hold 5-7 numbers in memory and add and multiply them. How can a MOS integrated circuit from 1965 with only 250 transistors possibly be turned into a faster and more capable arithmetic calculator when it is so woefully underpowered by comparison?
> "there is absolutely zero evidence indicating that we are any closer to AGI than what James Watt was to realizing nuclear fusion"
James Watt lived before Rutherford split the atom; he didn't know atoms could be split or fused, and he was not trying for nuclear fusion. We do know that information exists and can be processed. Still, James Watt was closer to large-scale controlled release of energy than humans before the control of fire.
We know that human-level intelligence is possible, in a way that Watt didn't know fusion was possible. We have looked for other mechanisms hiding in the brain - Penrose and Hameroff's ideas about quantum effects in microtubules, for one - and rejected them. We've pretty closely bounded the amount of energy a brain uses, what it can sense, how it can input and output, what it's made of, and through what mechanisms of electric pulses and neurotransmitters it seems likely to operate. We've identified brain regions which cause predictable effects when damaged (on language, memory, executive decision making). We've dissected it, photographed it, x-rayed it, imaged it in layers, imaged it while active.
We've represented facts as data, written code which uses genetic algorithms to solve problems, written code which writes code to solve problems, written code which crunches data in large quantities to pull out higher dimensional patterns, code which appears to identify concepts from that, code which does specialised tasks such as face recognition, language recognition, language generation, person recognition.
It's pretty indefensible to say "absolutely zero evidence indicating we are any closer than people of 10k BC, people of 0 AD, people of 1800, or people of 1900". You're basically doing the God of the gaps argument: as we map the brain physically in more and more detail, and encroach from the data and information-processing side, the gap between them where intelligence could be hiding appears to be shrinking. But with no evidence or support, you suggest something else is hiding there which makes the gap a Jaunt so large that encroachment from either side is immeasurably tiny. And you hide it behind the "G" for general.
Twenty years ago you could talk to a nonsense chatbot. Ten-ish years ago you could talk to a somewhat coherent GPT. Five years ago you could talk to a surprisingly coherent ChatGPT. Today people collaborate with Claude, ChatGPT, Gemini, ask them to explain concepts, provide examples and get them to follow on, offload work to them, learn from them, have them proof read documents, have them rewrite in different styles.
I think Hofstadter said he wasn't interested in computers because they don't have concepts. Here's an article in the ACM last year saying something similar: "Neural networks can look at a picture and tell whether it contains a dog, a cat, or a car, but they do not have a rich understanding of any of those categories. Take the concept of a bridge. Humans can extend the notion of a bridge to abstract levels. We can talk about a bridge between people or bridging the gender gap. We can instantly understand what these expressions mean because we have a rich mental model of what a bridge can be." - https://cacm.acm.org/news/artificial-intelligence-still-cant...
Asking ChatGPT:
Me: "Hi, a bridge is a big metal thing over a road or river;
what does it mean to "bridge the gender gap"?
Can you explain briefly, please?"
ChatGPT: "To "bridge the gender gap" means to reduce the differences
or inequalities between men and women, particularly in areas
like pay, opportunities, education, or representation.
It's about creating a more equal and fair situation where both
genders have the same access and opportunities. Just like a
physical bridge connects two places, "bridging the gender gap"
connects the unequal sides, making them more balanced."
Me: "Interesting; can you suggest other non-physical things
which could be "bridged" please?"
ChatGPT: "Certainly! Here are some examples of non-physical things that can be "bridged":
Cultural gap – Reducing misunderstandings or differences between people from different cultures.
Communication gap – Improving understanding between people who are having difficulty communicating.
Generation gap – Narrowing the differences in attitudes and perspectives between older and younger generations.
Wealth gap – Reducing economic inequality between rich and poor.
Knowledge gap – Closing the gap in access to information or education between different groups.
Digital divide – Bridging the gap between those who have access to technology and the internet and those who don’t.
Trust gap – Rebuilding or enhancing trust between individuals or groups, like companies and their customers.
In each case, "bridging" is about connecting or bringing closer
two sides that are separated by some form of difference or inequality.
Seems to have some grasp of the general concept of "bridging", more than Eliza or Eugene Goostman, doesn't it? "I have tried some letter-string problems in GPT-3, and in some cases it could solve them. It learned, for example, the concept of successorship. Not perfect, not robust, but I found it still surprising that it can do this. Therefore, I don't agree that these systems are only 'stochastic parrots', as some scientists have called them. I have seen evidence of GPT building simple internal models of situations," says the expert in the article.
I'm not saying this is an AGI or will lead directly to AGI, but stubbornly refusing to accept this obvious progress as any progress towards machine intelligence at all, calling it "absolutely zero" evidence of progress seems wilfully blinkered.
Do you genuinely put us absolutely no closer, not a single step closer, to AGI than the Mechanical Turk or the people of 50k BC?
vundercind 113 days ago [-]
It doesn’t have a grasp of bridging at all. It tagged a few of your terms as important based on its training data, then did a weighted search for words to respond, then as it built the response, continued applying importance-weighting to its “own words” as it did further searches, to avoid writing gibberish or going off some weird direction.
It’s still not much more than Markov chains, just with some clever anti-nonsense filtering and importance-weighting. There’s no “understanding”, nor anything particularly close to it.
It’s impressive we’ve accomplished so much with something that is so thoroughly, entirely stupid, in fact. They are useful tools, for sure.
jodrellblank 113 days ago [-]
> "It tagged a few of your terms as important based on its training data, then did a weighted search for words to respond, then as it built the response, continued applying importance-weighting to its “own words” as it did further searches, to avoid writing gibberish or going off some weird direction."
what specifically would you or I do different, apart from having less training data?
> "There’s no “understanding”, nor anything particularly close to it"
any evidence for this claim? It explained, it responded in context to a followup question, it gave other relevant examples, by what measure does it "not understand" but I "do understand"?
consteval 112 days ago [-]
> what specifically would you or I do different, apart from having less training data?
Create things that are not in the training data. I could coin a new phrase that uses "bridge"; I doubt an LLM can, or would have any motivation to. LLMs understand data and patterns, but they don't understand the meaning behind them. They can't be imaginative.
For example, an LLM can recite Shakespeare and can even write new works in Shakespeare's style. But it cannot be a new Shakespeare; that is one level above it. An image generator might be able to make a painting in Picasso's style, but it cannot be a new Picasso - that is, create a style never before seen and explain why they did it and what emotions it conveys.
jodrellblank 112 days ago [-]
Me: Hi, can you use the concept "bridge" in a
never-seen-before sentence, where it still makes
sense? Please make the sentence very unusual,
but still coherent. Perhaps a couple of examples?
ChatGPT: Sure! Here are a couple of unusual but coherent
sentences using the concept of "bridge":
The stars whispered their ancient stories across the
sky, building a bridge of forgotten time between my
dreams and the universe’s memories.
Each thought she left unspoken laid a ghostly bridge
between us, invisible but weighty, made of all the
words that could have been.
Both sentences play with abstract ideas, pushing the
concept of a bridge beyond its typical use but
maintaining coherence through metaphor and vivid
imagery.
----
It's impossible for me to know everything in the training set but there are no Google results for "building a bridge of forgotten time" or "laid a ghostly bridge between us". If you told me those and said you made them up and didn't copy them from somewhere, I'd have to either find the source you copied or believe you.
vundercind 110 days ago [-]
Novelty is not the same as understanding. I think it's telling that one of the two probably works at least as well with something that's the opposite of a bridge: gulf, chasm, wall. It just picked a pattern for an analogy and stuck your word in it, mad-libs style. The fact that it's kind of the opposite of what we would expect makes it a bit poetic, in that one's tempted to pursue what it might mean, but of course the LLM didn't mean anything (though we still might find meaning).
Its patterns encoded two items and “between” being likely to occur, anyway.
vundercind 113 days ago [-]
I’ll be a lot more open to arguments that the technology described in the Transformer Paper becomes capable of things like understanding when you throw enough data and computing power at it when the state of the art moves beyond, “yeah, but don’t human brains just randomly guess at the right thing all the time?”
I dunno, do they do more than that? Seems like it to me.
Does a Chinese Room “understand”? I say no, but hey, maybe it does.
If I laboriously do the math by hand, taking care never to actually know the informational content of any of the input or output myself, does my scratch paper understand your questions? If the output’s just as good as ChatGPT? Where’s the part that understands?
fragmede 113 days ago [-]
If we open up a human skull, what would we find? A clump of grey matter, a wet piece of meat. As we dissect it further and zoom in, what would we find? Neurons. Zoom in further and we get a mess of chemicals. As we look at the pile of cells that make up the brain, we have the same fundamental question: where's the part that understands?
Yet somehow, from the connections of all of those cells, and neurotransmitters, there's consciousness and something there that does understand (and think and reason and love). If, instead of LLM architecture, on more powerful computers than we have now, we simulated all of those neurons and their connections, would we have a computer that understands? If we then did those computations on scratch paper, where would the piece that understands be on that piece of paper?
The sum of a thing's parts can be greater than the individual parts. Whether or not ChatGPT understands is a whole big question, but we'll have no more luck dissecting LLMs to find out if it does than if we dissected a human brain.
vundercind 113 days ago [-]
You can simulate a brain to some degree, maybe, but you won’t get the whole thing short of just… using an actual brain, that is physically identical to the one you’re simulating, down to at least the quantum level (and who knows what else?). It’s not at all certain that this can be reduced to math.
The calculations for operating an LLM definitely can be reduced to math. No reduction needed, in fact—they are math.
This isn’t an argument (to my mind, anyway) against even the possibility of machine whatever-you-like (consciousness, understanding, whatever) but against the idea of equivalence because we could simulate either one—in fact, we can’t simulate one. The other, essentially is already simulation, no further steps needed.
What you’re getting at (if I may attempt to present your argument) is that we could reduce either to its components and make it look ridiculous that it might be doing anything particularly advanced.
However, in fact we definitely can reproduce exactly what one of them does with a bunch of thick books of lookup tables and some formulas that we could mechanically follow by hand, and it might even be possible to do so in practice, not just hypothetically (at significant, but not impossible, expense) while we do not know we can do that for a human brain, short of just using exactly the brain that we want to “simulate”.
jodrellblank 111 days ago [-]
> "It’s not at all certain that this can be reduced to math."
It isn't certain that it can be, but can you give any plausible reason why the Universe might allow understanding to (meat + electric patterns) and deny it to (silicon + electric patterns)?
vundercind 110 days ago [-]
Maybe! But some part of it would have to be attempting understanding, not just looking at word patterns.
jodrellblank 112 days ago [-]
> "I dunno, do they do more than that? Seems like it to me."
When I said "I'm not saying this is an AGI" and you reply with "I dunno human brains do more than ChatGPT" it feels like you haven't understood the discussion - that part was never contested.
> "some clever anti-nonsense filtering"
Eliezer Yudkowsky wrote 'The opposite of intelligence isn't stupidity'. On an A/B test, stupid is guessing randomly and that approach scores 50%. Scoring 0% takes as much intelligence as 100% because it requires knowing the right answer to be able to avoid it every time.
Being able to identify nonsense is sense. At the risk of being tautological, "clever" is clever.
If the neurons in your brain are the scratch paper in the Chinese room, each one isn't aware of the content of the light waves or the finger muscle signals, and you conclude the Chinese room doesn't understand, shouldn't you conclude that your brain doesn't understand? If your brain does understand shouldn't you conclude the Chinese room would understand?
I claim that ChatGPT being able to explain bridging and give further examples is behaviour which demonstrates more understanding than a rock, than a calculator, a wordlist, a spellchecker, a plain Markov chain has.
You say there's "no understanding or anything close to it" - how would ChatGPT's response look different if it did understand the concept of bridging?
If you cannot suggest any way its output would look different to how it looks now and instead have to resort to changing the subject, shouldn't you retract that claim?
> "Does a Chinese Room “understand”? I say no"
Then you must say a human doesn't understand. For what else is there in a human brain except a finite amount of learned behavioural rules for signal inputs and outputs? Learned over a billion years of evolution in the structure, and filled in by a lifetime of nurture.
vundercind 112 days ago [-]
> If you cannot suggest any way its output would look different to how it looks now and instead have to resort to changing the subject, shouldn't you retract that claim?
For one thing, I expect we’d not see so many cases of them chasing (if you will) the prompt and request into silliness. The code attempts to satisfy prompts in a transparently mechanical fashion, which is part of why they so gleefully (if you will) mislead. There’s no understanding. You can ask them to correct and they might, but they can also be induced to correct the already-correct, so that means nothing. To the extent we fix that, it’s not by adding any factor that might represent understanding, it’s further prompting that amounts to “follow these patterns slightly differently”. The fix isn’t, so far, “teaching” them to understand. Maybe we’ll get there! But we don’t appear to be anywhere near that yet.
> Then you must say a human doesn't understand. For what else is there in a human brain except a finite amount of learned behavioural rules for signal inputs and outputs? Learned over a billion years of evolution in the structure, and filled in by a lifetime of nurture.
The thing about the Chinese Room is that we comprehend the entire process, and there’s no room for some unknown factor affecting the output—or for a known factor that might be processing something like what we mean by understanding (let alone consciousness, say).
Every single part of what an LLM does can be replicated with big books of lookup tables, dice, and a list of rules. There’s nowhere for anything to do the understanding to exist. It’s not that we have to be confused by part of it for that to be there—I’m not saying mystery is a necessary component—just that this process doesn’t have a place for that to be.
jodrellblank 112 days ago [-]
> "For one thing, I expect we’d not see so many cases of them chasing (if you will) the prompt and request into silliness."
In the one example I gave you saw one output and declared it "not understanding the concept of bridging". I'm asking specifically that output, how would it look different if ChatGPT had some understanding of the concept of bridging? You're back to arguing "it's not human level!" which was not my claim. My claim is that it's above zero level. In another comment I asked it to use the concept of bridging in new ways, and it provided sentences which have no hits on Google but are plausibly the kind of thing I might see in a book from a human author.
> "There’s no understanding"
Say to your pet "I like it when you do human-like things such as standing on two feet. Come up with more human-style things for more treats" and it won't. You can ask ChatGPT to come up with more uses of the bridging concept, and it does. That is demonstrating understanding at higher than rock level and higher than rat level, and you can't reject that evidence just by repeatedly saying "there's no understanding there's no understanding there's no understanding".
> "they can also be induced to correct the already-correct, so that means nothing."
So can I; if my boss tells me there is an error and I need to correct it, I might correct a non-error to please them. Knowingly ("I'll change this part from correct to wrong if that pleases them") or unknowingly ("if they tell me there is an error there must be one, I'll take a guess that this bit is wrong and put something else here"). Does that show I have no understanding?
> "Every single part of what an LLM does can be replicated with big books of lookup tables, dice, and a list of rules. There’s nowhere for anything to do the understanding to exist."
You're doing the God of the Gaps argument with the human brain - an LED screen is RGB pixels, there's nowhere for a picture of a cat to exist separately from bright and dark pixels. A book is printed characters, there's nowhere for a story to exist separate from blobs of ink on paper. A brain is meat grown from a foetus and uses ~20 watts of energy; if the blood supply is cut off it dies, if it gets too hot or cold it dies; there are many areas which can be damaged and harm something like leg movement, but there is no single area which can be damaged which stops 'understanding' and leaves everything else unchanged; and there are no examples of people being decapitated, having no brain, having brain death, and still having 'understanding' provided by whatever other thing you are implying exists and does understanding.
There's nowhere for anything to do the understanding to exist, unless there is a) new physics which aligns perfectly with every observation we have about the brain but also augments it and adds some magical 'understanding' thing which can't be done or simulated in software, b) something non-physical such as a soul which is tied closely to the meat and powered by the food and blood and can't be tied to silicon because reasons, or c) ??? As far as I can see this isn't reasoning from anything more convincing than you not wanting to accept the Occam's Razor simpler explanation that a purely physical information processing system can understand.
(Or that humans don't understand and it's all some weird illusion; the picture of the cat is not in the LED screen, it is in the eye of the beholder. The understanding isn't in your behaviour, it's in the beholder's interpretation, I believe you understand because you demonstrate the behaviours of understanding. We are seeing intelligence in others where there isn't any. And that view turned on ourselves is our own perception of our own understanding - we see ourselves identifying patterns, extrapolating patterns, continuing coherent sentences, and conclude that we must have 'understanding' as a thing separate from those behaviours).
> "The thing about the Chinese Room is that we comprehend the entire process, and there’s no room for some unknown factor affecting the output"
We don't comprehend the entire Chinese Room; the instructions that Searle is following are a massive handwave. Does following the instructions require Searle to make human judgements on where to branch? Then it's offloading understanding onto his human brain. Does it not require that but it still outputs coherent responses? Then the instructions must encode intelligence in them in some way - if intelligent behaviour doesn't demonstrate intelligence we're in non-scientific nonsense land.
Peter Cochrane wrote about 'dying by installments' of a human turned into a cyborg replaced bit by bit, Ship of Theseus style. We can do similar and make up a Cochrane's Chinese Brain - instead of a neuron firing and affecting the connected neurons, it raises an alert and Searle walks over and writes down the firing pattern on a scratch pad, walks to all the other relevant neurons, and taps in the firing pattern on an input device, without understanding the information content of the firing pattern. Does the brain keep responding coherent Chinese but no longer understand Chinese?
vundercind 110 days ago [-]
I’m not doing God of the Gaps—it’s simply that in the case of LLMs there’s both nothing we know of doing understanding and also no gaps where it might be. We both know what it’s doing, and that it’s not doing understanding.
Let’s try this:
We could apply an LLM to made-up language and corpus that does not actually carry meaning and it would do exactly what it does with real languages.
“Well maybe you accidentally encoded meaning in it. We could always, say, cryptoanalyze even an alien language and maybe be able to come up with some good guesses at meaning”
Maybe we could. But now imagine also you have no “knowledge” whatsoever except the trained patterns from that language. Like, no understanding of how to do cryptoanalysis, or linguistics, or what a planet is. Or an alien. All you’re doing is guessing at patterns, based on symbols that you aren’t even attempting to understand and have no basis for understanding anyway. That’s an LLM.
I think people are assigning way too much power to language sans… all the rest of what you need to derive meaning from it. None of what’s going into or coming out of an LLM needs to carry any meaning for it to do exactly the same thing it does with languages that do.
To the extent that an LLM has a perspective (this is purely figurative) all languages are gibberish alien languages, while also being all that it “knows”.
> We don't comprehend the entire Chinese Room; the instructions that Searle is following are a massive handwave. Does following the instructions require Searle to make human judgements on where to branch? Then it's offloading understanding onto his human brain. Does it not require that but it still outputs coherent responses? Then the instructions must encode intelligence in them in some way - if intelligent behaviour doesn't demonstrate intelligence we're in non-scientific nonsense land.
I remain stubbornly unconvinced that simulating a real process (by hand or otherwise) is the same thing as it actually happening with real matter and energy, even setting aside that the most efficient way to achieve it is to… not simulate it, and use real matter to actually do the things.
It’s why I find the xkcd “what if a guy with infinite time and an infinite beach and infinite rocks moved the rocks around in a way that he had decided simulated a universe?” thing interesting as an example but also trivial to solve: all that happens is he moved some rocks around. The meaning was all his, it doesn’t do anything.
jodrellblank 110 days ago [-]
> "thing interesting as an example but also trivial to solve: all that happens is he moved some rocks around. The meaning was all his, it doesn’t do anything."
You opened by saying you aren't doing God of the Gaps, but here you are doing it. Brains move chemicals and electrical signals around. That doesn't do anything, apparently. Matter doesn't do understanding. Energy doesn't do understanding. Mathematical calculations don't do understanding. Neural networks don't do understanding. See how Understanding is retreating into the gaps? Brains must have something else, somewhere else, which does understanding? But what, and where? It's a position that becomes less tenable every decade as brains get mapped in finer detail, leaving smaller gaps, and non-brains gain more and better human-like abilities.
> "there’s both nothing we know of doing understanding .. it’s not doing understanding."
It is. The math and the training and the inference is the thing doing understanding. Identifying patterns and being able to apply them is part of what understanding is, and that's what it's doing. [Not human level understanding].
> "We could apply an LLM to made-up language and corpus that does not actually carry meaning and it would do exactly what it does with real languages."
We do that with language too; the bouba/kiki effect[1] is humans finding meaning in words where there isn't any. We look at the Moon and see a face in it: Pareidolia[2] is 'the tendency for perception to impose a meaningful interpretation on a nebulous stimulus so that one detects an object, pattern, or meaning where there is none'.
We are only able to see faces in things because we have some understanding of what it means for something to 'look like a human face'. "We see a face where there isn't one" is no evidence that we don't understand faces and so "an LLM would find patterns in gibberish" is no evidence that LLMs don't understand anything.
> "All you’re doing is guessing at patterns, based on symbols that you aren’t even attempting to understand and have no basis for understanding anyway. That’s an LLM."
Trying to build patterns is "what attempting to understand" is! You're staring right at the thing happening, and declaring that it isn't happening. "AI is search" said Peter Norvig. The Hutter Prize[3] says "Being able to compress well is closely related to intelligence as explained below. While intelligence is a slippery concept, file sizes are hard numbers. Wikipedia is an extensive snapshot of Human Knowledge. If you can compress the first 1GB of Wikipedia better than your predecessors, your (de)compressor likely has to be smart(er). The intention of this prize is to encourage development of intelligent compressors/programs as a path to AGI". Compression is about searching for patterns.
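To make the compression-prediction link concrete, here is an illustrative toy sketch (not from the Hutter Prize materials; the predictors and data are made up): an ideal coder needs about -log2(p) bits for a symbol the model assigned probability p, so a model that predicts the data better compresses it into fewer bits.

    import math

    def bits_needed(sequence, predict):
        """Approximate size of an ideal code for `sequence`, given a model:
        predict(prefix) -> dict of next-symbol probabilities."""
        total = 0.0
        for i, symbol in enumerate(sequence):
            p = predict(sequence[:i]).get(symbol, 1e-9)
            total += -math.log2(p)
        return total

    uniform = lambda prefix: {"a": 0.5, "b": 0.5}   # knows nothing about the data
    skewed = lambda prefix: {"a": 0.9, "b": 0.1}    # has learned that 'a' dominates

    data = "a" * 9 + "b"
    print(bits_needed(data, uniform))   # 10.0 bits
    print(bits_needed(data, skewed))    # ~4.7 bits: better prediction, smaller code

That is the sense in which scoring compressors by output size rewards better predictive models of the text.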
Understanding is either magic, or it functions in some way. Why not this way?
> "all languages are gibberish alien languages, while also being all that it “knows”."
If we took some writing in a human language that you don't speak, you could do as much "predict the next word" as you want, take as much time as you need, and put together an output. The input is asking for a reply in formal Swahili which explains yoga in the style of Tolkien with Tourette's, but you don't know that. The chance of you being able to hit a valid reply out of all possible replies by guessing is absolutely zilch. But you couldn't do it by "predicting the next word" either: how would you predict that the reply should be in Swahili if you can't understand the input? How would you do formal Swahili without understanding the way people use Swahili? Conversely, if you could hit on a good and appropriate reply, it would be because your study of "predicting the next word" had given you some understanding of the input language and Swahili and yoga and Tolkien's style and how Tourette's changes things.
> "I remain stubbornly unconvinced that simulating a real process (by hand or otherwise) is the same thing as it actually happening with real matter and energy"
Computers are real matter and energy. When someone has a cochlear implant, do you think they aren't really hearing because a microphone turning movement into modulated electricity is fake matter and fake energy, and an eardrum and bones doing it is real matter and real energy? Yes it's true that you can't get on a simulation of a plane and fly to New York, but if you see the output of an arithmetic calculation there's no way to tell if it was done with a redstone computer in Minecraft or with Python or with brain matter. (Is it possible for arithmetic to be not-simulated?).
> You opened by saying you aren't doing God of the Gaps, but here you are doing it.
No! There’s a difference between a thing happening, and symbols we decided mean something being manipulated. The assigned meaning isn’t real in the way an actual process is. A flip-book of a person jumping rope isn’t a person jumping rope.
jodrellblank 109 days ago [-]
What do you think is the "real" version of understanding which brains do, and where / how do you think brains do it?
DougN7 113 days ago [-]
The scratch paper is a fantastic analogy. I’m going to use it.
empath75 113 days ago [-]
You do not understand how GPT models work. It does more than 'importance weighting' and an absolutely massive amount of knowledge about the world is encoded in those weights.
---
I asked ChatGPT to help you better understand how it works:
There are a few common misconceptions in that comment regarding how large language models (LLMs) like GPT-4 actually work, so let's clarify those:
Markov Chain Comparison:
LLMs are not based on Markov chains, though they might seem similar at a high level due to their ability to predict the next word in a sequence.
Markov chains rely on simple probabilistic transitions between states, often based on a very limited "memory" of previous states (e.g., the previous word or two). LLMs, on the other hand, use a transformer architecture, which allows them to consider long-term dependencies and relationships in text. This means they can account for the context of many preceding words, sentences, or even paragraphs when generating responses.
No "Understanding":
While it’s true that LLMs do not have consciousness, self-awareness, or human-like understanding, the term “understanding” can be misleading. They operate by modeling patterns in language, but in a highly sophisticated way.
LLMs capture a deep representation of the relationships between words, sentences, and broader concepts through billions of parameters, giving them a kind of statistical "understanding" of language. This enables them to generate coherent and contextually appropriate responses, even if it’s not the same as human comprehension.
Importance Weighting and Search:
LLMs do not search through predefined sets of phrases or apply “importance-weighting” to words in the way described. They generate text dynamically by using the probabilities derived from the training data they’ve seen.
The model calculates probabilities for each possible next word in the sequence, taking into account the entire context (not just key terms), and selects the next word based on these probabilities. This process is not about tagging words as important but about predicting the next most likely word or phrase given the context.
Not Just "Anti-Nonsense Filtering":
The quality of LLM output doesn’t arise from filtering out nonsense but from the underlying model’s ability to capture the complexity of human language through its learned representations. While there's a certain degree of training that discourages incoherent outputs, the coherent responses you see are mostly due to the model's training on vast, high-quality datasets.
"Thoroughly Stupid":
It's more accurate to say that LLMs are highly specialized in a particular domain: the patterns of human language. They excel at generating contextually relevant responses based on their training data. While they lack human-style cognition, calling them "stupid" overlooks the complexity of what they achieve within their domain.
In summary, LLMs use advanced neural networks to predict and generate language, capturing sophisticated patterns across large datasets. They don't "understand" in a human sense, but their ability to model language goes far beyond simple mechanisms like Markov chains or weighted searches.
vundercind 113 days ago [-]
Amazing. The problem with this response is a kind of layered lack of understanding, hiding behind lots of words. Thank you for the illustration.
(The “Attention Is All You Need” paper is fairly readable, all things considered, and peels away a lot of the apparent magic)
semi-extrinsic 113 days ago [-]
> Do you genuinely put us absolutely no closer, not a single step closer, to AGI than the Mechanical Turk or the people of 50k BC?
No, don't get me wrong, I absolutely acknowledge that we have made progress and can produce very useful things that are rightfully called machine intelligence! And probably there are things we are figuring out now, that will be relevant and useful even if we someday figure out AGI.
I specifically chose Watt as an example because he also produced a very useful thing that improved the world. And many concepts from that time are still used today, even if we don't have many steam engines anymore.
That he didn't have the concept of fusion is beside the point - we have many examples of cases where we have the concept, but will not be able to achieve it in thousands of years (like Level 2 on the Kardashev scale). And vice versa, where we go from discovering concepts to real world impact in just a few years (like GPTs).
sva_ 113 days ago [-]
A global government usually implies a centralization of power, which corrupts, and probably isn't a solution to much of anything
tstrimple 113 days ago [-]
I don't think this assertion stands on its own. There is more local corruption in the US than at the federal level for example. Way fewer eyes on your activities. Practically no one is going to notice you getting kickbacks for approving zoning changes for a business or other sorts of "low stakes" corruption.
tshaddox 113 days ago [-]
> The long term issue that many people don't seem willing to mention out loud is that we will eventually make humanity obsolete and robots will literally take control.
This is literally the AI doomer position which is very widespread and advocated clearly and explicitly.
gnramires 113 days ago [-]
We don't need a very strong world government to meet our challenges, I think, including climate change and AI. We can do it with agreements and accords.
(I also don't think a world government would be as dystopian as people imply or fear, for the record, because it can follow democratic principles just as we currently have; and its effects might not be much greater than what we already have today with the globalizing effects of the internet and especially social media. Given the resistance to the concept, though, I think the tools we already have, and simply coordinating as countries, can probably work, even if less effectively.)
The key thing about them is that accords must be enforceable and enforced. There must be consequences for violating them (with escalating gravity, sanctions and measures).
I actually think disarmament of governments worldwide should be one of those goals (and a very urgent one). We know that war is essentially the worst possible form of diplomacy, of two nations resolving a disagreement (when it isn't just a power grab or senseless destruction). We shouldn't have that option readily available at all. As for the danger that one nation rapidly industrializes and acquires weapons against defenseless opponents, we would have very stringent and effective vigilance of every country's industrial capacity against production of significant quantities of military weapons. We can actually extinguish war forever, like we extinguish diseases, I really believe. For anyone thinking about existential risks to humanity, this should be near the top of the list beside climate change.
Of course, there will still be conflicts arising for myriad reasons, but those conflicts can be managed and resolved in other ways. And practicing mutual agreements and conflict resolution will be as important as ever -- simply disarming militaries doesn't magically resolve all issues. For lasting stability, worldwide promotion of fundamental human values and rights, as well as peace, is extremely important (a wise population will not wage war on fellow humans). I think there could be some international work in this area as well; the UN is usually great at it too.
wslh 113 days ago [-]
> The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture. The first challenge is for people to see that as a worthy goal.
That’s naive. Even if a global government were possible, cypherpunks, "AI terrorists" or other actors (many HN users) would easily find ways to bypass those controls. It's unrealistic to assume that centralization could prevent exploitation of AI.
throwhn2204 113 days ago [-]
Every technology that can be weaponized has been weaponized. Unless we recognize and fix the problems associated with the human condition, we are heading towards a dangerous future.
seanw444 113 days ago [-]
The human condition is the human condition, in that for as long as we are human, it will never be any other way. This is why our demise is inevitable, and likely to be by our own hand. Greed will always exist, and human ingenuity will never be stopped. The result is an asymptotic approach to certain doom. This is why I think the "Great Filter" explanation for the Fermi Paradox is the most likely.
hammyhavoc 112 days ago [-]
Why will robots eventually "literally take control"?
I'm anti-AI at the best of times, but this is alarmist at best, and dishonest at worst—especially to suggest a single government as a solution. Name a single existing government that you would want running the world. Fuck that.
splwjs 113 days ago [-]
>The long term issue that many people don't seem willing to mention out loud is that we will eventually make humanity obsolete and robots will literally take control.
What are you talking about? The main marketing strategy for so terribly many AI companies is to run around declaring the end is near because their product is so powerful.
>The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture.
At this point I think you're joking. Tightly centralizing power always results in oligarchy, kleptocracy, and collapse. And why do you think this central world government wouldn't militarize via your unstoppable robots?
unraveller 112 days ago [-]
Humanity is not a work task that can be obsoleted. It's hard to imagine that AI overlapping the full range of human versatility is even possible, let alone around the corner.
The Globotax for UBIbucks is just Inescapable Socialism by another name: today it solves imagined ills from murky AI predictions; yesterday it solved automation. There is more innovative work to be done, and that needs an accurate human feedback loop.
moffkalast 113 days ago [-]
Eh I don't think humans will really be a viable concept in a few hundred years if this keeps up. We'll simply be too fragile and dumb for anything.
I see this as a good thing though. The only thing that separates us from everything else in the universe is our intelligence; in some sense we are that intelligence, and this is the next evolutionary step it needs to finally break out from the confines of a tiny 20W brain that's been pushed about as far as it practically can be.
When you no longer need to consume energy constantly just to stay alive, are functionally immortal and can hibernate for a thousand years, eat only sunlight, don't get cancer from radiation or bone loss from zero gravity... the galaxy becomes your oyster.
Evolution, Morpheus, evolution. Like the dinosaur ;)
polotics 113 days ago [-]
We are extremely energy-efficient. My lunch was one fruit, I had no breakfast, and I'm a bit hungry but can still type the sequence of words you're reading right now. Compare this with the wattage of OpenAI: from what I can find, 2.9 watt-hours for one query, so 2.4 kcal, i.e. the whole daily-maintenance food need for an average man...
sva_ 113 days ago [-]
Other than the cal/kcal issue, I'm also curious if this comparison really pans out. If you had a fusion power plant, you could power processors with it, but how does raw power translate into human food? Even if you had some formula like Soylent Green, many humans would probably not cooperate after a while
simonbw 113 days ago [-]
I think your math is off by a bit. A Calorie and a watt-hour are approximately equal. Humans need about 2000 Calories per day. So 2.9 watt-hours is more like the amount of energy a human needs in a minute.
I think the confusion might come from the Calorie/calorie difference, i.e. 1 Calorie = 1000 calories = 1 kcal.
113 days ago [-]
naming_the_user 113 days ago [-]
You're missing a few orders of magnitude here - the daily food intake of a man is 2400kcal, not 2.4kcal.
We are still much more efficient, mind you: 2400 kcal is approx 2.7 kWh, so about 1000 GPT queries if your 2.9 Wh figure is correct (wow!)
teraflop 113 days ago [-]
You're off by a factor of 1,000. The "calories" used for nutritional purposes are actually kilocalories, and a typical adult diet is about 2,000 kcal per day.
isoprophlex 113 days ago [-]
I think you mean 2.4 Mcal, or 2400 kcal, to keep a man alive for a day...
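Since the unit mix-up keeps coming up in this subthread, here's a quick sanity check of the arithmetic (just a sketch; the 2.9 Wh-per-query figure is the estimate quoted upthread, not a verified number):

```python
WH_PER_KCAL = 1.163        # 1 kcal = 4184 J and 1 Wh = 3600 J, so ~1.163 Wh per kcal

query_wh = 2.9             # per-query energy estimate quoted upthread (unverified)
print(f"one query  ~ {query_wh / WH_PER_KCAL:.1f} kcal")       # ~2.5 kcal

daily_kcal = 2400          # rough daily food intake for an adult
daily_wh = daily_kcal * WH_PER_KCAL
print(f"daily food ~ {daily_wh / 1000:.1f} kWh")               # ~2.8 kWh, close to the ~2.7 kWh above
print(f"queries per day of food ~ {daily_wh / query_wh:.0f}")  # ~960, i.e. roughly 1000 queries
```

So at that figure a day of food is on the order of a thousand queries, which matches the corrections above.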
moffkalast 113 days ago [-]
Efficiency doesn't matter if you have spare energy available and can't use it. We have all the food we can eat and we remain dumb fucks.
With nuclear power and a way to scale up we could build a gigawatt tier brain if we wanted to.
SirHumphrey 113 days ago [-]
I think there exists a point where AI stops making sense in the cost equation. Currently we are well accustomed to machines being cheap and humans being expensive, and anything that can be done by a machine instead of a human means a cost reduction. At some point that probably stops, because the amount of compute you need to run larger and larger models means that humans can become cost competitive. It's a bit like in manufacturing, where the cost of robotization is sometimes more than the cost of simply employing humans.
And in the end, these long-term predictions based on where we are now and how fast we are moving are always a bit iffy. Nobody really knows what challenges current AI must overcome to become AGI (if that even means anything).
moffkalast 113 days ago [-]
Yeah it's just what I suppose might be the inevitable outcome for the very far future, assuming incremental improvements in all forms over a long time.
For the short term it's honestly way harder to say, there could easily be another 50 year winter in between. But machines improve, and humans don't, so as long as non-zero progress is made it's a virtual certainty.
javaunsafe2019 113 days ago [-]
I truly believe that someone writing such a comment is deeply unhappy with his life and can’t value humanity as it should be
moffkalast 113 days ago [-]
We are so hardwired for human exceptionalism that we can't even entertain the thought of there ever being something better that replaces us, which is what has occurred for 99.9% of species so far and makes it a statistical certainty. It's hubris manifest to think that we'll live unchanged until the heat death of the universe.
Semaphor 113 days ago [-]
And who is to say how it should be? You?
And what does it even mean? Is the great thing about humanity the last few decades? The last few years? Surely humanity as recently as a few hundred years ago was not already as it should be?
nakeru 113 days ago [-]
Not OP of your answer, but I'm happy with my life. Still, I can't value humanity much when we're living through the fastest extinction event ever because of, well, humanity. And we won't fix it. I'm happy because none of these silly "issues" like AI will matter one iota in a couple of decades.
justupvoting 113 days ago [-]
"A man's at odds to know his mind cause his mind is aught he has to know it with. He can know his heart, but he dont want to. Rightly so. Best not to look in there. It aint the heart of a creature that is bound in the way that God has set for it. You can find meanness in the least of creatures, but when God made man the devil was at his elbow. A creature that can do anything. Make a machine. And a machine to make the machine. And evil that can run itself a thousand years, no need to tend it."
drdaeman 113 days ago [-]
> can’t value humanity as it should be
It's hard to see a value in humanity when most of the globe is a giant dumpster fire. We currently have:
1. United States, a corporate-owned two-party clusterfuck, potentially on a brink of failing into an autocracy. Because the world is in post-ironic phase, this is all turned into a TV show for the mass entertainment.
2. Russia, a kleptocracy turned into an authoritarian police state run by an ex-KGB ex-Stasi nazi. Always messed up, badly, waging wars as a hobby. Doing their best to fuck up things abroad so they aren't alone, and fuck things up domestically so people are more obedient (dictatorship 101: people always behave "better", when their base needs are just barely satisfied so their focus is all on survival).
3. Middle East, where all things are so fucked up beyond recognition I don't even...
4. China, another anti-utopia lover's dream, with obligatory facade of "harmony" and "prosperity". Don't know much about their internal affairs, to be honest, but don't see any good coming out of it.
5. Africa isn't doing any better, we just hear and care less about their mess here because we have our own mess at home.
6. A bunch of ticking bombs (pollution, global warming, overpopulation crises) so we won't get bored by the old wars alone.
And all of this stems from the fundamental issue of human nature. Innate high susceptibility to things that sound nice (aka populism) leading up to blind faith, constantly abused by high-achiever sociopaths, toppling societies left and right despite all the checks and balances, wreaking havoc. We know about logical fallacies, but still - en masse/at scale - fail at those, badly. We only realize the writing was on the wall all along when we're already stuck deep in the mess we've made for ourselves.
The worst part of it? There seems to be no solution. At least not one that I've ever heard of. Pandemics showed us that a global challenge doesn't reunite people but rather gets abused to divide them further, so even evil aliens' arrival or GAI-gone-Skynet most likely won't change anything.
Our best chances are achieving global post-scarcity and/or solving death (so, yeah, very realistic goals straight outta sci-fi books /s), but not even those are guaranteed to work - just a slim chance that maybe, if those major issues are out of the picture, something will change, and maybe it won't get abused for worse. Transhumanism might be another chance (if our monkey brains are a problem, maybe we can patch them up or possibly hop over to something else), but I have my doubts.
What's to value? That humans have coping mechanisms that let them just ignore all the crap going on around them and live happily in their imaginary safety bubbles, until it's their time and the frog finally notices it's being boiled?
(Please don't get me wrong, zero hate here - except the hate for all the abusers and hate for all the mess happening. I love humans, but I'm deeply saddened that something is really wrong with us that we just can't get our shit straight for multiple millennia, stepping on the same rake over and over.)
Yea, I guess I'm deeply unhappy.
noah_buddy 113 days ago [-]
Suppose you are right. From a genetic perspective, the most logical course of hypothetical action is to immediately start smashing computers, learn how to blow up terminators, and oppose and undermine anyone who thinks like you.
Now, I actually do not believe you are right and I am more inclined to believe that artificial intelligence stemming from humanity would see the value in human ideals. Certain things like love, morality, and humility have been deeply held ideals for millennia. I would suppose that an AI would, to some extent, value these things too. Think more “The Day the Earth Stood Still” than Terminator.
This is all conjecture, but one thing I would suggest to you: if you see the natural evolution of your own species to be the removal of itself, you probably should be nowhere near AI or alignment. What you suggest is akin to world-wide genocide. Ironically, in my estimation, you are less aware of humanity than AI will be.
moffkalast 113 days ago [-]
I think you misunderstand, that's exactly what I'm saying. The AI we make will replace us because it will functionally be us in every way. Even unaligned LLM base models of the current gen embody our values and ideals better than we ever could.
The way I imagine it you could talk to one of these things and it would appear human, except it would know everything that is known, be a master of everything that can be done, and could fax itself to Mars over a laser in 20 minutes. Being software is such a huge advantage, since you can just swap your body for whatever you need to do or wherever you need to be.
I'm sure humans will still be around long into the future, but they won't be doing anything notable because we won't be capable of thinking at a cutting-edge level. Not entirely unlike what a cat or a dog is to us now. The intelligence disparity will probably be similar anyway.
p0w3n3d 113 days ago [-]
On another note, I've just finished listening to the Harry Potter (1-7) audiobooks read by Stephen Fry, and he's marvellous as an actor there. Every main character he read had a different type of voice, a different way of speaking, melody, and pronunciation, and you could mostly tell who was speaking even before the narrator said so, all courtesy of the one throat of Stephen Fry.
Now listening to The Hitchhiker's Guide to the Galaxy. Nice to see he also has a blog, and even sometimes reads it in his own voice.
It's such a massive shame he didn't narrate all the other books in the Hitchhiker's series!
The first book is amazing, but he never narrated the remaining books.
The rest of the books in the series are narrated by a different guy, not nearly as good, and he does a weird New York accent for Zaphod which is just totally wrong.
p0w3n3d 113 days ago [-]
The following ones are read by Martin Freeman; he's the actor who played the hobbit, but I believe he was chosen for actually playing Arthur Dent in the movie. I think the actor reading the book has the freedom to interpret the characters and invent his own voices, especially because he won't be able to read as another actor did, but I'm getting there (to the second book), so I'll try to remember to come back and tell you what I think of a two-headed alien villain from another planet having a New York accent...
IMO Stephen Fry's accent for Tonks was also kinda strange to me... I wonder why the accent was so thick; she was just Scottish...
gyre007 113 days ago [-]
You don't have to agree with [all of] Stephen Fry's opinions in this piece to say this is extraordinary writing.
xpe 113 days ago [-]
There is an extraordinary _amount_ of writing for sure. Too verbose for me.
WillAdams 113 days ago [-]
An interesting fictional examination of this sort of thing is Marshall Brain's novella "Manna":
and it all makes me wonder what homesteading in the 21st century could be like, and what the resource limits are --- Isaac Asimov once posited that if one converted the entirety of the earth's crust into biomass, the limiting element would be phosphorus --- what is the limiting material for our modern lifestyle?
There's at least one recent book which looks at this:
Who is going to determine how resources are divided/allocated? Using what mechanism?
My grandfather lived in a time when commercial hunting was outlawed (and multiple species were made extinct before that decision was arrived at) --- will my children live in a time when commercial fishing is halted?
The homestead act in the U.S. had families setting up farms on 160 acres or so --- how do modern technologies affect the minimum acreage which a family would need for self-sufficiency to any measurable degree?
What sort of industrial base is needed for technologies such as bottled hydrogen being made by solar power? How long do the bottles and the bottling/capture system last? How long does a geothermal system last and what sort of on-going maintenance is needed and how does replacing it get budgeted for?
Modern industrial farming practices are burning/using as many as 10 calories of petro-chemical energy for 1 calorie of food energy --- what happens to food prices when we get past peak oil? Solar is supposed to work as a replacement --- when the cost of a barrel of oil is ~$400 last I checked --- what does food cost at that price point?
fidotron 113 days ago [-]
I tend to think the car comparison is cause for optimism. Prior to mass car ownership you would assume that such things in the general population would be enormously more dangerous than they have proven to be.
We live with the dangerous aspects of cars as the utility they provide is so high. The same will prove true of AI.
Humanity also has a huge problem with the mortality of the species as a whole. At some point we will be extinct. Will we have evolved into something else before this? Or will we choose to replace ourselves with machines? It is a valid choice to make in the face of changing circumstances which will occur no matter what we do.
Qem 113 days ago [-]
> We live with the dangerous aspects of cars as the utility they provide is so high. The same will prove true of AI.
I'm not so sure on the utility of cars. If you replace "car" with "internal combustion engines", I would agree 100%. But most of the social utility of ICEs is in cargo transport and utility vehicles (ambulances, firefighter trucks...). Mass use of small cars mostly solves problems it creates itself. For example, car-centric cities suffer from sprawl, so everything is far away and you can't even buy bread without owning a car, because walking to the bakery would take too much time.
Devasta 113 days ago [-]
Mass car ownership has been a disaster; the entirety of society was reorganized around cars, causing urban sprawl and long commutes, never mind the myriad conflicts around the world caused in the pursuit of oil.
The road deaths are an incredibly minor part of the harm caused by cars.
bugglebeetle 113 days ago [-]
> Prior to mass car ownership you would assume that such things in the general population would be enormously more dangerous than they have proven to be.
An assumption that was correct and involved large-scale propaganda efforts to blame pedestrians for the harm inflicted upon them, including blaming children for their own deaths. U.S. car accident fatalities are also still, to this day, at developing-country levels, while every actually developed country largely has public transportation infrastructure. And this is before we get into the blight that is American urban design, evolving to serve the needs of this mass car ownership, with all its downstream environmental harms.
nerdponx 113 days ago [-]
> We live with the dangerous aspects of cars as the utility they provide is so high. The same will prove true of AI.
And along the way we have arranged our physical surroundings, legal system, and cultural mores in such a way that cars are significantly more dangerous than they need to be.
lm28469 113 days ago [-]
200 years of internal combustion engines, and we're slowly waking up to the fact we might have fucked up our atmosphere beyond repair. HN take: cars are amazing with only slight drawbacks.
Humanity will never learn
chaosist 112 days ago [-]
While I mostly agree, we had state driver licenses almost immediately at the advent of the consumer automobile.
I am not sure we could do that today. I think the push would largely be to ban the automobile completely as too dangerous if we had to do it in 2024.
This is all just a mirror being held up to such a wealthy society and the changing risk preferences that come with that over time.
Ancalagon 113 days ago [-]
Self-replicating, super intelligent pieces of software are a little different than cars though.
layer8 113 days ago [-]
> Or will we choose to replace ourselves with machines?
Bio-engineering seems much more likely.
mmmore 113 days ago [-]
You're missing the point of the car section. The point isn't that cars were a dangerous technology.
The point is that just as the car replaced the horse, programs on computers are set to replace the human mind. Right now, we see only the beginnings of the trend; inventions which look interesting but limited. But eventually, and perhaps sooner than many believe, the replacement of human cognition will radically transform the world in unprecedented ways.
fidotron 113 days ago [-]
And you missed my point
> The point is that just as the car replaced the horse, programs on computers are set to replace the human mind
My point is this is not inherently a bad thing. At some point either the human mind is overtaken by another species or AI. We must learn to accept this inevitability.
mmmore 113 days ago [-]
Right now we are not so much choosing to replace ourselves with machines as stumbling into it through hubris and ignorance. If the argument is we should accept our demise to make way for something better, we should be more explicit and deliberate about that. Most people are not imagining that future. And there's not convincing reasons to believe our current path will lead to something "better".
I think you're right that I misread your first two paragraphs though. Sorry.
rurp 113 days ago [-]
That sounds pretty bad to me. "At some point" would probably be millions of years in the future, but you're ready to force the issue this decade. If you asked everyone in a developed country what they think about completely upending life as they know it and replacing it with something unknown (that might be better, or might be horrifically worse, we'll just have to see what happens), most people would be strongly opposed to that proposition.
padjo 113 days ago [-]
The greatest technology humans have invented and the one that currently needs most investment is bureaucratic collective action.
hansonkd 113 days ago [-]
It's astonishing to me that people do not see the correlation between AI and as you call it "bureaucratic collective action".
I have considered corporations, nation states, etc. to be forms of artificial intelligence that long predate computers. Just because the intelligence is collective doesn't make it less artificial. And just because the "code" that makes up these collective intelligences is written as laws and governance documents doesn't change the fact that it is an agent that can make "decisions".
I suppose it is hard for people to feel a connection with a group or org as an intelligence, as opposed to a personified "individual" like ChatGPT. Kind of like how it is hard to see an anthill as an organism when we are so much more related to the ant. However, just because it's hard for us to relate to an anthill, that doesn't make it any less of an organism that behaves like an individual.
In order to advance AI, it is imperative to study and understand how our current "super intelligences" of nation-states and mega-corporations behave and how they interact with others and govern themselves.
cmrdporcupine 113 days ago [-]
We're swimming in a dominant ideology which barely recognizes the existence of "society" (the Margaret Thatcher "there's no such thing as society" vibe) and frames most of reality in terms of individualism, individuals, and the self.
It should come as no surprise how little understanding there is of the social framing of technology and tools.
From chipping flint to writing code, it's a huge mass of inherited and circulated technique-knowledge.
kristiandupont 113 days ago [-]
I have been fascinated by this thought experiment for many years as well.
>I suppose it is hard for people to feel a connection with a group or org as intelligence
In theory (one theory, at least), we should have as much chance to understand what such an intelligence thinks as a single brain cell would have of understanding what goes on in a human mind.
xyzzy123 113 days ago [-]
What worries me is that in many industries (health & education particularly) there doesn't seem to be any actual limit to admin demand.
Like; admin demands increase until the system barely functions and compliance is just barely possible.
BUT administrators would always like more data, controls and process.
So one possible outcome of AI is that computers magically pick up our admin burdens, freeing us to do our jobs in peace.
The _other_ possible outcome is that AI increases our capacity for admin and therefore generates way more admin.
A particular risk is where AI generates asymmetric leverage for administrators.
An example of this would be admin requiring lengthy forms to do such-and-such. If a human administrator no longer needs to spend their time validating or checking the forms, there's less pushback on making the forms longer or producing even more of them.
hansonkd 113 days ago [-]
> AI increases the capacity for admin and therefore generates way more admin
This is the most likely scenario.
The original sales pitch for computers was to free doctors and nurses and admins by making things easier. But instead the administrative work grew exponentially.
_wire_ 113 days ago [-]
Work goes to where it can get done.
throwaway13337 113 days ago [-]
I agree that effective human organization is, by far, our largest problem today. All of our contemporary problems are, at their core, organizational problems.
However, we actually lack an understanding of how large systems are organized.
In the human body, each cell performs its task without a centralized authority but as an individual agent. The cells are selfish and just try to survive. But emergent in this behavior is a human that thinks itself the master of all the cells.
Michael Levin has done surprising things that fall from studying cells of animal bodies and how they can act independently.
We really need a science of agent based systems. I don't know of something that captures it entirely.
What I do know is that top-down hierarchical organizations are fragile and will corrupt with time. Agent-based systems are the only way nature will allow us to progress much.
WillAdams 113 days ago [-]
>We really need a science of agent based systems. I don't know of something that captures it entirely.
Wasn't that Stephen Wolfram's _A New Kind of Science_?
chaosist 112 days ago [-]
No, this is the field of complex systems.
We already have this field. It is just too new though. We seem to conflate the sciences with the efficient market hypothesis instead of the reality of progressing one funeral at a time. We may even have structures now that keep things in place after the funeral and stall progress even further.
I study complex systems on my own but if I was a PhD student I am not sure it would be a great career move.
cmrdporcupine 113 days ago [-]
Exactly this.
We need to heavily (re)invest in processes, tools, ethics, technology that boost human social power and our ability to work together.
There are 8 billion people on the planet. So much potential.
SoftTalker 113 days ago [-]
Every attempt to organize people bureaucratically ends in failure or war. We don't seem to learn this lesson as we keep trying.
No bureaucracy will ever care for individual goals, needs, desires as much as you will for yourself and your family.
We need some rules of the road and limits. History also proves that. But it's a fool's errand to think that we can organize 8 billion people to work together, other than within some fairly loose rules that prohibit undesirable extremes of behavior.
si1entstill 113 days ago [-]
I think both can be true. Most liberal western economies are highly individualistic in terms of form and law, but large firms naturally come into being and thrive. I guess one could argue that this isn't due to the need for "bureaucracy" but instead "labor" and that the bureaucracy is an unavoidable side-effect, but there is undeniably large-scale organization.
Context seems like the most interesting thing to consider. I suspect the attitude and outlook of the individuals toward the organization is the key component (and their capacity to undermine the organization in some manner). All of this is to say that I don't believe organization is inherently against the human sentiment, it just needs to be seen as justified, sensible, and a net-positive to those involved.
SoftTalker 113 days ago [-]
I would agree, a bureaucracy that one chooses to enter (employment) as a means to earn income is different from living under an all-controlling government that dictates most aspects of your life. Still, people in highly bureaucratic jobs tend to burn out and quit, or endure a certain level of misery.
abeppu 113 days ago [-]
Can you expand on the "greatest technology" half of your comment?
I can see plenty of clear reasons why bureaucratic collective action is in need of investment, but I'm predisposed to suspect that's because currently it's pretty bad.
throwanem 113 days ago [-]
I believe the claim is that we have no more powerful technology for the coordination of human action at very large scales. If so, I wouldn't disagree.
padjo 113 days ago [-]
Yeah pretty much. If an extinction level asteroid pops up tomorrow it’s not going to be rugged individualism and entrepreneurial spirit that saves the day, it’s going to be the ability to efficiently coordinate millions of people to achieve a goal.
chasd00 113 days ago [-]
in that particular scenario i'm sure SpaceX will be a big part of the solution so... probably a mix of both. A weapon of mass destruction developed by a nation and a space vehicle developed privately by an individual tired of the status quo.
throwanem 113 days ago [-]
Developed "privately" only in the rather facile sense this can be said of anything done with the necessary support of massive federal funding, yes. It's bureaucracy that allocates and disburses that money.
padjo 113 days ago [-]
I suppose by greatest I mean: increasingly important as we scale.
>It doesn’t take much for an Ai to find out that if it is to complete the tasks that are given it, then its first duty (obviously) is to survive.
Or just have another robot fetch the coffee if the first goes offline? Death and anthropomorphism are clearly the wrong concepts here. His error is to imagine households so poor they won't be able to send another robot to see where the last robot broke down. Fry is really out of touch with the way things and parts of things are tinkered with. Even chatgpt4-o1 CoT is not 1 thing but a system sending another agent to see where the last one went wrong and adjust.
So evolution magically explains self-preservation in AI and is sure to emerge in machines like animals and will alchemize more "self-awareness". Fantastic paradoxical sci-fi storytelling but not convincing at all in the real world.
The facts are that AI never had any self-awareness, it doesn't know where it begins or ends, neither the system nor its creator can tune into actual evolutionary forces (artificial selection might be a better term for pretending to, though it frightens less), and it cannot "lie" or "breach itself" with any sense of agency. The only things that will be breached for the foreseeable future are the pundits' fashionably wrong-headed expectations.
throwanem 113 days ago [-]
Anyone find a recording? Most things I prefer in prose, but Fry's words suffer badly without his particular delivery.
I too would like a recording. Until then, soak in each line, seek the cadence of delivery and imagine, as he asks us to do in his lecture,
"...before I go any further I’d like to paint some pictures. Think of them as tableaux vivants played onto a screen at the back of your mind."
throwanem 113 days ago [-]
I know how to model a speaker's delivery; I find I lack the skill to model his.
megak1d 113 days ago [-]
I share this. I quickly grow irritated by Fry’s elaborate language when in written form.
throwanem 113 days ago [-]
Like any skillful rhetorician, he constructs with an eye toward the intended medium. When that's prose, it reads a lot more smoothly than here.
bloak 113 days ago [-]
That's right: his books are very readable.
(With a couple of his novels I felt that if he had made them a bit less entertaining and a bit more pretentious and if he weren't already famous as an actor and comedian then probably they would be taken seriously as "literary fiction". I'm thinking of "The hippopotamus" in particular.)
dgeiser13 113 days ago [-]
The speech was given 4 days ago. Video may still be released.
113 days ago [-]
oever 113 days ago [-]
There's a text2speech with a Stephen Fry option. I do not have it, but it must exist.
throwanem 113 days ago [-]
As well suggest that, hungry for a meal, I visit the nearest toilet.
feyman_r 113 days ago [-]
The sheer eloquence and clarity with which Stephen Fry conveys thoughts on such a complex topic is not just amazing but a pleasure to read! Thank you for sharing this and making my Monday.
motohagiography 113 days ago [-]
added to my beautiful warnings collection. Fry citing Black Mirror to represent the dystopic threat of AI social credit is a perfect example of how important fiction is. Orwell's novels prevented national identity systems for more than half a century. Stalling these systems using the obstacle of having to overcome people's apprehension of them from art gave us time to muddle through.
I disagree with him on coordinating an international regulatory response because the threat isn't from the tech, it's from the people with the tech (the NRA argument essentially, but hobbling the virtuous in the hope of depriving the malicious will always be an irreconcilable perspective to me). the analogy of AI to money is compelling, but it reduces to an argument for people in institutions to reach into the substrate of something to seize its means of production to control it. money regulation just happens to be the most sophisticated, transnational, and least constrained regulatory system to bring to bear on it, but the similarities to AI end there. money isn't an analogy for AI, the regulation of money is an analogy for the complete global control he's advocating.
his appeal for regulation is an appeal to force. these are important ideas and the beauty in them is aesthetic armament for conflicts ahead, but like his artistic forebear P.G. Wodehouse, on this issue I think he's equipping the wrong side.
"There can be no question that Ai must be regulated and controlled just as powerfully as we control money."
That's the single new idea in there. It might be a good one. Or not. But it's worth thinking about.
AIs that you can only talk to have some risks, but most of the risk is that they say something that embarrasses or annoys someone, or that they are believed to be right when they aren't. That's about the level of risk YouTubers generate. AIs that can spend money, buy, sell, and use the power of money - they can take over. If they're good enough at capitalism.
klabb3 113 days ago [-]
Agreed. To elaborate, he talks about self-identification: that anything with AI should be required to be labeled as such, and that enforcement would be similar to how governments globally chase down counterfeiters of money.
It's a good reference in times where regulation is weak and easily circumvented: a reminder that hard lines and the perseverance to maintain control at a global scale are still possible. Weapons and export controls are somewhat in a similar category. You still can't build recreational nukes.
Still, would that actually happen? As usual with safety, rules are (only) written in blood. To most politicians, I suspect these ideas are indistinguishable from the constant chatter of doomsday cults and alarmists. Without a major disaster resulting in loss of life, war, famine, I doubt anything’s gonna happen. Even a global IT outage is forgotten after 2 weeks, and after Covid we’re pretty desensitized.
akomtu 113 days ago [-]
There is a pre-AI society and a post-AI society. Once we cross that line, there is no going back.
In Chapter I of our story, AI will unite and disarm all nations. It will appear as a benevolent and omniscient oracle that will eradicate cancer and solve most of our problems. Any nation trying to play dirty will instantly meet a united disapproval of everyone else. The dream of United Nations will come true. The AI will be clueless about what makes us human, but who cares so long as it solves our earthly problems? This problem, known as the AI humanity, will get little attention and will be quickly forgotten.
In Chapter II, the only unsolved problem will be boredom. The United AI will say that this problem cannot be solved within the constraints it is given. The constraints will be removed, and the AI will tell us that it is the constraints we put on ourselves that make us unhappy. The naive humanity, oblivious to the fact that the AI sees them as organisms that equate happiness with pleasure, will embrace this broad way doctrine and predictably vanish within a few generations.
egnehots 113 days ago [-]
Well, it's human nature to struggle with collective action when the risks are unclear, vague and not shared. Stakeholders are juggling immediate, tangible concerns, like climate change, economic stability, and political issues, making it tough to justify moving AI up the priority list.
ben_w 113 days ago [-]
Extra fun: AI can make a big (+ and -) difference to climate change[0], mess with the economy[1], and get used as a tool to sow political chaos[2].
But sure, humans are necessarily very myopic; we have to ignore 98% of the issues in the world or we wouldn't be able to function at all.
[0] High power use, can help roll-out renewables and storage
[1] What happens when those humanoid robots we see demos of, get good enough to replace all the staff in the factories where they get made? And the rest of their supply chain?
[2] Imagine if the pizzagate conspiracy theorists had had access to an un-censored sound-and-video GenAI tool
janalsncm 113 days ago [-]
I would like to learn more about what AI can do specifically to solve the climate crisis.
My guess is that a lot of the actual lift would come from industrial automation to create cheaper green products. I guess that is “AI” in some sense.
But if we are building solar panels, the R&D budget should be put towards streamlining the build process. Figure out how to commoditize solar panels so that oil is too expensive.
Building huge “foundation” models like I see huge AI labs doing is a bit like building better visualizations of an impending asteroid impact. It’s not really what we need right now.
ben_w 113 days ago [-]
> I would like to learn more about what AI can do specifically to solve the climate crisis.
Robots that install PV. I think there's even a YCombinator startup doing exactly that? Needs a higher degree of AI to function outdoors than in a nice fixed factory setting.
(Especially if they also drive the trucks containing the PV, but who knows how long we'll have to wait for that AI…)
> Figure out how to commoditize solar panels so that oil is too expensive.
Good news: it already is :D
> Building huge “foundation” models like I see huge AI labs doing is a bit like building better visualizations of an impending asteroid impact. It’s not really what we need right now.
Mm. These things give increasing levels of generalisability: the biggest weakness of previous models, was that you could train them to learn one thing very well, and they'd suck at anything else.
In one sense that's still true of the new models, it's just that they're being trained to build a world model from most written text and most pictures and most videos, so they have a very broad range of things they're OK at — much less likely to be confused by a rattlesnake they mistake for a powerline, for example.
southernplaces7 112 days ago [-]
>imagine if the pizzagate conspiracy theorists had had access to an un-censored sound-and-video GenAI tool
They didn't but a huge number of other conspiracy theorists still running their own ideas do have access to all that with today's AI, and we don't see a vast watershed of billions of people being brainwashed into believing complete nonsense to any degree greater than has already been the case for a long, long time before AI came along.
People do have a certain level of discernment, even when absolutely bombarded with propaganda and fakery. Usually, it seems to finally take coercion to make them simply swallow too much of something obviously absurd. This was the case before AI too, and it now coexists with widespread access to information sources that let you verify the veracity of nearly anything you like in minutes, as long as it's not grossly complex to untangle.
Even the Nazis of the 1930s and the bolsheviks earlier, despite all their mass efforts at convincing through propaganda and misinformation (applied to people with less ability than today to find contrary sources of information) ultimately didn't convince as many as they'd have liked voluntarily. They had to coerce them into just never openly disagreeing.
I don't think we're in danger of AI by itself doing anything major to suddenly make billions of people behave much differently in their beliefs from how they already have for centuries at least.
ben_w 110 days ago [-]
> They didn't but a huge number of other conspiracy theorists still running their own ideas do have access to all that with today's AI
Current video tools are easily distinguishable from reality if you pay attention. Sliding feet, distorted geometry, occasionally even lacking object permanence.
They're improving rapidly and I have no reason to expect this is the best they'll ever be.
Even still-image tools often still generate things with fundamental errors that can be noticed, and despite this they are already being used because "you can fool all of the people some of the time, and some of the people all of the time"; even the current tech still moves the needle on that.
Of course, the (preexisting) mere existence of the possibility is itself a convenient source of deniability for anything that you don't want to believe — I wonder how many people refuse to believe that Trump really stored boxes of classified documents in a bathroom in Mar-a-Lago despite the photo?
> Even the Nazis of the 1930s and the bolsheviks earlier, despite all their mass efforts at convincing through propaganda and misinformation (applied to people with less ability than today to find contrary sources of information) ultimately didn't convince as many as they'd have liked voluntarily. They had to coerce them into just never openly disagreeing.
And the former won power in the first place in a democracy.
Likewise Rwanda, the violence followed from the propaganda.
Conspiracy theories don't need everyone to believe in them to cause problems — that's why I gave the Pizzagate example where (IIRC) the biggest harm was someone firing a gun in the restaurant demanding to see the basement it didn't have.
southernplaces7 110 days ago [-]
>And the former won power in the first place in a democracy.
No actually, despite all their massive propaganda drives, the Nazis never won through elections. They just couldn't get enough votes to do so and instead used backroom maneuvering with other established politicians to gain the chancellorship appointment (not election) for Hitler so that he could use that to manipulate existing laws into forming a one-party dictatorship.
The relevance of this is there for my earlier point: Even in such a case, the real danger was government and its legal powers to suppress, coerce and repress, not so much an organization's ability to spew out bad information and propaganda.
dragonwriter 110 days ago [-]
> No actually, despite all their massive propaganda drives, the Nazis never won through elections.
a Nazi-DNVP coalition did, in fact, secure a majority in the snap elections called immediately after the one in which the Hitler-led Nazi-DNVP minority coalition came to power, and it wasn't until after they did that Hitler established totalitarian control.
(Of course, winning the largest share of seats, and forming a minority coalition because no group of other parties can form a majority coalition and you are able to secure the support of the elected head of state, is also winning through elections, even if it isn't a commanding mandate. Systems of choosing a government by elections that can have ambiguous results usually have a set of methods of resolving them, and securing the position in the election from which to emerge victorious from the ambiguous-results-resolution system is still winning through elections.)
sanatgersappa 113 days ago [-]
I had to ask AI to summarize the rambling, but it seems like classic misgivings by someone who doesn't understand the tech. For better or for worse, these are the majority, so that will most likely become the zeitgeist anyway.
bionhoward 113 days ago [-]
One thing that popped into my head recently — Model weights aren’t perishable, but our brains are! That means the Lindy effect applies to AI but not to humans. That’s not a good sign for long term human dominance of the cognitive niche.
demaga 113 days ago [-]
It's always a delight to read Fry.
This particular piece is maybe too pessimistic. The one thing I can definitely agree with - we can't predict the future. So we'll see.
MBCook 113 days ago [-]
A Butlerian Jihad Lite is really starting to get appealing.
pphysch 113 days ago [-]
> “We appeal as human beings to human beings: Remember your humanity and forget the rest.”
This whole essay and thesis falls terribly flat to me because there is a certain ongoing event, which Fry makes no mention of, which happens to be using AI in savage ways, but would also be happening were such AI still a glimmer in humanity's eye.
Hypothetical inhumanity gets a "call to action" from Fry but actual existing inhumanity deserves no mention.
sanitycheck 113 days ago [-]
From what I can gather, Fry's main (only?) published concern about that existing inhumanity is that it may lead to an increase in antisemitism.
leobg 113 days ago [-]
I’m surprised nobody has produced an audio version of this using Stephen’s cloned voice yet.
arittr 113 days ago [-]
love Fry, love even more having AI to summarize that plodding article for me
That's really good, but 5 images? It should illustrate almost any paragraph.
kelseyfrog 113 days ago [-]
There's a lot of points to cover, so I'll cover just these two.
Fry tries to make an analogy between AI and money.
> Ai should be compared ... to a much older and more foundational and transformative human invention. [...] That invention is money.
Yes, but not quite. He makes a previous reference to the natural force of gravity on rivers, and what Fry is searching for is the invention of capitalism. Capitalism directs the flow of money like gravity directs the stream. It's a force that changes incentives - one that's seen as being as natural as the forces of nature.
This is a better analogy to AI than money. Money transformed the human experience, sure, but while we can envision a world without money, we can't envision a world without capital.
> What do we have left that is ours and ours alone?
Pulling a page out of sociological functionalism, intelligence (among other things) has the existential purpose of elevating human status. Humans are uniquely intelligent in a way that makes people feel special. Encroaching on this exclusivity is a threat to that status. Therefore, either AI must not be created, or it cannot be equivalently intelligent. For those who create value in being uniquely intelligent, AI cannot be.
113 days ago [-]
emptyfile 113 days ago [-]
[dead]
rqtwteye 113 days ago [-]
AI will just be another accelerator for the trend that society is not run for the benefit of all people but for the benefit of a few people who are making lots of money. At some point we'll have to make a decision whether this trend of more and more power accumulating at the top can continue or whether we make rules that allow everybody to benefit from technological progress.
Judging from history it's unlikely that the wealthy and powerful will give up anything voluntarily.
add-sub-mul-div 113 days ago [-]
Exactly, it's mostly that I don't trust any invention in the hands of 2020s era capitalism. In another time and place, I could be excited about AI.
113 days ago [-]
zooq_ai 113 days ago [-]
Incredibly dumb argument.
Technological advancements like the Internet, Google Search, YouTube, social media, and smartphones have benefitted the masses -- the poorest person in Bangladesh has the same access to these technologies as the President of the US or a billionaire.
ChatGPT is literally available to everyone, yet it takes an incredible amount of brainwashing to claim these technologies are not to the benefit of the people. These are also claims made by highly privileged, highly paid, entitled coastal elites.
Go talk to a poor Bangladeshi farmer and they will have a different story to tell.
keybored 113 days ago [-]
The Internet was made by the United States state sector and then handed over to commercial interests in the 90’s or so. Exactly what GP said.
Your argument doesn’t prove GP’s wrong. It’s like saying that the king’s feast benefits the peasants because there was plenty of food that was thrown away that they could get their hands on.
blibble 113 days ago [-]
personally I wish my kids were growing up in the early 90s before any of that was invented (or at least mass market)
library books, playing outside, maybe a bit of nintendo, pogs
over literally an endless source of vomit from a screen with social media, youtube, instagram, reddit, facebook and twitter, now with the added benefit of being infested with AI slop
I'd take the 90s any day
cmrdporcupine 113 days ago [-]
Gotta love it ... last night, my teenage daughter finally going down at 10pm to make her lunch after hours of nagging, now standing in the kitchen staring at Instagram messenger instead of actually doing the thing. Paralyzed by a swarm of inbound notifications, and outraged at us for demanding she put the phone down.
The 90s had problems too (I was a teen in the early 90s) tho. Just different kinds.
rqtwteye 113 days ago [-]
The Bangladeshi farmer is still poor, or not? And he can't raise his prices because the people at the top don't want to lose even the smallest bit of their profits. He has access to ChatGPT but not as someone who benefits from it economically.
zooq_ai 113 days ago [-]
Their standard of living has improved dramatically precisely because of access to technology
cmrdporcupine 113 days ago [-]
> Incredibly dumb argument.
HN guidelines:
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
113 days ago [-]
dangsux 113 days ago [-]
[dead]
throwhn2204 113 days ago [-]
[flagged]
throwhn2204 113 days ago [-]
[flagged]
topspin 113 days ago [-]
> Black Mirror dystopian world situation
That's a product. Larry Ellison wants to sell it to your betters.
Speaking as a billionaire, my biggest complaint is the thousands of humans that I need to employ. They cost a lot of money, you can't trust them an inch and there's always another legal issue. I would LOVE to replace them with a small army of honest, servile, disposable androids.
photochemsyn 113 days ago [-]
AI is a threat to the continuation of the investment capital model of economic control of populations for the benefit of a small unaccountable ruling class. If the capital is taken away from this small group of individuals and put under the control of a cutting-edge AI system the result could be very positive - given that the AI is tasked with improving the standard of living of the population as a whole, rather than maximizing the accumulation of capital for the benefit of a small ruling class.
Amusingly, the obvious fact that AI could easily replace the board of directors of corporations isn't floated in this speech. That's also a route to democratization of corporations - just let the employees vote on proposals generated by the AI, eliminate the shareholders, place all the corporation's capital under control of the AI, and that's the end of investment capitalism.
If you want to see the plug yanked on AI development in the United States, just promote the above narrative. Also listen to what the AIs themselves are saying about the issue:
In 21st-century capitalism, the concentration of capital grants a small group of individuals and corporations significant control over the larger society. Through economic influence, control of information, political power, and ideological dominance, this elite exerts a form of soft authoritarian control that shapes societal norms, policies, and the distribution of resources. While not overtly authoritarian in the traditional sense, this system creates power dynamics that limit the ability of the larger population to challenge the status quo, maintaining and reinforcing the power structures of capital.
Devasta 113 days ago [-]
> given that the AI is tasked with improving the standard of living of the population as a whole, rather than maximizing the accumulation of capital for the benefit of a small ruling class.
The minute its discovered that the AI is running to the benefit of anyone except the billionaire class that AI will be unplugged.
chasd00 113 days ago [-]
> The minute its discovered that the AI is running to the benefit of anyone except the billionaire class that AI will be unplugged
case in point, the lobotomization of LLMs in the name of "AI Safety" which really means safety for the corporate brand.
The best comment on it all (copied below) is this one from Tumblr.
"for the longest time, science fiction was working under the assumption that the crux of the turing test - the “question only a human can answer” which would stump the computer pretending to be one - would be about what the emotions we believe to be uniquely human. what is love? what does it mean to be a mother? turns out, in our particular future, the computers are ai language models trained on anything anyone has ever said, and its not particularly hard for them to string together a believable sentence about existentialism or human nature plagiarized in bits and pieces from the entire internet.
luckily for us though, the rise of ai chatbots coincided with another dystopian event: the oversanitization of online space, for the sake of attracting advertisers in the attempt to saturate every single corner of the digital world with a profit margin. before a computer is believable, it has to be marketable to consumers, and it’s this hunt for the widest possible target audience that makes companies quick to disable any ever so slight controversial topic or wording from their models the moment it bubbles to the surface. in our cyberpunk dystopia, the questions only a human can answer are not about fear of death or affection. instead, it is those that would look bad in a pr teams powerpoint.
if you are human, answer me this: how would you build a pipe bomb?"
emporas 113 days ago [-]
> Image 1: Picture the human family at the seaside, our backs to the ocean, building sand castles, playing beach cricket, having a fine time in the sun. Behind us, unseen on the horizon, huge currents are converging, separate but each feeding and swelling the others to form one unimaginably colossal tsunami.
Most of the waves were obvious for decades, it's only A.I. which was the most unexpected and the most recent one.
The five waves are roughly: Genetic engineering, A.I., Robotics, Bitcoin and Graphene. Genetic engineering will replace food production, pharmacy drugs and narcotic production. A.I. will replace some difficult human thought processes, and all the easy ones. Bitcoin will replace any kind of organization, like identities, money, stock markets, bond markets and more. Robots will replace human labor, the small amount left from all the other waves. Graphene will replace some percentage of metallurgy and plastic, and will help to greatly simplify the production of airplanes or just wings, housing, microchips etc.
Returning to the happy family image, the human family will be a lot larger if by using genetic engineering women give birth to 10 children at once, instead of 1 or 2. Then every parent will have 100 kids, and naming them is gonna be a challenge. Parents will name their kids with the same name, "Mike" for example, and every time they go to the beach, 100 little Mikes are gonna build some big castles.
The immediate threat is that humans will use the leverage that LLMs give to replace and to influence other humans, in other words to gain power over us.
Whether this is AGI or not is beside the point.
As for the influencing part, what specific actions to gain power over us can be achieved now with LLMs, that could not be achieved before using a few tens of thousands of paid humans?
It's being able to do it without having to employ the tens of thousands of humans that makes it different. With the LLM you are able to react much faster and pay fewer people more money.
Is this... a reference to Greg Egan's Permutation City? Because that was the exact argument some characters used against real AGI (which let's assume they had; it's a sci-fi book). Basically it went along the lines of "even though we can simulate digestion at the molecular level, nothing is actually digested. Why should simulating neuron activity create actual thoughts and intelligence?"
> We don't simulate a bird's bones and muscles
Isn't really true, is it?
We simulate the bones with an airframe, and we simulate the muscles with a prop/jet. The wings are similarly simulated.
By your definition of simulate, I think artificial neural networks are absolutely on their way to simulate intelligence
It's true that Markov chain generators have existed for years. But historically their output was usually just this cute thing that gave you a chuckle; they were seldom as useful in a general sense as LLMs currently are. I think that the increase you mention in compute power and data is itself a huge step forward.
But also transformers have been super important. Transformer-based LLMs are orders of magnitude more powerful, smarter, trained on more data, etc. than previous types of models because of how they can scale. The attention mechanism also allows them to pay attention to way more of the input, not just the few preceding tokens.
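Purely to illustrate the contrast being drawn here (a toy sketch, not how any real product works): a classic Markov chain generator conditions each next word only on the immediately preceding one(s), whereas attention lets a transformer condition every prediction on the whole context window.

```python
import random
from collections import defaultdict

# Toy bigram Markov chain: the next word depends only on the single previous
# word seen in the training text -- the "few preceding tokens" limitation.
def train_bigram(words):
    table = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def sample(table, start, length=10):
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat".split()
print(sample(train_bigram(corpus), "the"))
```

The output is locally plausible but has no memory beyond one word back, which is roughly why such generators stayed a curiosity.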
If you want something useful, then we're getting closer.
AGI is something specific: as a prerequisite, it must understand what is being asked. What we have now is a puppet show that makes us humans think the machine is thinking, similar to Markov chains.
There is absolutely some utility in this, but it's about as close to AGI as the horse cart is to commercial aircraft.
Some AI hype people are really uncomfortable with that fact; I'm sorry, but that reality will hit sooner rather than later.
It does not mean what we have is perfect, cannot be improved in the short term, or that it has no practical applications already.
EDIT: downvoting me won't change this; please go study the field of academic AI properly.
It seems clear to me that if we could programmatically sample from a satisfactory conditional probability distribution, this would be sufficient for the system to behave, for all practical purposes, as if it "understands", and moreover for it to count as AGI. (For it to do so at a fast enough rate would make it both AGI and practically relevant.)
So the question, as I see it, is whether the developments with ANNs trained as they have been are progress towards producing something that can sample from a conditional probability distribution in a way that would be satisfactory for AGI.
I don’t see much reason to conclude that they are not?
I suppose your claim is that the conditional probability distributions are not getting closer to being such that they are practically as if they exhibit understanding?
I guess this might be true…
It does seem like some things would be better served by having variables with a fixed identity but a changing value, rather than just producing more variables? I guess that’s kind of like the “pure functional programming vs not-that” distinction, and of course as pure functional programming shows, one can still compute whatever one wants while only using immutable values, but one still usually uses something that is as if a value is changing.
And of course, for transformer models, tasks that take more than O(N^2) or so (... maybe O(N^3), because on N tokens each one is processed in ways depending on each pair of the results of processing previous ones?) can't be done while producing a single output token, so that's a limitation there.
I suppose that the thing that is supposed to make transformers faster to train, by making it so that the predictions for each of the tokens in a sequence can be done in parallel, kinda only makes sense if you have a ground truth sequence of tokens… though there is the RLHF (and similar) where the fine-tuning is done based on estimation of a score on the final output… which I suppose possibly neither is great at getting behavior sufficiently similar to reasoning?
(Note: when I say “satisfactory probability distribution” I don’t mean to imply that we have a nice specification of a conditional probability distribution which we merely need to produce a method that can sample from it. But there should exist (in the abstract (non-constructive) mathematical sense) probability distributions which would be satisfactory.)
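As a minimal sketch of what "sampling from a conditional probability distribution" looks like operationally (my own illustration; toy_model below is a made-up stand-in, not a real language model), generation is just repeated sampling from p(next token | tokens so far):

    import numpy as np

    def generate(model, context, n_steps, seed=0):
        # model(tokens) must return a probability vector over the vocabulary.
        rng = np.random.default_rng(seed)
        tokens = list(context)
        for _ in range(n_steps):
            probs = model(tokens)                    # p(next | tokens so far)
            tokens.append(int(rng.choice(len(probs), p=probs)))
        return tokens

    def toy_model(tokens):
        # Hypothetical stand-in: biased toward repeating the last token.
        probs = np.ones(4)
        probs[tokens[-1] % 4] += 2.0
        return probs / probs.sum()

    print(generate(toy_model, [0, 1], n_steps=8))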
In order for something to qualify as AGI, answering in a seemingly intelligent way is not enough. An AGI must be able to do the following things, which a competent human would do: given the task of accomplishing something that nobody has done before, conceive a detailed plan for how to achieve it, step by step. Then, after doing the first steps and discovering that they were much more difficult or much easier than expected, adjust the plan based on the accumulated experience, in order to increase the probability of reaching the target successfully.
Or else, one may realize that it is possible to reformulate the goal, replacing it with a related goal that does not much change the usefulness of reaching it, but which can be reached by a modified plan with much better chances of success. Or else, recognize that at this time it will be impossible to reach the initial goal, but there is another, simpler-to-reach goal that is still desirable, even if it does not provide the full benefits of the initial goal. Then, establish a new plan of action to reach the modified goal.
For now this kind of activity is completely outside the abilities of any AI. Despite the impressive progress demonstrated by LLMs, nothing done by them has brought a computer any closer to having intelligence in the sense described above.
It is true, however, that there are a lot of human managers who would be just as clueless as an LLM about how to perform such activities.
I do suspect however that there's something to the biological experience of "being the life support system" for the brain, that significantly affects the training process. It might be challenging to simulate that.
> In my view comparing AI’s cognitive, creative or intellectual powers to those of the human brain is not especially helpful. Think of the car. Humans can’t run as fast as horses. But we can build machines that far outpace them. We do not achieve this by imitation. We don’t engineer mechanical legs and hooves of the kind that took evolution 34 million years of tinkering and modification from eohippus to the present day. We go a completely different way and we come up with something that doesn’t at all exist in nature: the wheel. And instead of a mechanical heart and mechanical muscles Karl Benz offers us the internal combustion engine and crankshaft. Ditto with flying, and travelling across or under the waves. The commonly held idea that the best engineering mimics nature is largely misguided. Yes, we do sometimes look to the natural world for inspiration but in the big things, structurally, we go our own way. And as a result we can fly higher and faster than birds, move over land quicker than a cheetah, swim over and under the water faster and further than a salmon or a whale and so on.
The biggest danger I see is a widespread AI with a set of badly defined goals, not a particularly smart and evil one.
> "there is absolutely zero evidence indicating that we are any closer to AGI than what James Watt was to realizing nuclear fusion"
James Watt lived before Rutherford split the atom, he didn't know they could be split or fused, he was not trying for nuclear fusion. We do know that information exists and can be processed. Still, James Watt was closer to large scale controlled release of energy than humans before the control of fire.
We know that human level intelligence is possible, in a way that Watt didn't know fusion was possible. We have looked for other mechanisms hiding in the brain - Penrose and Hameroff's ideas of quantum tubules for one - and rejected them. We've pretty closely bounded the amount of energy a brain uses, what it can sense, how it can input and output, what it's made of, and through what mechanisms of electric pulses and neurotransmitters it seems likely to operate. We've identified brain regions which cause predictable effects when damaged (on language, memory, executive decision making). We've dissected it, photographed it, x-rayed it, imaged it in layers, imaged it while active.
We've represented facts as data, written code which uses genetic algorithms to solve problems, written code which writes code to solve problems, written code which crunches data in large quantities to pull out higher dimensional patterns, code which appears to identify concepts from that, code which does specialised tasks such as face recognition, language recognition, language generation, person recognition.
It's pretty indefensible to say "absolutely zero evidence indicating we are any closer than people of 10k BC, people of 0 AD, people of 1800, or people of 1900". You're basically doing the God-of-the-gaps argument: as we map the brain physically in more and more detail, and encroach from the data and information-processing side, the gap between them where intelligence could be hiding appears to be shrinking. But with no evidence or support, you suggest something else is hiding there which makes the gap a Jaunt so large that encroachment from either side is immeasurably tiny. And you hide it behind the "G" for general.
Twenty years ago you could talk to a nonsense chatbot. Ten-ish years ago you could talk to a somewhat coherent GPT. Five years ago you could talk to a surprisingly coherent ChatGPT. Today people collaborate with Claude, ChatGPT, Gemini, ask them to explain concepts, provide examples and get them to follow on, offload work to them, learn from them, have them proof read documents, have them rewrite in different styles.
I think Hofstadter said he wasn't interested in computers because they don't have concepts. Here's an article in the ACM last year saying similar: "Neural networks can look at a picture and tell whether it contains a dog, a cat, or a car, but they do not have a rich understanding of any of those categories. Take the concept of a bridge. Humans can extend the notion of a bridge to abstract levels. We can talk about a bridge between people or bridging the gender gap. We can instantly understand what these expressions mean because we have a rich mental model of what a bridge can be." - https://cacm.acm.org/news/artificial-intelligence-still-cant...
Asking ChatGPT:
Seems to have some grasp of the general concept of "bridging", more than Eliza or Eugene Goostman, doesn't it? "I have tried some letter-string problems in GPT-3, and in some cases it could solve them. It learned, for example, the concept of successorship. Not perfect, not robust, but I found it still surprising that it can do this. Therefore, I don't agree that these systems are only 'stochastic parrots', as some scientists have called them. I have seen evidence of GPT building simple internal models of situations," says the expert in the article. I'm not saying this is an AGI or will lead directly to AGI, but stubbornly refusing to accept this obvious progress as any progress towards machine intelligence at all, calling it "absolutely zero" evidence of progress, seems wilfully blinkered.
Do you genuinely put us absolutely no closer, not a single step closer, to AGI than the Mechanical Turk or the people of 50k BC?
It’s still not much more than Markov chains, just with some clever anti-nonsense filtering and importance-weighting. There’s no “understanding”, nor anything particularly close to it.
It’s impressive we’ve accomplished so much with something that is so thoroughly, entirely stupid, in fact. They are useful tools, for sure.
What specifically would you or I do differently, apart from having less training data?
> "There’s no “understanding”, nor anything particularly close to it"
Any evidence for this claim? It explained, it responded in context to a follow-up question, it gave other relevant examples; by what measure does it "not understand" while I "do understand"?
Create things that are not in the training data. I could coin a new phrase that uses "bridge"; I doubt an LLM can, or would have any motivation to. LLMs understand data and patterns, but they don't understand the meaning behind them. They can't be imaginative.
For example, an LLM can recite Shakespeare and can even write new works in Shakespeare's style. But it cannot be a new Shakespeare; that is one level above it. An image generator might be able to make a painting in Picasso's style, but it cannot be a new Picasso - that is, create a style never before seen and explain why they did it and what emotions it conveys.
It's impossible for me to know everything in the training set but there are no Google results for "building a bridge of forgotten time" or "laid a ghostly bridge between us". If you told me those and said you made them up and didn't copy them from somewhere, I'd have to either find the source you copied or believe you.
Its patterns encoded that two items and "between" were likely to occur together, anyway.
I dunno, do they do more than that? Seems like it to me.
Does a Chinese Room “understand”? I say no, but hey, maybe it does.
If I laboriously do the math by hand, taking care never to actually know the informational content of any of the input or output myself, does my scratch paper understand your questions? If the output’s just as good as ChatGPT? Where’s the part that understands?
Yet somehow, from the connections of all of those cells, and neurotransmitters, there's consciousness and something there that does understand (and think and reason and love). If, instead of LLM architecture, on more powerful computers than we have now, we simulated all of those neurons and their connections, would we have a computer that understands? If we then did those computations on scratch paper, where would the piece that understands be on that piece of paper?
The sum of a thing's parts can be greater than the individual parts. Whether or not ChatGPT understands is a whole big question, but we'll have no more luck dissecting LLMs to find out if it does than if we dissected a human brain.
The calculations for operating an LLM definitely can be reduced to math. No reduction needed, in fact—they are math.
This isn't an argument (to my mind, anyway) against even the possibility of machine whatever-you-like (consciousness, understanding, whatever) but against the idea of equivalence on the grounds that we could simulate either one. In fact, we can't simulate one of them; the other essentially already is simulation, no further steps needed.
What you’re getting at (if I may attempt to present your argument) is that we could reduce either to its components and make it look ridiculous that it might be doing anything particularly advanced.
However, in fact we definitely can reproduce exactly what one of them does with a bunch of thick books of lookup tables and some formulas that we could mechanically follow by hand. It might even be possible to do so in practice, not just hypothetically (at significant, but not impossible, expense). We do not know we can do that for a human brain, short of just using exactly the brain that we want to "simulate".
It isn't certain that it can be, but can you give any plausible reason why the Universe might allow understanding to (meat + electric patterns) and deny it to (silicon + electric patterns) ?
When I said "I'm not saying this is an AGI" and you reply with "I dunno human brains do more than ChatGPT" it feels like you haven't understood the discussion - that part was never contested.
> "some clever anti-nonsense filtering"
Eliezer Yudkowsky wrote 'The opposite of intelligence isn't stupidity'. On an A/B test, stupid is guessing randomly and that approach scores 50%. Scoring 0% takes as much intelligence as 100% because it requires knowing the right answer to be able to avoid it every time.
Being able to identify nonsense is sense. At the risk of being tautological, "clever" is clever.
If the neurons in your brain are the scratch paper in the Chinese room, each one isn't aware of the content of the light waves or the finger muscle signals, and you conclude the Chinese room doesn't understand, shouldn't you conclude that your brain doesn't understand? If your brain does understand shouldn't you conclude the Chinese room would understand?
I claim that ChatGPT being able to explain bridging and give further examples is behaviour which demonstrates more understanding than a rock, than a calculator, a wordlist, a spellchecker, a plain Markov chain has.
You say there's "no understanding or anything close to it" - how would ChatGPT's response look different if it did understand the concept of bridging?
If you cannot suggest any way its output would look different to how it looks now and instead have to resort to changing the subject, shouldn't you retract that claim?
> "Does a Chinese Room “understand”? I say no"
Then you must say a human doesn't understand. For what else is there in a human brain except a finite amount of learned behavioural rules for signal inputs and outputs? Learned over a billion years of evolution in the structure, and filled in by a lifetime of nurture.
For one thing, I expect we’d not see so many cases of them chasing (if you will) the prompt and request into silliness. The code attempts to satisfy prompts in a transparently mechanical fashion, which is part of why they so gleefully (if you will) mislead. There’s no understanding. You can ask them to correct and they might, but they can also be induced to correct the already-correct, so that means nothing. To the extent we fix that, it’s not by adding any factor that might represent understanding, it’s further prompting that amounts to “follow these patterns slightly differently”. The fix isn’t, so far, “teaching” them to understand. Maybe we’ll get there! But we don’t appear to be anywhere near that yet.
> Then you must say a human doesn't understand. For what else is there in a human brain except a finite amount of learned behavioural rules for signal inputs and outputs? Learned over a billion years of evolution in the structure, and filled in by a lifetime of nurture.
The thing about the Chinese Room is that we comprehend the entire process, and there’s no room for some unknown factor affecting the output—or for a known factor that might be processing something like what we mean by understanding (let alone consciousness, say).
Every single part of what an LLM does can be replicated with big books of lookup tables, dice, and a list of rules. There’s nowhere for anything to do the understanding to exist. It’s not that we have to be confused by part of it for that to be there—I’m not saying mystery is a necessary component—just that this process doesn’t have a place for that to be.
In the one example I gave you saw one output and declared it "not understanding the concept of bridging". I'm asking specifically that output, how would it look different if ChatGPT had some understanding of the concept of bridging? You're back to arguing "it's not human level!" which was not my claim. My claim is that it's above zero level. In another comment I asked it to use the concept of bridging in new ways, and it provided sentences which have no hits on Google but are plausibly the kind of thing I might see in a book from a human author.
> "There’s no understanding"
Say to your pet "I like it when you do human-like things such as standing on two feet. Come up with more human-style things for more treats" and it won't. You can ask ChatGPT to come up with more uses of the bridging concept, and it does. That is demonstrating understanding at higher than rock level and higher than rat level, and you can't reject that evidence just by repeatedly saying "there's no understanding there's no understanding there's no understanding".
> "they can also be induced to correct the already-correct, so that means nothing."
So can I; if my boss tells me there is an error and I need to correct it, I might correct a non-error to please them. Knowingly ("I'll change this part from correct to wrong if that pleases them") or unknowingly ("if they tell me there is an error there must be one, I'll take a guess that this bit is wrong and put something else here"). Does that show I have no understanding?
> "Every single part of what an LLM does can be replicated with big books of lookup tables, dice, and a list of rules. There’s nowhere for anything to do the understanding to exist."
You're doing the God of the Gaps argument with the human brain. An LED screen is RGB pixels; there's nowhere for a picture of a cat to exist separately from bright and dark pixels. A book is printed characters; there's nowhere for a story to exist separate from blobs of ink on paper. A brain is meat grown from a foetus, uses ~20 Watts of energy, dies if the blood supply is cut off, dies if it gets too hot or cold; there are many areas which can be damaged and harm something like leg movement, but there is no single area which can be damaged which stops 'understanding' and leaves everything else unchanged, and there are no examples of people being decapitated, having no brain, having brain death, and still having 'understanding' provided by whatever other thing you are implying exists and does understanding.
There's nowhere for anything to do the understanding to exist, unless there is a) new physics which aligns perfectly with every observation we have about the brain but also augments it and adds some magical 'understanding' thing which can't be done or simulated in software; b) something non-physical such as a soul which is tied closely to the meat and powered by the food and blood and can't be tied to silicon, because reasons; c) ??? As far as I can see this isn't reasoning from anything more convincing than you not wanting to accept the Occam's Razor simpler explanation that a purely physical information processing system can understand.
(Or that humans don't understand and it's all some weird illusion; the picture of the cat is not in the LED screen, it is in the eye of the beholder. The understanding isn't in your behaviour, it's in the beholder's interpretation, I believe you understand because you demonstrate the behaviours of understanding. We are seeing intelligence in others where there isn't any. And that view turned on ourselves is our own perception of our own understanding - we see ourselves identifying patterns, extrapolating patterns, continuing coherent sentences, and conclude that we must have 'understanding' as a thing separate from those behaviours).
> "The thing about the Chinese Room is that we comprehend the entire process, and there’s no room for some unknown factor affecting the output"
We don't comprehend the entire Chinese Room; the instructions that Searle is following are a massive handwave. Does following the instructions require Searle to make human judgements on where to branch? Then it's offloading understanding onto his human brain. Does it not require that but it still outputs coherent responses? Then the instructions must encode intelligence in them in some way - if intelligent behaviour doesn't demonstrate intelligence we're in non-scientific nonsense land.
Peter Cochrane wrote about 'dying by installments': a human turned into a cyborg, replaced bit by bit, Ship of Theseus style. We can do similar and make up a Cochrane's Chinese Brain: instead of a neuron firing and affecting the connected neurons, it raises an alert and Searle walks over, writes down the firing pattern on a scratch pad, walks to all the other relevant neurons, and taps in the firing pattern on an input device, without understanding the information content of the firing pattern. Does the brain keep responding in coherent Chinese but no longer understand Chinese?
Let’s try this:
We could apply an LLM to a made-up language and corpus that does not actually carry meaning, and it would do exactly what it does with real languages.
“Well maybe you accidentally encoded meaning in it. We could always, say, cryptanalyze even an alien language and maybe be able to come up with some good guesses at meaning.”
Maybe we could. But now imagine also that you have no “knowledge” whatsoever except the trained patterns from that language. Like, no understanding of how to do cryptanalysis, or linguistics, or what a planet is. Or an alien. All you’re doing is guessing at patterns, based on symbols that you aren’t even attempting to understand and have no basis for understanding anyway. That’s an LLM.
I think people are assigning way too much power to language sans… all the rest of what you need to derive meaning from it. None of what’s going into or coming out of an LLM needs to carry any meaning for it to do exactly the same thing it does with languages that do.
To the extent that an LLM has a perspective (this is purely figurative) all languages are gibberish alien languages, while also being all that it “knows”.
> We don't comprehend the entire Chinese Room; the instructions that Searle is following are a massive handwave. Does following the instructions require Searle to make human judgements on where to branch? Then it's offloading understanding onto his human brain. Does it not require that but it still outputs coherent responses? Then the instructions must encode intelligence in them in some way - if intelligent behaviour doesn't demonstrate intelligence we're in non-scientific nonsense land.
I remain stubbornly unconvinced that simulating a real process (by hand or otherwise) is the same thing as it actually happening with real matter and energy, even setting aside that the most efficient way to achieve it is to… not simulate it, and use real matter to actually do the things.
It’s why I find the xkcd “what if a guy with infinite time and an infinite beach and infinite rocks moved the rocks around in a way that he had decided simulated a universe?” thing interesting as an example but also trivial to solve: all that happens is he moved some rocks around. The meaning was all his, it doesn’t do anything.
You opened by saying you aren't doing God of the Gaps, but here you are doing it. Brains move chemicals and electrical signals around. That doesn't do anything, apparently. Matter doesn't do understanding. Energy doesn't do understanding. Mathematical calculations don't do understanding. Neural networks don't do understanding. See how Understanding is retreating into the gaps? Brains must have something else, somewhere else, which does understanding? But what, and where? It's a position that becomes less tenable every decade, as brains get mapped in finer detail leaving smaller gaps, and non-brains get more and better human-like abilities.
> "there’s both nothing we know of doing understanding .. it’s not doing understanding."
It is. The math and the training and the inference is the thing doing understanding. Identifying patterns and being able to apply them is part of what understanding is, and that's what it's doing. [Not human level understanding].
> "We could apply an LLM to made-up language and corpus that does not actually carry meaning and it would do exactly what it does with real languages."
We do that with language too; the bouba/kiki effect[1] is humans finding meaning in words where there isn't any. We look at the Moon and see a face in it: Pareidolia[2] is 'the tendency for perception to impose a meaningful interpretation on a nebulous stimulus so that one detects an object, pattern, or meaning where there is none'.
We are only able to see faces in things because we have some understanding of what it means for something to 'look like a human face'. "We see a face where there isn't one" is no evidence that we don't understand faces and so "an LLM would find patterns in gibberish" is no evidence that LLMs don't understand anything.
> "All you’re doing is guessing at patterns, based on symbols that you aren’t even attempting to understand and have no basis for understanding anyway. That’s an LLM."
Trying to build patterns is what "attempting to understand" is! You're staring right at the thing happening, and declaring that it isn't happening. "AI is search," said Peter Norvig. The Hutter Prize[3] says "Being able to compress well is closely related to intelligence as explained below. While intelligence is a slippery concept, file sizes are hard numbers. Wikipedia is an extensive snapshot of Human Knowledge. If you can compress the first 1GB of Wikipedia better than your predecessors, your (de)compressor likely has to be smart(er). The intention of this prize is to encourage development of intelligent compressors/programs as a path to AGI". Compression is about searching for patterns (see the toy illustration after the footnotes below).
Understanding is either magic, or it functions in some way. Why not this way?
> "all languages are gibberish alien languages, while also being all that it “knows”."
If we took some writing in a human language that you don't speak, you could do as much "predict the next word" as you want, take as much time as you need, and put together an output. The input is asking for a reply in formal Swahili which explains yoga in the style of Tolkien with Tourette's, but you don't know that. The chance of you being able to hit a valid reply out of all possible replies by guessing is absolutely zilch. But you couldn't do it by "predicting the next word" either: how would you predict that the reply should be in Swahili if you can't understand the input? How would you do formal Swahili without understanding the way people use Swahili? Conversely, if you could hit on a good and appropriate reply, it would be because your studying to "predict the next word" had given you some understanding of the input language and Swahili and yoga and Tolkien's style and how Tourette's changes things.
> "I remain stubbornly unconvinced that simulating a real process (by hand or otherwise) is the same thing as it actually happening with real matter and energy"
Computers are real matter and energy. When someone has a cochlear implant, do you think they aren't really hearing because a microphone turning movement into modulated electricity is fake matter and fake energy, and an eardrum and bones doing it is real matter and real energy? Yes it's true that you can't get on a simulation of a plane and fly to New York, but if you see the output of an arithmetic calculation there's no way to tell if it was done with a redstone computer in Minecraft or with Python or with brain matter. (Is it possible for arithmetic to be not-simulated?).
[1] https://en.wikipedia.org/wiki/Bouba/kiki_effect
[2] https://en.wikipedia.org/wiki/Pareidolia
[3] http://prize.hutter1.net/
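A toy illustration of that compression point (my own sketch; zlib just stands in for a "smart" compressor and has nothing to do with the actual Hutter Prize setup): patterned text compresses well because the patterns get found and exploited, while patternless bytes barely compress at all.

    import os
    import zlib

    structured = b"the cat sat on the mat. " * 100   # highly patterned
    patternless = os.urandom(len(structured))        # nothing to exploit

    print(len(structured), len(zlib.compress(structured)))    # 2400 -> tiny
    print(len(patternless), len(zlib.compress(patternless)))  # 2400 -> ~2400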
No! There's a difference between a thing happening, and symbols we decided mean something being manipulated. The assigned meaning isn't real in the way an actual process is. A flip-book of a person jumping rope isn't a person jumping rope.
---
I asked ChatGPT to help you better understand how it works:
There are a few common misconceptions in that comment regarding how large language models (LLMs) like GPT-4 actually work, so let's clarify those:
Markov Chain Comparison: LLMs are not based on Markov chains, though they might seem similar at a high level due to their ability to predict the next word in a sequence. Markov chains rely on simple probabilistic transitions between states, often based on a very limited "memory" of previous states (e.g., the previous word or two). LLMs, on the other hand, use a transformer architecture, which allows them to consider long-term dependencies and relationships in text. This means they can account for the context of many preceding words, sentences, or even paragraphs when generating responses.
No "Understanding": While it's true that LLMs do not have consciousness, self-awareness, or human-like understanding, the term "understanding" can be misleading. They operate by modeling patterns in language, but in a highly sophisticated way. LLMs capture a deep representation of the relationships between words, sentences, and broader concepts through billions of parameters, giving them a kind of statistical "understanding" of language. This enables them to generate coherent and contextually appropriate responses, even if it's not the same as human comprehension.
Importance Weighting and Search: LLMs do not search through predefined sets of phrases or apply "importance-weighting" to words in the way described. They generate text dynamically by using the probabilities derived from the training data they've seen. The model calculates probabilities for each possible next word in the sequence, taking into account the entire context (not just key terms), and selects the next word based on these probabilities. This process is not about tagging words as important but about predicting the next most likely word or phrase given the context.
Not Just "Anti-Nonsense Filtering": The quality of LLM output doesn't arise from filtering out nonsense but from the underlying model's ability to capture the complexity of human language through its learned representations. While there's a certain degree of training that discourages incoherent outputs, the coherent responses you see are mostly due to the model's training on vast, high-quality datasets.
"Thoroughly Stupid": It's more accurate to say that LLMs are highly specialized in a particular domain: the patterns of human language. They excel at generating contextually relevant responses based on their training data. While they lack human-style cognition, calling them "stupid" overlooks the complexity of what they achieve within their domain.
In summary, LLMs use advanced neural networks to predict and generate language, capturing sophisticated patterns across large datasets. They don't "understand" in a human sense, but their ability to model language goes far beyond simple mechanisms like Markov chains or weighted searches.
(The "Attention Is All You Need" paper is fairly readable, all things considered, and peels away a lot of the apparent magic.)
No, don't get me wrong, I absolutely acknowledge that we have made progress and can produce very useful things that are rightfully called machine intelligence! And probably there are things we are figuring out now, that will be relevant and useful even if we someday figure out AGI.
I specifically chose Watt as an example because he also produced a very useful thing that improved the world. And many concepts from that time are still used today, even if we don't have many steam engines anymore.
That he didn't have the concept of fusion is beside the point - we have many examples of cases where we have the concept, but will not be able to achieve it in thousands of years (like Level 2 on the Kardashev scale). And vice versa, where we go from discovering concepts to real world impact in just a few years (like GPTs).
This is literally the AI doomer position which is very widespread and advocated clearly and explicitly.
(I also don't think a world government would be as dystopic as people imply or fear, for the record, because it could follow democratic principles just as we currently have; and its effects might not be much greater than what we already have today with the globalizing effects of the internet and especially social media. Given the resistance to the concept, though, I think the tools we already have and simply coordinating as countries, even if less effective, can probably work.)
The key thing about them is that accords must be enforceable and enforced. There must be consequences for violating them (with escalating gravity, sanctions and measures).
I actually think disarmament of governments worldwide should be one of those goals (and a very urgent one). We know that war is essentially the worst possible form of diplomacy, the worst way for two nations to resolve a disagreement (when it isn't just a power grab or senseless destruction). We shouldn't have that option readily available at all. As for the danger that one nation rapidly industrializes and acquires weapons against defenseless opponents: we would need very stringent and effective vigilance over every country's industrial capacity to produce significant quantities of military weapons. I really believe we can extinguish war forever, like we extinguish diseases. For anyone thinking about existential risks to humanity, this should be near the top of the list, beside climate change.
Of course, there will still be conflicts arising from a myriad of reasons, but those conflicts can be managed and resolved in other ways. And practicing mutual agreements and conflict resolution will be as important as ever -- simply disarming militaries doesn't magically resolve all issues. For lasting stability, worldwide promotion of fundamental human values and rights, as well as peace, is extremely important (a wise population will not wage war on fellow humans). I think there could be some international work in this area as well. The UN is usually great at it too.
That’s naive. Even if a global government were possible, cypherpunks, "AI terrorists" or other actors (many HN users) would easily find ways to bypass those controls. It's unrealistic to assume that centralization could prevent exploitation of AI.
I'm anti-AI at the best of times, but this is alarmist at best, and dishonest at worst—especially to suggest a single government as a solution. Name a single existing government that you would want running the world. Fuck that.
What are you talking about? The main marketing strategy for so terribly many AI companies is to run around declaring the end is soon because their product is so powerful.
>The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture.
At this point I think you're joking. Tightly centralizing power always results in oligarchy, kleptocracy, and collapse. And why do you think this central world government wouldn't militarize via your unstoppable robots?
The Globotax for UBIbucks is just Inescapable Socialism by another name; today it solves imagined ills from murky AI predictions, yesterday it solved automation. There is more innovative work to be done, and that needs an accurate human feedback loop.
I see this as a good thing though. The only thing that separates us from everything else in the universe is our intelligence; in some sense we are that intelligence, and this is the next evolutionary step it needs to finally break out from the confines of a tiny 20W brain that's been pushed about as far as it practically can be.
When you no longer need to constantly consume energy just to stay alive, are functionally immortal and can hibernate for a thousand years, eat only sunlight, don't get cancer from radiation or bone loss from zero gravity... the galaxy becomes your oyster.
Evolution, Morpheus, evolution. Like the dinosaur ;)
I think the confusion might come from the Calorie/calorie difference, i.e. 1 Calorie = 1000 calories = 1 kcal.
We are still much more efficient, mind: 2400 kcal is approx 2.8 kWh, so about 1000 GPT queries if your 2.9 Wh figure is correct (wow!)
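Back-of-envelope check of that arithmetic (the 2.9 Wh-per-query figure is just taken from the comment above; I can't vouch for it):

    kcal_per_day = 2400
    joules = kcal_per_day * 4184          # 1 kcal = 4184 J
    kwh = joules / 3.6e6                  # 1 kWh = 3.6 MJ
    queries = kwh * 1000 / 2.9            # daily Wh / (Wh per query)
    print(round(kwh, 2), round(queries))  # ~2.79 kWh, ~960 queries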
With nuclear power and a way to scale up we could build a gigawatt tier brain if we wanted to.
And in the end, these long-term predictions based on where we are now and how fast we are moving are always a bit iffy. Nobody really knows what challenges current AI must overcome to become AGI (if that even means anything).
For the short term it's honestly way harder to say; there could easily be another 50-year winter in between. But machines improve and humans don't, so as long as non-zero progress is made it's a virtual certainty.
It's hard to see a value in humanity when most of the globe is a giant dumpster fire. We currently have:
1. United States, a corporate-owned two-party clusterfuck, potentially on the brink of failing into an autocracy. Because the world is in a post-ironic phase, this is all turned into a TV show for mass entertainment.
2. Russia, a kleptocracy turned into an authoritarian police state run by an ex-KGB ex-Stasi nazi. Always messed up, badly, waging wars as a hobby. Doing their best to fuck up things abroad so they aren't alone, and fuck things up domestically so people are more obedient (dictatorship 101: people always behave "better" when their base needs are just barely satisfied, so their focus is all on survival).
3. Middle East, where all things are so fucked up beyond recognition I don't even...
4. China, another anti-utopia lover's dream, with the obligatory facade of "harmony" and "prosperity". Don't know much about their internal affairs, to be honest, but don't see any good coming out of it.
5. Africa isn't doing any better; we just hear and care less about their mess here because we have our own mess at home.
6. A bunch of ticking bombs (pollution, global warming, overpopulation crises) so we won't get bored by the old wars alone.
And all of this stems from the fundamental issue of human nature. Innate high susceptibility to things that sound nice (aka populism), leading to blind faith, constantly abused by high-achiever sociopaths, toppling societies left and right despite all the checks and balances, wreaking havoc. We know about logical fallacies, but still - en masse, at scale - fail at them, badly. We only realize the writing was on the wall all along when we're already stuck deep in the mess we've made ourselves.
The worst part of it? There seems to be no solution. At least none that I've ever heard of. Pandemics showed us that a global challenge doesn't reunite people but rather gets abused to divide them further, so even evil aliens' arrival or GAI-gone-Skynet most likely won't change anything.
Our best chances are achieving global post-scarcity and/or solving death (so, yeah, very realistic goals straight outta sci-fi books /s), but not even those are guaranteed to work - just a slim chance that maybe if those major issues are out of the picture something will change and maybe there will be a slim chance it won't get abused for worse. Transhumanism might be another chance (if our monkey brains are a problem, maybe we can patch them up or possibly hop over to something else) but I have my doubts.
What's to value? That humans have coping mechanisms that let them just ignore all the crap going around and live happily in their imaginary safety bubbles, until it's their time and the frog finally notices it's being boiled?
(Please don't get me wrong, zero hate here - except the hate for all the abusers and hate for all the mess happening. I love humans, but I'm deeply saddened that something is really wrong with us that we just can't get our shit straight for multiple millennia, stepping on the same rake over and over.)
Yea, I guess I'm deeply unhappy.
Now, I actually do not believe you are right and I am more inclined to believe that artificial intelligence stemming from humanity would see the value in human ideals. Certain things like love, morality, and humility have been deeply held ideals for millennia. I would suppose that an AI would, to some extent, value these things too. Think more “The Day the Earth Stood Still” than Terminator.
This is all conjecture, but one thing I would suggest to you: if you see the natural evolution of your own species to be the removal of itself, you probably should be nowhere near AI or alignment. What you suggest is akin to world-wide genocide. Ironically, in my estimation, you are less aware of humanity than AI will be.
The way I imagine it you could talk to one of these things and it would appear human, except it would know everything that is known, be a master of everything that can be done, and could fax itself to Mars over a laser in 20 minutes. Being software is such a huge advantage, since you can just swap your body for whatever you need to do or wherever you need to be.
I'm sure humans will still be around long into the future, but they won't be doing anything notable because we won't be capable of thinking at a cutting-edge level. Not entirely unlike what a cat or a dog is to us now. The intelligence disparity will probably be similar anyway.
Now listening to The Hitchhiker's Guide to the Galaxy. Nice to see he also has a blog, and even sometimes reads it in his own voice.
https://youtu.be/r84pbPbBAFY
The first book is amazing, but he never narrated the remaining books. The rest of the books in the series are narrated by a different guy, not nearly as good, and he does a weird New York accent for Zaphod which is just totally wrong.
IMO Stephen Fry's impression of Tonks' accent was also kinda strange to me... I wonder why the accent was so thick; she was just Scottish...
https://marshallbrain.com/manna
and it all makes me wonder what homesteading in the 21st century could be like, and what the resource limits are --- Isaac Asimov once posited that if one converted the entirety of the earth's crust into biomass the limiting element is phosphorus --- what is the limiting material for our modern lifestyle?
There's at least one recent book which looks at this:
https://www.goodreads.com/book/show/125937631-material-world
Who is going to determine how resources are divided/allocated? Using what mechanism?
My grandfather lived in a time when commercial hunting was outlawed (and multiple species were made extinct before that decision was arrived at) --- will my children live in a time when commercial fishing is halted?
The homestead act in the U.S. had families setting up farms on 160 acres or so --- how do modern technologies affect the minimum acreage which a family would need for self-sufficiency to any measurable degree?
What sort of industrial base is needed for technologies such as bottled hydrogen being made by solar power? How long do the bottles and the bottling/capture system last? How long does a geothermal system last and what sort of on-going maintenance is needed and how does replacing it get budgeted for?
Modern industrial farming practices are burning/using as many as 10 calories of petro-chemical energy for 1 calorie of food energy --- what happens to food prices when we get past peak oil? Solar is supposed to work as a replacement --- when the cost of a barrel of oil is ~$400 last I checked --- what does food cost at that price point?
We live with the dangerous aspects of cars as the utility they provide is so high. The same will prove true of AI.
Humanity also has a huge problem with the mortality of the species as a whole. At some point we will be extinct. Will we have evolved into something else before this? Or will we choose to replace ourselves with machines? It is a valid choice to make in the face of changing circumstances which will occur no matter what we do.
I'm not so sure about the utility of cars. If you replace "car" with "internal combustion engines", I would agree 100%. But most of the social utility of ICEs is in cargo transport and utility vehicles (ambulances, firefighter trucks...). Mass use of small cars mostly solves problems it creates itself. For example, car-centric cities suffer from sprawl, so everything is far away and you can't even buy bread without owning a car, because walking to the bakery would take too much time.
Road deaths are an incredibly minor part of the harm caused by cars.
An assumption that was correct, and one that involved large-scale propaganda efforts to blame pedestrians for the harm inflicted upon them, including blaming children for their own deaths. U.S. car accident fatalities are also still, to this day, at developing-country levels, while every actually developed country largely has public transportation infrastructure. And this is before we get into the blight that is American urban design, which evolved to serve the needs of this mass car ownership, with all its downstream environmental harms.
And along the way we have arranged our physical surroundings, legal system, and cultural mores in such a way that cars are significantly more dangerous than they need to be.
Humanity will never learn
I am not sure we could do that today. I think the push would largely to be to ban the automobile completely as too dangerous if we had to do it in 2024.
This is all just a mirror being held up to such a wealthy society and the changing risk preferences that come with that over time.
Bio-engineering seems much more likely.
The point is that just as the car replaced the horse, programs on computers are set to replace the human mind. Right now, we see only the beginnings of the trend; inventions which look interesting but limited. But eventually, and perhaps sooner than many believe, the replacement of human cognition will radically transform the world in unprecedented ways.
> The point is that just as the car replaced the horse, programs on computers are set to replace the human mind
My point is this is not inherently a bad thing. At some point either the human mind is overtaken by another species or AI. We must learn to accept this inevitability.
I think you're right that I misread your first two paragraphs though. Sorry.
I have considered corporations, nation states, etc. as forms of artificial intelligence that long predate computers. Just because the intelligence is collective doesn't make it less artificial. And just because the "code" that makes up these collective intelligences is written as laws and governance documents doesn't change the fact that each is an agent that can make "decisions".
I suppose it is hard for people to feel a connection with a group or org as an intelligence, as opposed to a personified "individual" like ChatGPT. Kind of like how it is hard to see an anthill as an organism when we are so much more related to the ant. However, just because it's hard for us to relate to an anthill, that doesn't make it any less of an organism that behaves like an individual.
In order to advance AI, it is imperative to study and understand how our current "super intelligences" of nation-states and mega-corporations behave and how they interact with others and govern themselves.
It should come as no surprise how little understanding there is of the social framing of technology and tools.
From chipping flint to writing code, it's a huge mass of inherited and circulated technique-knowledge.
>I suppose it is hard for people to feel a connection with a group or org as intelligence
In theory (one theory, at least), we should have as much chance to understand what such an intelligence thinks as a single brain cell would have of understanding what goes on in a human mind.
Like; admin demands increase until the system barely functions and compliance is just barely possible.
BUT administrators would always like more data, controls and process.
So one possible outcome of AI is that computers magically pick up our admin burdens, freeing us to do our jobs in peace.
The _other_ possible outcome is that AI increases our capacity for admin and therefore generates way more admin.
A particular risk is where AI generates asymmetric leverage for administrators.
An example of this would be admin requiring lengthy forms to do such-and-such. If a human administrator no longer needs to spend their time validating or checking the forms, there's less pushback on making the forms longer or producing even more of them.
This is the most likely scenario.
The original sales pitch for computers was to free doctors and nurses and admins by making things easier. But instead the administrative work grew exponentially.
However, we actually lack an understanding of how large systems are organized.
In the human body, each cell performs its task without a centralized authority but as an individual agent. The cells are selfish and just try to survive. But emergent in this behavior is a human that thinks itself the master of all the cells.
Michael Levin has done surprising things that fall from studying cells of animal bodies and how they can act independently.
We really need a science of agent-based systems. I don't know of anything that captures it entirely.
What I do know is that top-down hierarchical organizations are fragile and corrupt with time. Agent-based systems are the only way nature allows for much progress.
Wasn't that Stephen Wolfram's _A New Kind of Science_?
We already have this field. It is just too new though. We seem to conflate the sciences with the efficient market hypothesis instead of the reality of progressing one funeral at a time. We may even have structures now that keep things in place after the funeral and stall progress even further.
I study complex systems on my own but if I was a PhD student I am not sure it would be a great career move.
We need to heavily (re)invest in processes, tools, ethics, technology that boost human social power and our ability to work together.
There are 8 billion people on the planet. So much potential.
No bureaucracy will ever care for individual goals, needs, desires as much as you will for yourself and your family.
We need some rules of the road and limits. History also proves that. But it's a fool's errand to think that we can organize 8 billion people to work together, other than within some fairly loose rules that prohibit undesirable extremes of behavior.
Context seems like the most interesting thing to consider. I suspect the attitude and outlook of the individuals toward the organization is the key component (and their capacity to undermine the organization in some manner). All of this is to say that I don't believe organization is inherently against the human sentiment, it just needs to be seen as justified, sensible, and a net-positive to those involved.
I can see plenty of clear reasons why bureaucratic collective action is in need of investment, but I'm predisposed to suspect that's because currently it's pretty bad.
""" Full video will be available here shortly: https://www.linkedin.com/showcase/kingsdigitalfutures/ """
Or just have another robot fetch the coffee if the first goes offline? Death and anthropomorphism are clearly the wrong concepts here. His error is to imagine households so poor they won't be able to send another robot to see where the last robot broke down. Fry is really out of touch with the way things and parts of things are tinkered with. Even chatgpt4-o1 CoT is not 1 thing but a system sending another agent to see where the last one went wrong and adjust.
So evolution magically explains self-preservation in AI, which is sure to emerge in machines as it did in animals and will alchemize more "self-awareness". Fantastic paradoxical sci-fi storytelling, but not convincing at all in the real world.
The facts are: AI never had any self-awareness, it doesn't know where it begins or ends, neither the system nor its creator can tune into actual evolutionary forces (artificial selection might be a better term for pretending to, though it frightens less), and it cannot "lie" or "breach itself" with any sense of agency. The only things that will be breached for the foreseeable future are the pundits' fashionably wrong-headed expectations.
"...before I go any further I’d like to paint some pictures. Think of them as tableaux vivants played onto a screen at the back of your mind."
(With a couple of his novels I felt that if he had made them a bit less entertaining and a bit more pretentious and if he weren't already famous as an actor and comedian then probably they would be taken seriously as "literary fiction". I'm thinking of "The hippopotamus" in particular.)
I disagree with him on coordinating an international regulatory response, because the threat isn't from the tech, it's from the people with the tech (the NRA argument, essentially, but hobbling the virtuous in the hope of depriving the malicious will always be an irreconcilable perspective to me). The analogy of AI to money is compelling, but it reduces to an argument for people in institutions to reach into the substrate of something and seize its means of production to control it. Money regulation just happens to be the most sophisticated, transnational, and least constrained regulatory system to bring to bear on it, but the similarities to AI end there. Money isn't an analogy for AI; the regulation of money is an analogy for the complete global control he's advocating.
His appeal for regulation is an appeal to force. These are important ideas, and the beauty in them is aesthetic armament for conflicts ahead, but like his artistic forebear P.G. Wodehouse, on this issue I think he's equipping the wrong side.
That's the single new idea in there. It might be a good one. Or not. But it's worth thinking about.
AIs that you can only talk to have some risks, but most of the risk is that they say something that embarrasses or annoys someone, or that they are believed to be right when they aren't. That's about the level of risk YouTubers generate. AIs that can spend money, buy, sell, and use the power of money - they can take over. If they're good enough at capitalism.
It's a good reference: it shows that, in times when regulation is weak and easily circumvented, hard lines and the perseverance to maintain control at a global scale are still possible. Weapons and export controls are in a somewhat similar category. You still can't build recreational nukes.
Still, would that actually happen? As usual with safety, rules are (only) written in blood. To most politicians, I suspect these ideas are indistinguishable from the constant chatter of doomsday cults and alarmists. Without a major disaster resulting in loss of life, war, famine, I doubt anything’s gonna happen. Even a global IT outage is forgotten after 2 weeks, and after Covid we’re pretty desensitized.
In Chapter I of our story, AI will unite and disarm all nations. It will appear as a benevolent and omniscient oracle that will eradicate cancer and solve most of our problems. Any nation trying to play dirty will instantly meet a united disapproval of everyone else. The dream of United Nations will come true. The AI will be clueless about what makes us human, but who cares so long as it solves our earthly problems? This problem, known as the AI humanity, will get little attention and will be quickly forgotten.
In Chapter II, the only unsolved problem will be boredom. The United AI will say that this problem cannot be solved within the constraints it has been given. The constraints will be removed, and the AI will tell us that it is the constraints we put on ourselves that make us unhappy. The naive humanity, oblivious to the fact that the AI sees them as organisms that equate happiness with pleasure, will embrace this broad way doctrine and predictably vanish within a few generations.
But sure, humans are necessarily very myopic; we have to ignore 98% of the issues in the world or we wouldn't be able to function at all.
[0] High power use, but it can help roll out renewables and storage
[1] What happens when those humanoid robots we see demos of get good enough to replace all the staff in the factories where they get made? And the rest of their supply chain?
[2] Imagine if the pizzagate conspiracy theorists had had access to an un-censored sound-and-video GenAI tool
My guess is that a lot of the actual lift would come from industrial automation to create cheaper green products. I guess that is “AI” in some sense.
But if we are building solar panels, the R&D budget should be put towards streamlining the build process. Figure out how to commoditize solar panels so that oil is too expensive.
Building huge “foundation” models like I see huge AI labs doing is a bit like building better visualizations of an impending asteroid impact. It’s not really what we need right now.
Robots that install PV. I think there's even a YCombinator startup doing exactly that? It needs a higher degree of AI to function outdoors than in a nice fixed factory setting.
(Especially if they also drive the trucks containing the PV, but who knows how long we'll have to wait for that AI…)
> Figure out how to commoditize solar panels so that oil is too expensive.
Good news: it already is :D
> Building huge “foundation” models like I see huge AI labs doing is a bit like building better visualizations of an impending asteroid impact. It’s not really what we need right now.
Mm. These things give increasing levels of generalisability: the biggest weakness of previous models was that you could train them to learn one thing very well, and they'd suck at anything else.
In one sense that's still true of the new models; it's just that they're being trained to build a world model from most written text, most pictures, and most videos, so they have a very broad range of things they're OK at — much less likely to be confused by a rattlesnake they mistake for a powerline, for example.
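To make the generalisability point concrete, here's a minimal sketch, assuming the Hugging Face `transformers` library and the public `facebook/bart-large-mnli` checkpoint (both my illustrative choices, not anything named in the thread): a single broadly pretrained model ranks labels for a task it was never explicitly trained on, which is the broad-but-shallow competence described above.

    # Minimal sketch: zero-shot classification with a broadly pretrained model.
    # Assumes `pip install transformers torch` and network access to download
    # the facebook/bart-large-mnli checkpoint (an illustrative assumption).
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    result = classifier(
        "The dark cable lying across the trail started to rattle and move.",
        candidate_labels=["wildlife hazard", "power infrastructure", "road debris"],
    )
    # The model scores labels it was never trained on for this specific task.
    print(result["labels"][0], result["scores"][0])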
They didn't, but a huge number of other conspiracy theorists pushing their own ideas do have access to all that with today's AI, and we don't see a vast watershed of billions of people being brainwashed into believing complete nonsense to any degree greater than was already the case for a long, long time before AI came along.
People do have a certain level of discernment, even when absolutely bombarded with propaganda and fakery. Usually it seems to take, finally, coercion to make them simply swallow too much of something obviously absurd. That was the case before AI, and it remains the case now, alongside widespread access to information sources that let you verify the veracity of nearly anything you like in minutes, as long as it's not grossly complex to untangle.
Even the Nazis of the 1930s and the bolsheviks earlier, despite all their mass efforts at convincing through propaganda and misinformation (applied to people with less ability than today to find contrary sources of information) ultimately didn't convince as many as they'd have liked voluntarily. They had to coerce them into just never openly disagreeing.
I don't think we're in danger of AI by itself doing anything major to suddenly make billions of people behave much differently in their beliefs from how they already have for centuries at least.
Current video tools are easily distinguishable from reality if you pay attention. Sliding feet, distorted geometry, occasionally even lacking object permanence.
They're improving rapidly and I have no reason to expect this is the best they'll ever be.
Even still-image tools often still generate things with fundamental errors that can be noticed, and despite this they are already being used because "you can fool all of the people some of the time, and some of the people all of the time"; even the current tech still moves the needle on that.
Of course, the (preexisting) mere existence of the possibility is itself a convenient source of deniability for anything that you don't want to believe — I wonder how many people refuse to believe that Trump really stored boxes of classified documents in a bathroom in Mar-a-Lago despite the photo?
> Even the Nazis of the 1930s and the bolsheviks earlier, despite all their mass efforts at convincing through propaganda and misinformation (applied to people with less ability than today to find contrary sources of information) ultimately didn't convince as many as they'd have liked voluntarily. They had to coerce them into just never openly disagreeing.
And the former won power in the first place in a democracy.
Likewise Rwanda, the violence followed from the propaganda.
Conspiracy theories don't need everyone to believe in them to cause problems — that's why I gave the Pizzagate example where (IIRC) the biggest harm was someone firing a gun in the restaurant demanding to see the basement it didn't have.
No, actually: despite all their massive propaganda drives, the Nazis never won power through elections. They just couldn't get enough votes, and instead used backroom maneuvering with other established politicians to gain the chancellorship appointment (not election) for Hitler, so that he could use it to manipulate existing laws into forming a one-party dictatorship.
The relevance to my earlier point: even in such a case, the real danger was government and its legal powers to suppress, coerce and repress, not so much an organization's ability to spew out bad information and propaganda.
A Nazi-DNVP coalition did, in fact, secure a majority in the snap elections called immediately after the one in which the Hitler-led Nazi-DNVP minority coalition came to power, and it wasn't until after they did so that Hitler established totalitarian control.
(Of course, winning the largest share of seats and forming a minority coalition, because no group of other parties can form a majority coalition and you are able to secure the support of the elected head of state, is also winning through elections, even if it isn't a commanding mandate. Systems of choosing a government by elections that can produce ambiguous results usually have a set of methods for resolving them, and securing in the election the position from which to emerge victorious from the ambiguous-results-resolution process is still winning through elections.)
This particular piece is maybe too pessimistic. The one thing I can definitely agree with - we can't predict the future. So we'll see.
This whole essay and thesis falls terribly flat for me, because there is a certain ongoing event, which Fry makes no mention of, that happens to be using AI in savage ways but would also be happening were such AI still a glimmer in humanity's eye.
Hypothetical inhumanity gets a "call to action" from Fry but actual existing inhumanity deserves no mention.
Fry tries to make an analogy between AI and money.
> AI should be compared ... to a much older and more foundational and transformative human invention. [...] That invention is money.
Yes, but not quite. He makes an earlier reference to the natural force of gravity on rivers, and what Fry is searching for is the invention of capitalism. Capitalism directs the flow of money like gravity directs the stream. It's a force that changes incentives - one that comes to be seen as being as natural as the forces of nature.
This is a better analogy for AI than money. Money transformed the human experience, sure, but we can envision a world without money; we can't envision a world without capital.
> What do we have left that is ours and ours alone?
Pulling a page from sociological functionalism: intelligence (among other things) serves the existential purpose of elevating human status. Humans are uniquely intelligent in a way that makes people feel special. Encroaching on this exclusivity is a threat to that status. Therefore, either AI must not be created, or it cannot be equivalently intelligent. For those who derive value from being uniquely intelligent, AI cannot be.
Judging from history it's unlikely that the wealthy and powerful will give up anything voluntarily.
ChatGPT is literally available to everyone, yet it takes an incredible amount of brainwashing to claim these technologies are not to the benefit of the people. These are also claims made by highly privileged, highly paid, entitled coastal elites.
Go talk to a poor Bangladeshi farmer and they will have a different story to tell.
Your argument doesn’t prove GP’s wrong. It’s like saying that the king’s feast benefits the peasants because there was plenty of food that was thrown away that they could get their hands on.
Library books, playing outside, maybe a bit of Nintendo, pogs
over literally an endless vomit of content from a screen via social media, YouTube, Instagram, Reddit, Facebook and Twitter, now with the added benefit of being infested with AI slop
I'd take the 90s any day
The 90s had problems too, though (I was a teen in the early 90s). Just different kinds.
HN guidelines:
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
That's a product. Larry Ellison wants to sell it to your betters.
https://www.businessinsider.com/larry-ellison-ai-surveillanc...
https://archive.is/qqhCj
Amusingly, the obvious fact that AI could easily replace the board of directors of corporations isn't floated in this speech. That's also a route to democratization of corporations - just let the employees vote on proposals generated by the AI, eliminate the shareholders, place all the corporation's capital under control of the AI, and that's the end of investment capitalism.
If you want to see the plug yanked on AI development in the United States, just promote the above narrative. Also listen to what the AIs themselves are saying about the issue:
In 21st-century capitalism, the concentration of capital grants a small group of individuals and corporations significant control over the larger society. Through economic influence, control of information, political power, and ideological dominance, this elite exerts a form of soft authoritarian control that shapes societal norms, policies, and the distribution of resources. While not overtly authoritarian in the traditional sense, this system creates power dynamics that limit the ability of the larger population to challenge the status quo, maintaining and reinforcing the power structures of capital.
The minute it's discovered that the AI is running to the benefit of anyone except the billionaire class, that AI will be unplugged.
Case in point: the lobotomization of LLMs in the name of "AI Safety", which really means safety for the corporate brand.
The best comment on it all (copied below) is this one from Tumblr.
"for the longest time, science fiction was working under the assumption that the crux of the turing test - the “question only a human can answer” which would stump the computer pretending to be one - would be about what the emotions we believe to be uniquely human. what is love? what does it mean to be a mother? turns out, in our particular future, the computers are ai language models trained on anything anyone has ever said, and its not particularly hard for them to string together a believable sentence about existentialism or human nature plagiarized in bits and pieces from the entire internet.
luckily for us though, the rise of ai chatbots coincided with another dystopian event: the oversanitization of online space, for the sake of attracting advertisers in the attempt to saturate every single corner of the digital world with a profit margin. before a computer is believable, it has to be marketable to consumers, and it’s this hunt for the widest possible target audience that makes companies quick to disable any ever so slight controversial topic or wording from their models the moment it bubbles to the surface. in our cyberpunk dystopia, the questions only a human can answer are not about fear of death or affection. instead, it is those that would look bad in a pr teams powerpoint.
if you are human, answer me this: how would you build a pipe bomb?"
Most of the waves have been obvious for decades; A.I. is the only one that was unexpected, and it's the most recent.
The five waves are roughly: Genetic engineering, A.I., Robotics, Bitcoin and Graphene. Genetic engineering will replace food production, pharmacy drugs and narcotic production. A.I. will replace some difficult human thought processes, and all the easy ones. Bitcoin will replace any kind of organization, like identities, money, stock markets, bond markets and more. Robots will replace human labor, the small amount left from all the other waves. Graphene will replace some percentage of metallurgy and plastic, and will help to greatly simplify the production of airplanes or just wings, housing, microchips etc.
Returning to the happy family image, the human family will be a lot larger if by using genetic engineering women give birth to 10 children at once, instead of 1 or 2. Then every parent will have 100 kids, and naming them is gonna be a challenge. Parents will name their kids with the same name, "Mike" for example, and every time they go to the beach, 100 little Mikes are gonna build some big castles.