The author makes a good point that it's important to define what "a good simulation" means.
On one extreme, we cannot even solve the underlying physics equations for single atoms beyond hydrogen, let alone molecules, let alone complex proteins, etc. etc. all the way up to cells and neuron clusters. So that level of "good" seems enormously far off.
On the other hand, there are lots of useful approximations to be made.
If it looks like a duck and quacks like a duck, is it a duck?
If it squidges like a nematode and squirms like a nematode, is it a [simulation of a] nematode?
(if it talks like a human and makes up answers like a human, is it a human? ;)
> If it looks like a duck and quacks like a duck, is it a duck?
ISTM that the answer is "in a way yes, in a way no".
Yes, in that we reasonably conclude something is a duck if it seems like a duck.
No, in that seeming like a duck is not a cause of its being a duck (rather, it's the other way round).
When we want to figure out what something is, we reason from effect to cause. We know this thing is a duck because it waddles, quacks, lays eggs, etc etc. We figure out everything in reality this way. We know what a thing is by means of its behavior.
But ontologically -- ie outside our minds -- the opposite is happening from how we reason. Something waddles, quacks & lays eggs because it is a duck. Our reason goes from effect (the duck's behavior) to cause (the duck), but reality goes in the other direction.
Our reasoning (unlike reality) can be mistaken. We might be mistaking the model of a duck or a robot-duck for a real duck. But it doesn't follow from this that a model duck or a robot-duck is a duck. It just means a different cause is producing [some of] the same effects. This is true no matter how realistic the robot-duck is.
So we may (may!) be able to theoretically simulate a nematode, though the difficulty level must be astronomical, but that doesn't mean we've thereby created a nematode. The same seems to hold for simulating anything else.
At least this is my understanding, I could be mistaken somewhere.
I think this is also one possible answer to the famous 'zombie' question.
(aside: I'm not especially arguing with you; just thinking out loud in response to what you wrote)
> Something waddles, quacks & lays eggs because it is a duck.
Or: something does those things, period. We notice several such somethings doing similar things, and come up with an umbrella term for them, for our own convenience: "duck." I'm not sure how far different that is from "is a duck", but it feels like a nonzero amount.
I guess where I'm going is: our labels for things are different from the "is-ness" of those things. Really, duck A and duck B are distinct from each other in many ways, and to call them by one name is in itself a coarse approximation.
So if "duckness" is a label that is purely derived from our observations, and separate from the true nature of the thing that waddles and quacks, then does some other thing (the robot duck) which also produces the same observations, also win the label?
Luckily, I'm a solipsist, so I don't have to worry about other things actually existing. Phew.
I've never spoken to a self-declared solipsist before, though of course we all act like solipsists to a degree :-). Anyway, I will assume that solipsism is false for the rest of this post, that's another question.
It's amazing how many philosophical debates end up at the question of universals that you've just alluded to.
My own position, very briefly, is that when we predicate 'duck' (as in "this is a duck") of a given thing, we are describing reality, not just conveniently labeling some part of it in our own minds. If 'duck' is merely a label that we apply to something, then anything we predicate of 'duck' is merely something we predicate of our own mental categories. But this isn't so: the sentence 'ducks quack' refers to something real, not just our thoughts. But at the same time, the sentence is not referring to Duck A or Duck B, but to ducks in general. From this, it seems to follow that some general 'ducky-ness' must have a kind of existence (otherwise how could we predicate of it?), and that this 'ducky-ness' must be shared by everything that is a duck (otherwise, by what is it a duck?).
In the opposite scenario that you've described, all predication would be limited to our thoughts. Someone could say "ducks quack", and someone else could say "ducks never quack", and both would be right, because both would merely be describing their own thoughts. Obviously, all reason, science, possibility of communication, etc, is finished at this point :-)
Of course our labels can be wrong. Someone could mistake a swan for a duck. Also, there is infinite variation from duck to duck, so the 'ducky-ness' of each duck in no way tells us everything about that duck. Duck A and B are unique individuals. Also, the 'ducky-ness' only ever exists in a given duck; it's not like it has some independent ethereal existence.
Essentialism is the astrology of ontology.
If you say so :-D
That's an incredibly narrow slice of properties of ducks, nematodes (and humans).
Is there truly so little that makes up the soul of a duck? No mention of laying eggs? Caring for its young? Viciously chasing children across the lawn of the local park? (I know that's usually the purview of geese; however, I have seen ducks launch the occasional offensive against too-curious little ones.)
Dude, idk if you're trolling, but if not, the GP meant: if something exhibits the properties of a duck, is it a duck?
GP's comment still stands: how many properties of the duck does a simulation need to exhibit before it can be considered an accurate simulation?
If it looks like a duck and quacks like a duck but isn't made of duck meat, you probably don't want to eat it.
If it looks like a duck and quacks like a duck but doesn't have feathers, you probably don't stuff its skin covering in your pillows.
Not really trolling, unfortunately.
GP, in their parenthetical, insinuated that if it (which I take to mean an LLM) talks like a human and makes up answers like a human, as an LLM is wont to do, is it human?
> (if it talks like a human and makes up answers like a human, is it a human? ;)
While I don't subscribe to the idea that humans have a soul, or some other dualist take, I do think that there is far more to a human than just our cognitive properties. So to convince me that something is human takes more than just listening to it talk to me or make things up during the discussion.
So, too, with a duck. Sure, if all I have to go on is hearing a quack then I would say yeah that's most probably a duck.
Just like if you told me a barn was red when we saw just one side, I'd say it's probably red.
I know, I know, I am fun at parties.
It's like the idea that a civilized society would rather have a criminal go free than an innocent man be convicted. We are more than a brain, of course, but how confident are you? What's an acceptable level of risk that you're potentially killing someone?
If you can simulate a behavior then that means you have a working understanding of how to get that behavior.
Not necessarily. You can simulate a high-level process independent from the low-level processes that make it up.
For example you can simulate traffic without simulating the inner workings of every car's engine, or even understanding how the engine works.
I might nitpick the "understanding" part. Lots of ML-type statistical models produce good results, but we can't very well explain how they work.
Or maybe by "working understanding" you mean "we have a black box that does the thing we wanted."
You can simulate behaviour with a lookup table, but that's not the same as understanding.
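As a toy illustration of that point (every name and mapping here is invented for the example), a lookup table can reproduce observed behavior while encoding no understanding of why it occurs:

```python
# A "duck simulator" that is nothing but a lookup table of observed
# stimulus -> response pairs. It reproduces the recorded behavior
# perfectly, yet contains no model of why a duck behaves that way.
OBSERVED_BEHAVIOR = {
    "sees_bread": "approaches and quacks",
    "is_poked": "flaps and retreats",
    "hears_dog": "freezes, then flees",
}

def simulated_duck(stimulus: str) -> str:
    # Any stimulus outside the table exposes the absence of understanding.
    return OBSERVED_BEHAVIOR.get(stimulus, "<no entry: behavior unknown>")

print(simulated_duck("is_poked"))         # -> flaps and retreats
print(simulated_duck("sees_reflection"))  # -> <no entry: behavior unknown>
```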
> If it looks like a duck and quacks like a duck, is it a duck?
No, if it doesn't do everything else a duck does. You can have a robot dog, but you won't need to take it to the vet, feed it, sweep up its hair, let it go outside to go potty, put up a warning sign for the mailman, or take it for a walk. You can have a simulated dog do all those things, but then how accurate will the biological functions be in trying to model its physiology over time?
Will it give us insights into real dog psychology so we can better interact with our pets? Or does that need to happen with real dogs and real human researchers? Wildlife biologists aren't going to refer to simulated ducks to research their behavior in more depth. They'll go out and observe them, or bring them into the lab.
I suppose that's where a nematode is interesting — it's maybe juuuust simple enough that a real nematode on a plate of agar (as described in the article) might be able to be simulated well enough that we could actually make useful, and even long-term predictions about it based on a mere model.
Not to say I'm fully convinced, but I can see the appeal.
That would be interesting if so. Certainly worth trying.
>Wildlife biologists aren't going to refer to simulated ducks to research their behavior in more depth.
I'm pretty sure behavior is simulated all the time, in everything from migration to predator-prey dynamics to population dynamics, and so on. If we don't use simulations to understand all the little nuances and idiosyncrasies of behavior right now, that's probably just because those are at present extremely difficult to model. But I suspect such simulations absolutely would be used if they were available. Of course, they would be treated as complementary to other forms of data, but they wouldn't be disregarded outright.
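For a concrete sense of what such behavioral models look like, here is a minimal sketch of the classic Lotka-Volterra predator-prey system, the textbook starting point for the population-dynamics simulations mentioned above (the rate constants are illustrative, not fitted to any real species):

```python
# Classic Lotka-Volterra predator-prey model, integrated with Euler steps.
#   dx/dt = a*x - b*x*y     (prey reproduce; predation removes them)
#   dy/dt = d*b*x*y - c*y   (predators grow by eating prey, else die off)
a, b, c, d = 1.0, 0.1, 1.5, 0.75   # illustrative rate constants
x, y = 10.0, 5.0                   # initial prey and predator densities
dt = 0.001

for step in range(200_000):
    dx = (a * x - b * x * y) * dt
    dy = (d * b * x * y - c * y) * dt
    x, y = x + dx, y + dy
    if step % 20_000 == 0:
        print(f"t = {step * dt:5.0f}: prey = {x:7.2f}, predators = {y:6.2f}")
```

The familiar boom-and-bust oscillation emerges from two coupled equations, with no individual animals (let alone their internals) anywhere in the model.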
This is an interesting point, really. At what level of duck-ness do we decide that it's acceptably close to a duck? I agree that, taken ad absurdum, just because something looks like a duck and quacks like a duck, it doesn't mean at all that it is a duck. I can enclose a Raspberry Pi in a fake duck and it will fulfil the above criteria, and perhaps from a distance it can be mistaken for a duck, but it has practically nothing to do with ducks. At the same time, it might be enough if our objective is to make some low-cost garden decorations :)
What I'm trying to say is: as long as the simulation fulfils the objectives set out, it's useful, even if it is very far from the real thing.
Then the next question is: what are the objectives here?
> I can enclose a Raspberry Pi in a fake duck and it will fulfil the above criteria, and perhaps from a distance it can be mistaken for a duck, but it has practically nothing to do with ducks. At the same time, it might be enough if our objective is to make some low-cost garden decorations :)
Agreed, it depends on what data you want out of the simulation. If you want to see how your dog will react to a duck, maybe it's good enough. If on the other hand you want to see how a duck will react to getting poked, well... your raspberry pi is worse than useless.
Assuming a dog only cares about how a duck sounds and not how it smells. We know that wouldn't work for other dogs. Which brings up something about simulating other animals: they're not human, and likely have sensory experiences that differ from our own. Perhaps a nematode worm is simple enough that we don't have to worry, but a dog or a duck is complex enough that we might leave that part out of the simulation. Or just not know how to fully simulate dog olfactory processing.
cf. the philosophical zombie:
https://en.wikipedia.org/wiki/Philosophical_zombie
We really have no idea whether consciousness is something that can arise from computation, or whether it is somehow dependent on the physical substrate of the universe. Maybe we can create a virtual brain that, from the outside, is indistinguishable from a physical brain, and which will argue vociferously that it is a real person, and yet experiences no more conscious qualia than an equation written on a piece of paper.
> We really have no idea whether consciousness is something that can arise from computation, or whether it is somehow dependent on the physical substrate of the universe.
I don't understand this argument. How is the computer running the computation not part of the "physical substrate of the universe"? _Everything_ is part of the universe almost by definition.
The computer is physical, but the computation is (at least) a level of abstraction above the physical layer. The physical process may be the important part, not just the (apparent) algorithm that the physical process executes.
>but the computation is (at least) a level of abstraction above the physical layer
I think you're kinda right, but the tricky thing here is that the computation itself is physical too. The abstraction may just be whatever it is that the computation has in common with the thing it's modeling in brains, which could mean it, too, has consciousness, or is 'doing' consciousness in some sense.
If the physical process is "the important part" then that can be modeled in an abstract way, too.
We can run any algorithm using billiard balls:
https://en.wikipedia.org/wiki/Billiard-ball_computer
What I’m saying is that we don’t know that consciousness is just an algorithm. If the physical implementation matters, then modelling it might be useful or interesting, but it wouldn’t actually create the real thing. Maybe consciousness arises from a specific pattern of movement of electrons, but not from any pattern of billiard ball movements.
>What I’m saying is that we don’t know that consciousness is just an algorithm.
I caught that, I think(?). I would flag that the upshot or implication can be (1) something outside of physics altogether, which, while romantic, I think is at the extreme end of extreme in terms of tenuousness, inviting bad metaphysics, bad notions of emergentism, etc.; but there's also (2) something about the difference in how something is "embodied", which, as you note, is still about billiard-ball-style simulation at the end of the day, but can raise interesting questions about what kinds of simulations work.
I also do wonder if there's some kind of physicalist essentialism working its way in there. If there's something about electrons that's importantly different, that something (hopefully) is a physical property and as such able to be modeled. If consciousness is intrinsically and preferentially tied to a certain kind of matter, e.g. atoms, or brain-stuff, that starts to sound a little woo-ey.
What you really mean is is there any meaningful difference in what can be processed by biological computing and non-biological computing.
The answer to that would appear to be, no.
What makes you claim that? I have not seen proof of that (on the contrary, we don't have smooth emulation of animal-like movement yet, which brains figure out pretty fast).
The question is: what evidence is there that the simplest structures we can call brains are doing something which is fundamentally impossible to do with something other than a brain?
Given that brains are fundamentally governed by the same physical laws as everything else, there shouldn't be anything about them which cannot be replicated in some way by something sufficiently capable of emulation of their processes.
That's not to say it's simple. Just that brains obey the laws of physics, and as long as that's true, they should be replicable.
Unless your contention is that brains are somehow able to operate outside the constraints of the laws of physics, in which case we're going to have a fundamental difference of opinion as to the nature of the universe and whether things with brains are particularly special.
"Able to be replicable" is a far cry from being practically replicable.
We are unable to get two biologically identical (or at least extremely close) brains of identical twins to develop in the same way, let alone two distinct brains, or a simulated version of a brain.
The claim is potentially equivalent to a claim that since the universe is theoretically computable, we'll eventually be able to simulate it.
Not at all. I'm not saying we actually will simulate it, just that it's got the property of theoretical simulatability. Which means there's not anything magical going on under the hood. Which means consciousness isn't magic.
This 3 pounds of meat between my ears is certainly special (perhaps the most complicated thing in the known universe), but it is certainly not magical.
Just because something is non-magical doesn't mean it can necessarily be simulated by a computer, especially a practical computer that we can actually build given our level of technology and available resources.
I would rather ask one to think: what evidence is there that we cannot do a brain on non-gooey stuff?
If I take every atom/molecule from one brain (assume a snapshot in time) and replicate it one by one at a different location, and replicate the external IO (stimulus, glucose...), what evidence do we have that this won't work? Likely not much.
Now instead of replicating ALL the atoms/molecules exactly, I replace one of the higher level entities like a single neuron with a computational equivalent - a tiny computer of sorts that perfectly replaces a neuron within the error bars of the biological neuron. Will this not work? I mean, will it not behave in the same exact way as the original biological brain with consciousness? (We have some evidence that we can replace certain circuits in the brain with man-made equivalents and it continues to work.)
You know where I'm going with this... FindAll, ReplaceAll. Why would it be any different?
---
If I had to argue that it wouldn't be the same, here's a quick braindump off the top of my head:
- some entities like neurons literally cannot be replicated without the goo. physics limitation? but the existence of the goo is a proof of existence. but still, maybe the goo has properties that cannot be replicated with other substances
- our model of the physical world has serious limitations. on the order of pre-knowing-speed-of-light-limitation. maybe putting the building blocks together does not create the full thing. maybe building blocks + magic is needed to create the whole.
- other fun limitation of our physical model
Note that I was responding to a comment claiming:
> What you really mean is is there any meaningful difference in what can be processed by biological computing and non-biological computing.
> The answer to that would appear to be, no.
So specifically to "appear to be, no":
> I would rather ask one to think: what evidence is there that we cannot do a brain on non-gooey stuff?
Because we haven't practically done it despite decades of trying? I don't think this should stop us from trying, and it's pretty obvious it won't. But there is no proof either way — potentially the problem is so complex that we never get there in practice?
(Also note that proving a general negative statement is pretty tricky and usually avoided — we usually look for counter-examples, evaluate a full finite/countable set of scenarios, etc)
Another point I'd add is that we've already got practical examples of computations that are especially hard for computers to practically do even if they are "obvious" in theory — we use those as a base for encryption, for instance.
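Modular exponentiation is the standard example of that asymmetry: the forward direction is cheap, while the inverse (the discrete logarithm) has no known comparably fast general algorithm. A minimal sketch with a deliberately tiny prime (real systems use moduli hundreds of digits long, where the search below is hopeless):

```python
# Forward direction: fast even for enormous exponents
# (Python's three-argument pow uses square-and-multiply).
p, g = 1_000_003, 5          # toy prime modulus and base
secret = 123_456
public = pow(g, secret, p)

# Inverse direction: recover `secret` from `public` by exhaustive search.
# Feasible only because p is tiny; at cryptographic sizes it is hopeless.
def brute_force_dlog(target: int) -> int:
    value = 1
    for exponent in range(p):
        if value == target:
            return exponent
        value = value * g % p
    raise ValueError("no exponent found")

found = brute_force_dlog(public)
print(found, pow(g, found, p) == public)  # recovers the secret ... here
```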
I think an even simpler argument can be made: our brain develops in response to the physical stimuli we experience from birth (or even earlier).
Basically, even if the brain is a simple computation engine, can we put a simulation through the stimuli our brain experiences (not easily), and will the lack of them turn it into an entirely differently behaving system?
So a Chinese Room?
Tell me you've read Blindsight... and if you haven't, go read it, I'll wait.
Hehe, yes.
... and then read The Freeze-Frame Revolution.
Thanks, I've been looking for some recommendations actually.
(It's been a few months since the last time I rambled aimlessly through my house muttering consciousness is a parasite under my breath.)
There's a sequence of stories by different authors that allude to one another that I find to be an interesting read in that order.
First, there's the BLIT stories by David Langford. Several of these are online.
https://www.infinityplus.co.uk/stories/blit.htm
https://www.nature.com/articles/44964 (did you know that Nature did science fiction short stories? https://www.nature.com/nature/articles?type=futures )
https://www.lightspeedmagazine.com/fiction/different-kinds-o...
Then, you go to Accelerando
https://www.antipope.org/charlie/blog-static/fiction/acceler...
> Luckily, infowar turns out to be more survivable than nuclear war – especially once it is discovered that a simple anti-aliasing filter stops nine out of ten neural-wetware-crashing Langford fractals from causing anything worse than a mild headache.
This is followed by its sequel-ish Glasshouse https://www.goodreads.com/book/show/17866.Glasshouse
It's not technically a sequel, but one can see the universe of Glasshouse following from the ending of Accelerando.
A quick diversion to Vernor Vinge with The Peace War and Marooned in Realtime https://www.goodreads.com/series/57273-across-realtime (there's a short story in there titled The Ungoverned)
Implied Spaces by Walter Jon Williams (https://www.goodreads.com/book/show/2059573.Implied_Spaces), which takes another approach to the unexplored events leading into Glasshouse and a possible path of escalation. The reference passage in this book is:
> “I and my confederates,” Aristide said, “did our best to prevent that degree of autonomy among artificial intelligences. We made the decision to turn away from the Vingean Singularity before most people even knew what it was. But—” He made a gesture with his hands as if dropping a ball. “—I claim no more than the average share of wisdom. We could have made mistakes.”
This then ends with... {spoilers}.
Maybe the longer writeup we put out with Michael as co-author helps add some useful extra details: https://arxiv.org/abs/2308.06578
I'm afraid a neuron may not be a logic gate with synapses serving as inputs/outputs and behaving exactly the same on every activation.
It's well known that it in fact isn't; otherwise learning would be impossible. Learning still isn't perfectly understood, but one key characteristic is likely the modulation of synaptic strength (the weights mentioned). Also, yes, every cell, and in particular every neuron, is a very complex system, although synapses themselves have various simplifying properties (especially along the axon, electrical communication really is the main method of communication).
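To make "modulating synaptic strength" concrete, here is a minimal leaky integrate-and-fire sketch with a single plastic synapse. Everything is an illustrative simplification (precisely the kind this thread is debating): the same input arrives throughout, and only the changing weight turns a silent neuron into a firing one.

```python
# Toy leaky integrate-and-fire neuron with one plastic synapse.
# Identical presynaptic input arrives every 10 ms; only the synaptic
# weight changes, so the neuron is silent at first and begins firing
# once the synapse has been potentiated. Parameters are illustrative.
tau, v_rest, v_thresh, v_reset = 20.0, -70.0, -54.0, -70.0  # ms / mV
weight, dt = 0.2, 1.0
v = v_rest

for t in range(300):                      # simulate 300 ms
    if t % 10 == 0:                       # presynaptic spike every 10 ms
        v += 20.0 * weight                # input scaled by synaptic strength
        weight = min(1.0, weight + 0.02)  # toy potentiation ("learning")
    v += dt * (v_rest - v) / tau          # leak back toward resting potential
    if v >= v_thresh:
        print(f"t={t:3d} ms: postsynaptic spike (weight={weight:.2f})")
        v = v_reset
```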
>In 2013, neuroscientist Henry Markram secured about 1 billion euros from the European Union to "simulate the human brain" — a proposal widely deemed unrealistic even at the time. The project faced significant challenges and ultimately did not meet its ambitious yet vague goals
Ah, so this is where 45% of my salary goes.
~60% of that 45% goes to pensions and health/unemployment insurance. Research isn't even 3% of tax spending in Europe.
It gets worse when they celebrate the success of the project while completely failing to deliver on the initial goal:
https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/...
Just wire neurons (human or otherwise) to computers, and see what happens.
https://www.abc.net.au/news/science/2025-03-05/cortical-labs...
Unfortunately, it's not that easy. Axon terminals of neurons release neurotransmitters. We know of dozens of different types, but are not certain that we know about all of them yet. The same synapse can release multiple different neurotransmitters too, with one or more released depending on the axonic signals. And what do these chemicals do? It depends! There are receptors on the post-synaptic cell that respond to neurotransmitters, but there can be multiple different receptors that respond differently to the same neurotransmitter. Again, we aren't sure we know about all of them. The post-synaptic neuron is probably also listening to neurons of other types that signal using different neurotransmitters, which it uses to determine whether it should transmit an action potential or not. Oh, and invertebrates (like nematodes) send graded potentials (not action potentials like us vertebrates usually do), where the signal strength can vary.
In short - we are a long way from being able to simulate a nervous system. Our knowledge of neuronal biochemistry is not there yet.
create a brain... and make it play doom?
https://youtu.be/bEXefdbQDjw
I sometimes wonder about being able to fully simulate a human brain. Maybe even scan/copy a real person’s brain.
So many philosophical, ethical and legal questions. And unsettling possibilities.
We will probably have to deal with this someday.
Also mmacevedo https://qntm.org/mmacevedo
> We will probably have to deal with this someday.
This is quite an extraordinary claim with no extraordinary evidence.
As said elsewhere in this thread, we cannot at this moment even simulate single atoms.
I see no reason to believe at all that we will ever be able to simulate a human brain.
Unless you want my simulation here:
Why do you assume that we need to exactly simulate single atoms to simulate the brain? We can also do CFD to simulate fluid flows without simulating every atom in the fluid. Maybe that's possible with the brain, too.
Yes, and CFD simulations notoriously break down the moment you scale them up. Look at weather forecasting! Extremely complex models that push the boundaries of modern computational capabilities, and yet we are barely able to forecast weather a day or two out. Sure, we can make vague forecasts over large areas, but errors compound and we lose both granularity and accuracy.
And that's with us having a pretty solid understanding of how fluid dynamics works. We have an extremely poor understanding of how a brain works, doubly so for something of the complexity of the human brain. We are fundamentally unable to study it during operation, because we don't have non-invasive, high-resolution access to its internals. We are basically butchers sticking electrodes into living tissue.
The article itself proposes that we may - barely - be able to study the workings of the brain of an extremely simple organism.
A rocketry analogy would be Archimedes dreaming about people traveling to the stars.
I would say the issue with predicting weather by simulating flows isn't mainly that we can't simulate single atoms; it's that we just don't have enough input data. If I magically gave meteorologists a method to instantly simulate every atom of the atmosphere, they still couldn't tell me exactly what the weather would be like in a week. It's too dependent on small input changes for that.
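A toy demonstration of that sensitivity in Python (the classic Lorenz system with its standard chaotic parameters, not an actual weather model): two runs whose starting points differ by one part in a billion end up completely decorrelated.

    # Forward-Euler integration of the Lorenz system; purely illustrative.
    def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
        return (x + dt * s * (y - x),
                y + dt * (x * (r - z) - y),
                z + dt * (x * y - b * z))

    a = (1.0, 1.0, 1.0)
    b2 = (1.0 + 1e-9, 1.0, 1.0)  # an imperceptibly different initial state
    for step in range(5001):
        if step % 1000 == 0:
            print(step, abs(a[0] - b2[0]))
        a = lorenz_step(*a)
        b2 = lorenz_step(*b2)
    # The gap grows roughly exponentially until it is as large as the
    # attractor itself; more compute doesn't buy a longer forecast.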
> A rocketry analogy would be Archimedes dreaming about people traveling to the stars.
What's wrong with that? The claim was "some day", not in 5 years. It could take a few centuries or longer.
There's a good deal of sci-fi on the topic. My favorite of them is Neal Stephenson's Fall; or, Dodge in Hell.
For me it's Greg Egan's Permutation City.
That one's on my to-read list...
Add Diaspora by Greg Egan to the simulated minds reading list (the story Wang's Carpets within it is one of my favorites).
Bump it up.
The Bobiverse series
It appears inevitable that we will be able to fully map a dead person's neurons and synapses. [1] is doing essentially that for a tiny sliver, with some amazing images to show for it. From there, it's "just" scaling up.
That alone wouldn't be enough to fully clone a person's consciousness. There is information stored in the actively firing synapses; for example, short-term memory seems to be stored by sending signals in a loop, and there might be more such mechanisms. Those signals are obviously lost once the brain is dead. Another issue is hormones: the same brain regulated by a different (simulated) body might behave completely differently. And then there are probably a lot of unknown unknowns. Despite decades of research there are still a lot of open questions, and more questions will become apparent once we actually start simulating complex brains.
But that doesn't mean those early methods wouldn't be useful, both for science and for more questionable efforts. For example, accessing the long-term memory of a recently deceased person might be comparatively viable, given enough funding.
1: https://edition.cnn.com/2024/05/15/world/human-brain-map-har...
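Some back-of-envelope arithmetic on that scaling (the neuron and synapse counts are the usual order-of-magnitude figures; the bytes-per-synapse value is my own guess):

    # Rough storage estimate for a bare human connectome, nothing more.
    neurons = 86e9          # ~86 billion neurons (common estimate)
    synapses = 1e14         # ~100 trillion synapses (rough figure)
    bytes_per_synapse = 32  # assumed: two endpoint IDs + strength + type

    print(f"~{synapses / neurons:.0f} synapses per neuron")            # ~1163
    print(f"~{synapses * bytes_per_synapse / 1e15:.1f} PB of wiring")  # ~3.2 PB

And that is just the static wiring diagram; the raw electron-microscopy imaging behind [1] reportedly ran to more than a petabyte for a single cubic millimetre of tissue.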
> From there, it's "just" scaling up.
I suspect this wasn't your intention, but I feel this heavily undersells how much work is involved in "scaling up" to simulating a human brain. I wouldn't even say that it is inevitable, because there are so many unsolved questions and unknown-unknowns.
There are decades of research behind this, and we are still an unknown and large number of years away from doing it. Fusion power is more tractable than this.
It's not even clear whether our current approach to computation will ever be able to do this. We might need completely novel types of computers, maybe organic-machine hybrids.
I'm not even touching on the very real and serious ethical questions of simulating human level consciousnesses.
Hence the scare quotes around "just".
Also note that my "just" only applies to scaling from mapping a grain-of-rice-sized piece of human brain to mapping a full human brain. Going from there to simulating it would be another big leap, never mind the challenge of simulating it in a way that actually produces results comparable to the actual brain of that person.
There's a genuine question of whether fully simulating a brain will even be enough. We have several hundred million neurons in our digestive system, and what we eat, and the kinds of bacteria that live there, influence our mood. Same with the rest of our body. Brains are part of a larger organism. What would it mean to simulate just the brain, independent of a body? Our sensory organs play a role in processing incoming stimuli before sending them on to the brain.
A human mind simulating another human mind is a computational system which is powerful enough to do arithmetic acting on itself, so Gödel's incompleteness theorems apply.
I’m not sure where you’re going with this.
To wit, no one expects human brains to be capable of arbitrarily complex computation.
What I'm getting at is that you won't achieve anything by simulating a human brain. Which doesn't mean nobody will try, the World's Richest Man seems determined to send humans to Mars, which is equally futile, but just that it is definitely futile and here's why we know that.
> you won't achieve anything by simulating a human brain
This does not follow from any theorem of Gödel.
Isn't it the goal of https://openworm.org/ ?
Please skim the article, or paste it into an LLM and ask the same question.
It's all fun and games until the nematodes achieve singularity and it's Skynet.
I'm ok with Skynet having the cognitive capacity of a nematode.
It would probably have far greater than human intelligence, though, but with the paperclip-maximizer reproductive instincts of a nematode.
Yes, it would be a supernematode, but one that is ultimately limited by its base directive to repetitively spasm and lay its eggs in you... wait.
> This represents the next phase in human evolution, freeing our cognition and memory from the limits of our organic structure. Unfortunately, it’s also a long way off.
I'm actually happy it's a long way off. Feels like the richer humans would live with cheat codes, and the others wouldn't.
Against that, I'm quite up for doing away with death. A much less computationally challenging version may not be that far off, more along the lines of an LLM trying to be you rather than a neuron-level simulation.
I disagree that it is the all-or-nothing thing the author implies. I'd say this isn't a long way off; it's something we've been doing for centuries. Writing is a great example of "freeing our cognition and memory from the limits of our organic structure": we've used a technology to extend our memory and allow others access to it. A calculator is another easy-to-understand example of the same principle. I think Heidegger best explains this relationship between us and our technology with his ideas around das Zeug and ready-to-hand. We are already cyborgs.
I'd be worried during the brain scan of losing the coin flip and waking up in digital Neura-hell being tortured for eternity for Elon Musk's enjoyment.
I hate that it's a long way off.
Ego death is a brutal suboptimum. It's tragic that any entity brought into existence, and aware of its own existence, has to die and be forever annihilated.
If humanity has only one goal, and that goal was to achieve immortality for all humans henceforth [1], that would be a noble cause for our species.
I hate that those I care about will cease to exist.
Fuck death.
[1] Maybe we get lucky and they master physics, reverse the lightcone, and they pull each of us out of the ether of time with perfect memories to join them. Sign me up. I consent.
Mortality possibly defines plenty of our behaviour and development: while people usually take the default assumption that we'd go in a positive direction without that constraint, I do not think it's a given.
Just like we can't really predict weather (as another complex system) too far ahead, we can't really predict how something this significant changes brain development — IMHO at least.
Say the lifespan doesn't become infinite, but rather 10x, i.e. ~800 years. How do you imagine things would change? It would certainly mean that people could take on much more ambitious projects instead of being bound by the usual ~30-year constraint.
I do share your view that a positive direction is not a given, but what evidence do we have that it would be worse than right now? Maybe we should just be cautious of the risks.
I am simply saying that we don't have evidence either way, though if the development of medical knowledge is any indication, we mostly keep learning how much we don't know (if you've ever faced an uncommon and thus unusual medical predicament, you'll know what I mean).
Heck, so much money has gone into preventing hair loss, and there does not seem to be a simple answer to that either ;)
> If humanity has only one goal, and that goal was to achieve immortality for all humans henceforth [1], that would be a noble cause for our species.
I kindly disagree :-). I think I'd rather not be immortal but live in a world with nature and animals than be immortal in a jar. Right now we don't manage to be immortal, and we are driving the animals extinct... the worst of both worlds?
Many spiritual and philosophical traditions claim the immortality of the soul. Socrates argues forcefully for it in Phaedo.
Nearly all mystics (and many if not most neuroscientists) also come to the conclusion that our world of the senses is an illusion. This doesn't mean that the illusion doesn't have rigid laws, but it does challenge the materialistic assumption that the soul, or consciousness, becomes nothing at the time of physical/biological death.
If that is too fuzzy and mystical, I'd also suggest reflecting more deeply on the concept of technologically facilitated immortality of physical life on earth. For me, it is clearly a dead end. It can only lead to a complete annihilation of every human value.
> For me, it is clearly a dead end. It can only lead to a complete annihilation of every human value.
Could you please elaborate on this? Why is it clearly a dead end, and why would it annihilate every human value? Any resources you can point to would be great. Thank you.
I sympathise with you, but I'd take a much more optimistic view on this. If death is oblivion, then you won't be able to feel sad that you don't exist since there won't be a consciousness to interpret these feelings.
If there's consciousness after death (in whatever form), then it is clearly not the end, just a part of a much longer - possibly infinite - journey. Even better!
In either case: it's better to stop worrying about what may come after and enjoy the journey to the fullest!
Hi short_sells_poo, in Hinduism, afaict, you (your soul, Ātman [1]) are stuck in a loop of birth-death-rebirth (Saṃsāra [2]), and this is not good; you live your life in the best way (Dharma [3], Karma [4]) to attain liberation (Moksha [5]), to be one with God (Brahman [6]) and end the cycle of rebirths.
Thought you might find it interesting.
[1] https://en.wikipedia.org/wiki/%C4%80tman_(Hinduism) [2] https://en.wikipedia.org/wiki/Sa%E1%B9%83s%C4%81ra [3] https://en.wikipedia.org/wiki/Dharma [4] https://en.wikipedia.org/wiki/Karma_in_Hinduism [5] https://en.wikipedia.org/wiki/Moksha [6] https://en.wikipedia.org/wiki/Brahman
I appreciate the sympathies. Everything's okay; I'm relatively comfortable with death. Though it still feels like we're in a local optimum, far removed from true nirvana, there's not much we can do about that currently. I don't know if any "speed run" of life today could solve it for those of us presently alive.
> If death is oblivion, then you won't be able to feel sad that you don't exist since there won't be a consciousness to interpret these feelings.
I've also taken this to mean that any pleasure and pain in this life are meaningless on a geological time span. Apart from what comfort we need to keep our mental health while still alive [1], we don't need to optimize for self-gratification, pleasure, or wealth accumulation. It's my excuse for grinding so hard at the few things that bring me meaning.
We're all traversing these gradients differently. The sum of what we learn and build will feed into the next generations. I hope their future is brighter than the wildest we could dream of today.
[1] Time with friends and family; enough resources to not worry about food, shelter, or bills; a fun distraction here or there
>I hate that those I care about will cease to exist.
I need to croak so that there's room in the world for my great-grandchildren.
>If humanity has only one goal,
Humanity pursues, best that I can tell, extinction instead of immortality. It has this really weird premature transcendence hangup.
AGI might end up being misaligned. But the first alignment problem is that humans are misaligned.
> If humanity has only one goal, and that goal was to achieve immortality for all humans henceforth [1], that would be a noble cause for our species.
Except that's not the goal and never will be the goal. If some immortality technology is ever created, it won't be for all. The Elon Musks, Sam Altmans, and Donald Trumps of the world will live forever. You will die.
> I hate that those I care about will cease to exist.
> Fuck death.
There's a much simpler and more achievable solution to that problem: change your belief system.
Could you elaborate on the belief system?
Are you saying the gp needs to rethink their ideas on death? Wouldn't that be like accepting defeat because the problem is hard?
But there won't be any others who would or wouldn't. When human fertility rates drop below 2.1, the population shrinks: each generation is smaller than the last. The result of shrinking through fertility decline (rather than war/disease/disaster) is eventual extinction. You have the equality of species oblivion to look forward to.
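A toy geometric model of that shrinkage (the fertility rate, starting population, and generation length are assumptions; it ignores migration, age structure, and fertility changing over time):

    # Each generation shrinks by roughly TFR / replacement_rate.
    pop, tfr, replacement = 8e9, 1.5, 2.1  # assumed values
    for gen in range(1, 11):
        pop *= tfr / replacement
        print(f"generation {gen}: {pop / 1e9:.2f} billion")
    # After 10 generations (roughly three centuries at ~30 years
    # per generation) at TFR 1.5: about 0.28 billion people.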
Such a beautiful magazine design. Does anyone have recommendations for more skillfully designed online magazines, blogs, etc?
Quanta magazine, Nautilus
If only..
What exactly does it mean to "simulate" a brain?
It's literally like a third of the article.