Certainly. Our brains have about 86 billion neurons and somewhere between 100 trillion and 1 quadrillion synapses (I couldn't find a definitive number, so we don't even know that!)
GPT-4 has been estimated at about 1.8 trillion parameters, so it's absolutely nowhere near the complexity of a human brain.
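For scale, a back-of-the-envelope comparison in Python, treating one parameter as loosely analogous to one synapse (which is generous to the model, since a synapse does far more than a scalar weight):

    # Rough figures quoted above; none of these are precise.
    NEURONS = 86e9
    SYNAPSES_LOW, SYNAPSES_HIGH = 1e14, 1e15
    GPT4_PARAMS = 1.8e12  # unconfirmed public estimate

    print(f"low:  {SYNAPSES_LOW / GPT4_PARAMS:.0f}x more synapses than parameters")
    print(f"high: {SYNAPSES_HIGH / GPT4_PARAMS:.0f}x more synapses than parameters")
    # -> roughly 56x to 556x, before counting anything a synapse
    #    does that a single scalar weight doesn't.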
Plus, I'm sure there's still very much we don't understand about the human brain. Quantum microtubules were de rigueur at one point, but I think that theory has since lost credence.
..and tricks: somatic mutation in single human neurons tracks developmental and transcriptional _history_ (a built-in history track), so every neuron in the brain has unique DNA and its own ancestry (costly, but think of the possibilities: tree indexing schemes, DNA-addressed branches, affinity functions..), plus traveling mitochondria ferrying DNA into our brain cells (why? live hardware, energy, effort, re-engineering, 'friendly' oversight?, feedback?, ..) ( https://news.ycombinator.com/item?id=42835594 )
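Purely as a toy sketch of that "tree indexing" idea (hypothetical names, not anything the brain is known to implement): if each cell division appends a marker, a cell's accumulated markers form a unique lineage address, and ancestry checks reduce to prefix tests:

    from dataclasses import dataclass

    @dataclass
    class Cell:
        lineage: tuple = ()  # division path doubles as a unique address

        def divide(self):
            # each daughter appends one symbol, like a somatic mutation marker
            return Cell(self.lineage + (0,)), Cell(self.lineage + (1,))

    def is_ancestor(a, b):
        # ancestry is just a prefix test on the address
        return b.lineage[:len(a.lineage)] == a.lineage

    root = Cell()
    left, right = root.divide()
    grandchild = left.divide()[1]
    print(is_ancestor(left, grandchild))   # True
    print(is_ancestor(right, grandchild))  # False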
.. moreover, the soft machine is (evolving) DNA expression under external factors, with many other hidden wonders, including the ability to operate with even more abstract factors (transcendence) concerning self and cosmos.
LLMs are data, and fixed. Intelligence I see more as a process (having knowledge, plus ways to "think"). There would be no "hype around AI technologies" and less misunderstanding if we named it after what we actually get instead of Intelligence: Artificial Knowledge, and Artificially Generated/Generalized Knowledge (hallucinations and slop).
Regarding the Artificial and the General in AGI - aren't those.. a contradiction (as a clue)?
I think most consider the "artificial" to be the kind that's not implemented with meat.
I consider it a reminder, there since the beginning. (And knowledge itself being alive via memetics - then dead and profaned in an LLM; well, not yet, but "to become" Frankenstein's monster.)
I think "implemented" is the specific distinction here, unless you're religious I suppose.
I think he just means it's not the Soft Machine kind - self-*, alive without any advanced magic.
I would like to see a genuine attempt at modeling the neural activity in the brain as a continuous feedback loop. You could include different subsystems like auditory and visual input; on the output side you could do mechanical output.
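A minimal sketch of what that loop might look like (all sizes and weights below are made-up placeholders, with random noise standing in for real sensors):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 256                                      # recurrent "core" size
    W   = rng.normal(0, 1/np.sqrt(N), (N, N))    # core-to-core feedback
    W_a = rng.normal(0, 0.1, (N, 16))            # auditory input weights
    W_v = rng.normal(0, 0.1, (N, 32))            # visual input weights
    W_m = rng.normal(0, 0.1, (4, N))             # motor (mechanical) readout

    x = np.zeros(N)                              # persistent neural state
    for t in range(100):                         # continuous loop, discretized
        audio  = rng.normal(size=16)             # stand-in for a microphone
        vision = rng.normal(size=32)             # stand-in for a camera
        x = np.tanh(W @ x + W_a @ audio + W_v @ vision)  # feedback update
        motor = W_m @ x                          # mechanical output
        # in a real setup, `motor` would drive actuators and thereby
        # change the next audio/vision samples, closing the loop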
They are kind of attacking a straw man though. I don't think any of the current AI companies are saying scaling current software will produce AGI. That's why they are all employing a lot of expensive computer scientists to try to make better software.
>most say current AI models are unlikely to lead to artificial general intelligence with human-level capabilities, even as companies invest billions of dollars in this goal
The billions of dollars are at least partly going to develop better models and techniques.
Aren't we all...