Doom is easy. Better: the Z-Machine with an interpreter based on DFrotz, or another port; then a game can even run on a Game Boy.
For a similar case, check eForth+SUBLEQ. If this guy can emulate a SUBLEQ CPU on a GPU (the implementation is something like five lines of C; the rest is headers and the file-opening function), it can run eForth and maybe Sokoban.
Well, I don't have enough knowledge of the RPi boot process. However, I expect that most modern hardware, e.g. x86, does not work like the RPi, so your point does not hold in most realistic scenarios, at least for now. Besides, do current GPUs (not only the GPU on the RPi) have the ability to self-instruct in order to achieve what you said?
Depends entirely on your definition of 'entirely', but https://github.com/jhuber6/doomgeneric is pretty much a direct compilation of the DOOM C source for GPU compute. The CPU is necessary to read keyboard input and present frame data to the screen, but all the logic runs on the GPU.
This CPU simulator does not attempt to achieve the maximum speed that could be obtained when simulating a CPU on a GPU.
For that a completely different approach would be needed, e.g. by implementing something akin to qemu, where each CPU instruction would be translated into a graphic shader program. On many older GPUs, it is impossible or difficult to launch a graphic program from inside a graphic program (instead of from the CPU), but where this is possible one could obtain a CPU emulation that would be many orders of magnitude faster than what is demonstrated here.
Instead of going for speed, the project demonstrates a simpler self-contained implementation based on the same kind of neural networks used for ML/AI, which might work even on an NPU, not only on a GPU.
Because it uses inappropriate hardware execution units, the speed is modest and the speed ratios between different kinds of instructions are weird, but nonetheless this is an impressive achievement, i.e. simulating the complete Aarch64 ISA with such means.
You could coalesce multiple instructions per shader, but even with a single CPU instruction (which would be translated to a sequence of GPU instructions), you could reach orders of magnitude greater speed than in this neural network implementation, by using the arithmetic-logic execution units of the GPU.
Once translated, the shader programs would be reused. All this could be inserted in qemu, where a CPU is emulated by generating for each instruction a short program that is compiled and then the resulting executable functions are cached and executed during the interpretation of the program for the emulated CPU.
In qemu, one could replace the native CPU compiler with a GPU compiler, either for CUDA or for a graphic shader language, depending on the target GPU. Then the compiled shaders could be loaded in the GPU memory, where, if the GPU is recent enough to support this feature, they could launch each other in execution.
Eventually, one might be able to use a modified qemu running on the CPU to bootstrap a qemu + a shader compiler that have been translated to run on the GPU, so that the entire simulation of a CPU is done on the GPU.
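The translate-and-cache flow described above (translate each guest instruction once, cache the compiled result, reuse it on every later execution) can be sketched in miniature. This is a toy illustration in Python, not qemu's actual API; all names here are invented:

```python
# Toy qemu-style translation cache. Each emulated instruction is
# "compiled" once into a host function; later executions hit the cache.
# In the GPU scheme described above, compile_block would emit a shader
# instead of a Python closure. All names are invented, not qemu's.

def compile_block(opcode, dst, src):
    # Translate one emulated instruction into an executable host function.
    if opcode == "add":
        return lambda regs: regs.__setitem__(dst, regs[dst] + regs[src])
    if opcode == "mul":
        return lambda regs: regs.__setitem__(dst, regs[dst] * regs[src])
    raise ValueError(f"unknown opcode: {opcode}")

translation_cache = {}

def execute(program, regs):
    for instr in program:
        fn = translation_cache.get(instr)
        if fn is None:                      # cold path: compile once
            fn = translation_cache[instr] = compile_block(*instr)
        fn(regs)                            # hot path: reuse compiled code

regs = {"r0": 3, "r1": 4}
execute([("add", "r0", "r1"), ("mul", "r0", "r0")], regs)
print(regs["r0"])  # 49
```

The point of the cache is exactly the qemu trick described: translation cost is paid once per distinct instruction, then amortized across the whole run.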
Exciting if an AI that is helping with its own improvement finds this and incorporates it into its own architecture. Then it starts reading and running all the world's binaries and gains intelligence as a fully actualized "computer", finally becoming a master of both language and binary bits, thinking in poetry and in pure, precise numerical calculation.
Every clueless person who suggests that we move to GPUs entirely has zero idea how things work; they're basically suggesting using Lambos to plow fields and tractors to race in NASCAR.
How is this different from the various efforts back then to build a machine based on the Intel i860? It didn't work, although people gave it a good try.
"Result: 100% accuracy on integer arithmetic" - Could someone with low-level LLM expertise comment on that: Is that future-proof, or does it have to be re-asserted with every rebuild of the neural building blocks?
Can it be proven to remain correct?
I assume there's a low-temperature setting that keeps it from getting too creative.
The creative thinking behind this project is truly mind boggling.
You're both completely missing the point. It's important that an LLM be able to perform exact arithmetic reliably without a tool call. Of course the underlying hardware does so extremely rapidly, that's not the point.
That would be cool: a way to read CPU assembly bytecode and then think in it.
It's slower than real CPU code, obviously, but still crazy fast for "thinking" about it. They wouldn't need to simulate an entire program in a never-ending hot loop like a real computer; just a few loops would explain a lot about a process and yield a lot of precise information.
Ya know, just today I was thinking about a way to compile a neural network down to assembly: matching and replacing neural network structures with their closest machine-code equivalents.
This is way cooler though! Instead of efficiently running a neural network on a CPU, I can inefficiently run my CPU on neural network! With the work being done to make more powerful GPUs and ASICs I bet in a few years I'll be able to run a 486 at 100MHz(!!) with power consumption just under a megawatt! The mind boggles at the sort of computations this will unlock!
Few more years and I'll even be able to realise the dream of self-hosting ChatGPT on my own neural network simulated CPU!
The bit about multiplication being ~12x faster than addition is worth pausing on. In silicon, addition is the "easy" operation — but here the complexity hierarchy completely inverts. Makes sense once you think about it: multiplication decomposes into parallel byte-pair lookups (which neural nets handle trivially as table approximation), while addition has a sequential carry chain you can't fully parallelize away.
Funny enough, analog computing had the same inversion — a Gilbert cell does multiplication cheaply, while addition needs more complex summing circuits. Completely different path to the same result.
What I haven't seen discussed: if the whole CPU is neural nets, the execution pipeline is differentiable end-to-end. You could backprop through program execution. Useless for booting Linux, but potentially interesting for program synthesis — learning instruction sequences via gradient descent instead of search. Feels like that's the more promising research direction here than trying to make it fast.
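For what it's worth, the byte-pair decomposition is easy to make concrete. The sketch below (my own toy reconstruction, not the project's code) splits a 16-bit multiply into four independent 8x8 table lookups, exactly the kind of bounded function a small network can approximate, with the carry-bearing additions confined to the final recombination:

```python
# Toy reconstruction of multiplication via parallel byte-pair lookups.
# A 16-bit product decomposes into four independent 8x8 table reads
# (each a bounded function a network can learn as a lookup table),
# combined with shifts; only the recombination needs carry-bearing adds.

# 256x256 table of every 8-bit x 8-bit product: the stand-in "learned" LUT.
LUT = [[x * y for y in range(256)] for x in range(256)]

def mul16(a, b):
    a_lo, a_hi = a & 0xFF, a >> 8
    b_lo, b_hi = b & 0xFF, b >> 8
    # The four lookups are independent, hence perfectly parallelizable.
    p0 = LUT[a_lo][b_lo]
    p1 = LUT[a_lo][b_hi]
    p2 = LUT[a_hi][b_lo]
    p3 = LUT[a_hi][b_hi]
    # Recombination is where the sequential carry chains hide.
    return p0 + ((p1 + p2) << 8) + (p3 << 16)

print(mul16(1234, 5678))  # 7006652
```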
“A CPU that runs entirely on the GPU”
I imagine a carefully crafted set of programming primitives used to build up the abstraction of a CPU…
“Every ALU operation is a trained neural network.”
Oh… oh. Fun. Just not the type of “interesting” I was hoping for.
Isn't it interesting it doesn't instantly crash from a precision error? That sounds carefully crafted to me.
Is it emulating a Pentium processor? :)
ARM64(!?!) I know you were joking, but still.
Get used to it. The modern day solution for everything right now is to throw AI at it.
Hmmm... I need to measure this piece of wood for cutting. Let me take a picture of it and see what the AI says its measurement is, instead of using a measuring tape, because it is faster to use the AI.
That honestly sounds great! If it works...
Please tell me what you had in mind so I can try something different!
I was imagining something more like Xeon Phi
Begin reimplementing a subleq/muxleq VM with GPU primitive commands:
https://github.com/howerj/muxleq (it has both muxleq (multiplexed subleq, which is the same, but with mux'ed instructions it's much faster) and plain subleq). As you can see, the implementation is trivial. Once it's compiled, you can run eForth, although I run a tweaked one with floats and some better commands. Edit muxleq.fth and set the float option to 1 in that file like this:
1 constant opt.float
The same goes for the classic do..loop structure from Forth, which is not enabled by default, just the weird for..next one from eForth:
1 constant opt.control
Then recompile:
./muxleq ./muxleq.dec < muxleq.fth > new.dec
and run:
./muxleq new.dec
Once you have a new.dec image, you can just use that from now on.
I'll do you one better: imagine a CPU that runs entirely in an LLM.
You’re absolutely right! I made an arithmetic mistake there — 3 * 3 is 9, not 8. Let’s correct that:
Before: EAX = 3
After imul eax, eax: EAX = 9
Thanks for catching that — the correct return value is 9.
What an amazing multiplication request! The numbers you have chosen reveal an exquisite taste which can only be the product of an outstanding personality.
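Coming back to the SUBLEQ suggestion a few comments up: the whole VM really is tiny. Here is a minimal interpreter sketch (my own Python rendering; the linked repo's C version is the real thing):

```python
# Minimal SUBLEQ interpreter: one instruction type, three operands.
# Semantics: mem[b] -= mem[a]; jump to c if the result is <= 0,
# else fall through. A program counter outside memory halts the machine.
# (My own sketch; see howerj/muxleq for the real C implementation.)

def run_subleq(mem):
    pc = 0
    while 0 <= pc < len(mem):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Example: cell 9 subtracts itself (clearing it), then halt via pc = -1.
mem = [9, 9, 3, 0, 0, -1, 0, 0, 0, 42]
run_subleq(mem)
print(mem[9])  # 0
```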
Someone needs to implement LLVMpipe to target this ISA; then one can run software OpenGL emulation and call it "hardware accelerated".
Surely that would be hardware decelerated
This causes me discomfort.
A fun experiment but I wonder how many out there seriously think we could ever completely rid ourselves of the CPU. It seems to be a rising sentiment.
The cost of communicating information through space is dealt with in fundamentally different ways here. On the CPU it is addressed directly. The actual latency is minimized as much as possible, usually by predicting the future in various ways and keeping the spatial extent of each device (core complex) as small as possible. The GPU hides latency with massive parallelism. That's why we can put them across relatively slow networks and still see excellent performance.
Latency hiding cannot deal well in workloads that are branchy and serialized because you can only have one logical thread throughout. The CPU dominates this area because it doesn't cheat. It directly targets the objective. Making efficient, accurate control flow decisions tends to be more valuable than being able to process data in large volumes. It just happens that there are a few exceptions to this rule that are incredibly popular.
> I wonder how many out there seriously think we could ever completely rid ourselves of the CPU. It seems to be a rising sentiment.
This sentiment is not a recent thing. Ever since GPGPU became a thing, there have been people who first hear about it, don't understand processor architectures and get excited about GPUs magically making everything faster.
I vividly recall a discussion with some management type back in 2011, who was gushing about getting PHP to run on the new Nvidia Teslas, how amazingly fast websites will be!
Similar discussions also spring up around FPGAs again and again.
The more recent change in sentiment is a different one: the "graphics" origin of GPUs seems to have been lost to history. I have met people (plural) in recent years who thought (surprisingly long into the conversation) that I meant Stable Diffusion when talking about rendering pictures on a GPU.
Nowadays, the 'G' in GPU probably stands for GPGPU.
The dream, I think, has always been heterogeneous computing. The closest today is probably Apple, with their multi-core CPUs mixing different core types and a GPU with unified memory (someone with more knowledge of computer architecture could probably correct me here).
Have a CPU, GPU, FPGA, and other task-specific chips like neural accelerators, all with unified memory, somehow pipelining specific workloads to whichever chip handles them best.
I wasn't really aware people thought we would be running websites on GPUs.
I see us not getting rid of CPU, but CPU and GPU being eventually consolidated in one system of heterogeneous computing units.
CPUs and GPUs have very different ways of scheduling instructions, requiring somewhat different interfaces and programming models. I'd hazard to say that a GPU and CPU with unified memory access (like Apple's M series, and most mobile chips) is already such a consolidated system.
nVidia Jetson also has unified memory access btw.
We're getting there already with e.g. Grace-Blackwell chips.
Agreed. Much like “RISC is gonna replace everything” - it didn’t. Because the CPU makers incorporated lessons from RISC into their designs.
I can see the same happening to the CPU. It will just take on the appropriate functionality to keep all the compute in the same chip.
It’s gonna take a while because Nvidia et al. like their moats.
CISC only survived because CPUs now dedicate a ton of silicon to decoding the CISC stream into RISC-y microcode. RISC CPUs can avoid this completely, but it turns out backwards compatibility was important to the market and the transistor cost of "instruction decode" just adds like +1 pipeline depth or something.
> CISC only survived because CPUs now dedicate a ton of silicon to decoding the CISC stream into RISC-y microcode.
For Intel CPUs, this was somewhat true starting from the Pentium Pro (1995). The Pentium M (2004) introduced a technique called "micro-op fusion" that would bind multiple micro-ops together so you'd get combined micro-ops for things like "add a value from memory to a register". From that point onward, the Intel micro-ops got less and less RISCy until by Sandy Bridge (2011) they pretty much stopped resembling a RISC instruction set altogether. Other x86 implementations like K7/K8/K10 and Zen never had micro-ops that resembled RISC instructions.
> CPUs now dedicate a ton of silicon to decoding the CISC stream into RISC-y microcode.
In absolute terms, this is true. But in relative terms, you're talking less than 1% of the die area on a modern, heavily cached, heavily speculative, heavily predictive CPU.
Didn't there use to be a joke about Intel being the biggest RAM manufacturer (given the amount of physical space caches take on a CPU)?
I hadn't heard that, but certainly, there must have been many times when Intel held the crown of "biggest working hunk of silicon area devoted to RAM."
> It will just take on the appropriate functionality to keep all the compute in the same chip.
So, an iGPU/APU? Those exist already. Regardless, the most GPU-like CPU architecture in common use today is probably SPARC, with its 8-way SMT. Add per-thread vector SIMD compute to something like that, and you end up with something that has broadly similar performance constraints to an iGPU.
> I wonder how many out there seriously think we could ever completely rid ourselves of the CPU.
How do you class systems like the PS5 that have an APU plugged into GDDR instead of regular RAM? The primary remaining issue is the limited memory capacity.
I wonder if we might see a system with GPU class HBM on the package in lieu of VRAM coupled with regular RAM on the board for the CPU portion?
I don’t think the remaining issue is memory capacity. CPUs are designed to handle nonlinear memory access, and that is how all modern software targeting a CPU is written. GPUs are designed for linear memory access. These are fundamentally different access patterns; the optimal solution is to have two distinct processing units.
people say this a lot, but with little technical justification.
gpus have had cache for a long time. cpus have had simd for a long time.
it's not even true that the cpu memory interface is somehow optimized for latency - it's got bursts, for instance, a large non-sequential and out-of-page latency, and has gotten wider over time.
mostly people are just comparing the wrong things. if you want to compare a mid-hi discrete gpu with a cpu, you can't use a desktop cpu. instead use a ~100-core server chip that also has 12x64b memory interface. similar chip area, power dissipation, cost.
not the same, of course, but recognizably similar.
none of the fundamental techniques or architecture differ. just that cpus normally try to optimize for legacy code, but gpus have never done much ISA-level back-compatibility.
GDDR has high bandwidth but limited capacity. Regular RAM is the opposite, leaving typical APUs memory bandwidth starved.
Both types of processor perform much better with linear access. Even for data in the CPU cache you get a noticeable speedup.
The primary difference is that GPUs want large contiguous blocks of "threads" to do the same thing (because in reality they aren't actually independent threads).
If anything, GPUs combine large per-compute-unit private address spaces with a separate shared/global memory, which doesn't mesh very well with linear memory access, just high locality. You can kinda get to the same arrangement on a CPU by pushing NUMA (Non-Uniform Memory Access: only the "global" memory is truly uniform on a GPU!) to the extreme, but that's quite uncommon. "Compute-in-memory" is a related idea that points to the same constraint: these days you want to maximize spatial locality, because moving data in bulk is an expensive operation that burns power.
Mainframes still exist, so the CPU isn't going anywhere. Too useful a tool.
I don't think we get rid of the CPU. But the relationship will be inverted. Instead of the CPU calling the GPU, it might be that the GPU becomes the central controller and builds programs and calls the CPU to execute tasks.
But... why?
How do you win moving your central controller from a 4GHz CPU to a multi-hundred-MHz single GPU core?
If we tried this, all we'd do is isolate a couple of cores in the GPU, let them run at some gigahertz, and then equip them with the additional operations they'd need to be good at coordinating tasks... or, in other words, put a CPU in the GPU.
This will never happen without completely reimagining how process isolation works and rewriting any OS you'd want to run on that architecture.
Sounds reminiscent of the CDC 6600, a big fast compute processor with a simple peripheral processor whose barreled threads ran lots of the O/S and took care of I/O and other necessary support functions.
As foretold six years ago. [1]
[1]: https://breandan.net/2020/06/30/graph-computation#roadmap
https://en.wikipedia.org/wiki/Xeon_Phi#Knights_Landing ?
https://en.wikipedia.org/wiki/Larrabee_(microarchitecture) ?
Before that there was Forth running in the Transputer, which looks really close to current parallel computing.
Hey everyone, thank you for taking a look at my project. This was purely a “can I do it” type deal, but ultimately my goal is to make a running OS purely on the GPU, or one composed of learned systems.
I think it's curious that you're saying "on GPU" when you mean "using tensors." GPUs run compute shaders naturally and can trivially act like CPUs, just use CUDA. This is more akin to "a CPU on NPU" and your NPU happens to be a GPU.
Hi! I think that the idea is certainly a fun one. There is a long history of trying to make a good parallel operating system. I do not think that any of the projects succeeded though. This article is a good read if you are interested in that. I am not sure why the economics of parallel computer operating systems have not worked out so far. I think it most likely has to do with the operating systems that we have being good enough and familiar. [0] https://news.ycombinator.com/item?id=43440174
The Blue Gene Active Storage project demonstrated compute in highly parallel “storage” where storage was HPC memory. It could work for the relationship between CPU and GPU, FPGA, etc.
https://www.fz-juelich.de/en/jsc/downloads/slides/bgas-bof/b...
This is hilarious and profoundly in the spirit of hacker news. Thanks for posting:)
GNU/GPU
I was always wondering what would happen if you trained a model to emulate a CPU in the most efficient way possible. This is definitely not what I expected, but it also shows promise for how much more efficient models can become.
I was taught years ago that MUL and ADD can be implemented in one or a few cycles. They can be the same complexity. What am I missing here?
Also, is it possible to use the GPU's ADD/MUL implementation? It is what a GPU does best.
To multiply two arbitrary numbers in a single cycle, you need to include dedicated hardware into your ALU, without it you have to combine several additions and logical shifts.
As to why not use the ADD/MUL capabilities of the GPU itself, I guess it wasn’t in the spirit of the challenge. ;)
Why do we call them GPUs these days?
Most GPUs, sitting in racks in datacenters, aren't "processing graphics" anyhow.
General Processing Units
Gross-Parallelization Units
Generative Procedure Units
Gratuitously Profiteering Unscrupulously
Greed Processing Units
This is just brilliant!
General Parallel Units
Sometimes Gibberish Producing Units
The dedicated term GPGPU [0] didn't catch on.
[0]: https://en.wikipedia.org/wiki/General-purpose_computing_on_g...
VPU. Vector/Video Processing Unit.
This is a fun idea. What surprised me is the inversion where MUL ends up faster than ADD because the neural LUT removes sequential dependency while the adder still needs prefix stages.
Time to benchmark Doom.
Now we know future genius models won't even need CPUs, just tensor/rectifier circuits. If they need a CPU, they will just imagine them.
A low-bit model with adaptive sparse execution might even be able to imagine with performance. Effectively, neural PGA capability.
Out of curiosity, how much slower is this than an actual CPU?
Based on addition and subtraction, 625000x slower or so than a 2.5ghz cpu
I wish the project said how many CPUs could be run simultaneously on one GPU.
It might be worth having a CPU that's 100 times slower (25 MHz) if 1000 of them could be run simultaneously to potentially reach a 10 times speedup for embarrassingly parallel computation. But starting from a hole that's 625000x slower seems unlikely to lead to practical applications. Still a cool project though!
So it could run Doom?
Yes: https://github.com/robertcprice/nCPU?tab=readme-ov-file#doom...
Oh I forgot to Doom scroll.
Can we run doom inside of doom yet?
Yes: https://github.com/kgsws/doom-in-doom
What a time to be alive
Doom is easy. Better: the Z-machine with an interpreter based on DFrotz, or another port. Then a game can even run on a Game Boy.
For a similar case, check eForth+subleq. If this guy can emulate a subleq CPU on a GPU (the core interpreter is something like 5 lines of C; the rest is headers and the file-opening function), it can run eForth and maybe Sokoban.
it's just a machine-code emulator that happens to run on a gpu. it's more of a flying pig than a new porcine airliner.
I don't quite understand how multiply doesn't require addition as well to combine the various partial products.
I don't understand why you would train a NN for an operation like sqrt that the GPU supports in silicon.
I see it as a practical joke or a fun hack, like CPUs implemented in the Game of Life, or in Minecraft.
It’s been done already. Have a look at Quest for Tetris: https://codegolf.stackexchange.com/questions/11880/build-a-w...
I actually ran Sokoban under eForth running on top of subleq/muxleq, with the VM interpreted in a few lines of AWK.
Cool. However, one still needs a CPU to send commands to the GPU in order to let the GPU do CPU things.
> Cool. However, one still needs a CPU to send commands to the GPU in order to let the GPU do CPU things.
Doesn't the Raspberry Pi's GPU boot up first, and then the GPU initializes the CPU?
With this technology, we've eliminated the need for that superfluous second step.
Well, I don't have enough knowledge of the RPi's boot process. However, I do expect that most modern hardware, e.g. x86, does not work like the RPi, so that doesn't hold in most realistic scenarios, at least for now. Besides, do current GPUs (not only the GPU on the RPi) have the ability to instruct themselves in order to achieve what you said?
Saw the DOOM raycast demo at bottom of page.
Can't wait for someone to build a DOOM that runs entirely on GPU!
Depends entirely on your definition of 'entirely', but https://github.com/jhuber6/doomgeneric is pretty much a direct compilation of the DOOM C source for GPU compute. The CPU is necessary to read keyboard input and present frame data to the screen, but all the logic runs on the GPU.
"Multiplication is 12x faster than addition..."
Wow. That's cool but what happens to the regular CPU?
This CPU simulator does not attempt to achieve the maximum speed that could be obtained when simulating a CPU on a GPU.
For that a completely different approach would be needed, e.g. by implementing something akin to qemu, where each CPU instruction would be translated into a graphic shader program. On many older GPUs, it is impossible or difficult to launch a graphic program from inside a graphic program (instead of from the CPU), but where this is possible one could obtain a CPU emulation that would be many orders of magnitude faster than what is demonstrated here.
Instead of going for speed, the project demonstrates a simpler self-contained implementation based on the same kind of neural networks used for ML/AI, which might work even on an NPU, not only on a GPU.
Because it uses inappropriate hardware execution units, the speed is modest and the speed ratios between different kinds of instructions are weird, but nonetheless this is an impressive achievement, i.e. simulating the complete Aarch64 ISA with such means.
> where each CPU instruction would be translated into a graphic shader program
You really think having a shader per CPU-instruction is going to get you closer to the highest possible speed one can achieve?
You could coalesce multiple instructions per shader, but even with a single CPU instruction (which would be translated to a sequence of GPU instructions), you could reach orders of magnitude greater speed than in this neural network implementation, by using the arithmetic-logic execution units of the GPU.
Once translated, the shader programs would be reused. All this could be inserted in qemu, where a CPU is emulated by generating for each instruction a short program that is compiled and then the resulting executable functions are cached and executed during the interpretation of the program for the emulated CPU.
In qemu, one could replace the native CPU compiler with a GPU compiler, either for CUDA or for a graphic shader language, depending on the target GPU. Then the compiled shaders could be loaded in the GPU memory, where, if the GPU is recent enough to support this feature, they could launch each other in execution.
Eventually, one might be able to use a modified qemu running on the CPU to bootstrap a qemu + a shader compiler that have been translated to run on the GPU, so that the entire simulation of a CPU is done on the GPU.
If its bindless and pre-compiled why not? What's a faster way?
Exciting if an AI that is helping in its own improvements finds this and incorporates it into its own architecture. Then it starts reading and running all the world's binaries and gains intelligence as a fully actualized "computer". Finally becoming both a master of language and of binary bits. Thinking in poetry and in pure precise numerical calculations.
Every clueless person who suggests that we move to GPUs entirely has zero idea how things work and is basically suggesting using Lambos to plow fields and tractors to race in NASCAR.
Bad comparison. Lambos are regularly plowing fields and they're quite good at it. https://www.lamborghini-tractors.com/en-eu/
I remembered that Lambo used to make tractors after I posted the comment. Nice catch!
very tangentially related is whatever vectorware et al are doing: https://www.vectorware.com/blog/
How is this different than the (various?) efforts back then to build a machine based on the Intel i860? Didn’t work, although people gave it a good try.
"Result: 100% accuracy on integer arithmetic" - Could someone with low-level LLM expertise comment on that: Is that future-proof, or does it have to be re-asserted with every rebuild of the neural building blocks? Can it be proven to remain correct? I assume there's a low-temperature setting that keeps it from getting too creative.
The creative thinking behind this project is truly mind boggling.
What is the purpose of this project? I didn't get it. How will it be useful?
> How will it be useful?
Does it need to be?
Oh these brave new ways to paraphrase the good old "fuck fuel economy"...
Thank you, Mr. Do-because-I-can!
Yours truly,
- GPU company CEO,
- Electric company CEO.
Being able to perform precise math in an LLM is important, glad to see this.
Just want to point out this comment is highly ironic.
This is all a computer does :P
We need LLMs to be able to tap into that, not add the same functionality a layer above and MUCH less efficiently.
> We need LLMs to be able to tap into that, not add the same functionality a layer above and MUCH less efficiently.
Agents, tool-integrated reasoning, even chain of thought (limited, for some math) can address this.
You're both completely missing the point. It's important that an LLM be able to perform exact arithmetic reliably without a tool call. Of course the underlying hardware does so extremely rapidly, that's not the point.
Could you explain why that is?
A tool call is like 100,000,000x slower, isn't it?
No idea really, but if it is speed related I would have thought that OP would have used faster rather than importance to try and make their point.
It's both. Being directly a part of it makes it integrated into its intelligence for training and operation.
The computer ALREADY does do math reliably. You are missing the point.
That would be cool. A way to read cpu assembly bytecode and then think in it.
It's slower than real cpu code obviously but still crazy fast for 'thinking' about it. They wouldn't need to actually simulate an entire program in a never ending hot loop like a real computer. Just a few loops would explain a lot about a process and calculate a lot of precise information.
Why?
can i run linux on a nvidia card though?
Linux runs everywhere
Except on my stupid iPad “Pro”. :(
iirc theres an app on the app store that's basically a small alpine container
Now I've seen it all. Time to die.. (meant humourously)
Well, GPUs are just special-purpose CPUs.
Ya know, just today I was thinking about a way to compile a neural network down to assembly: matching and replacing neural network structures with their closest machine-code equivalents.
This is way cooler though! Instead of efficiently running a neural network on a CPU, I can inefficiently run my CPU on a neural network! With the work being done to make more powerful GPUs and ASICs, I bet in a few years I'll be able to run a 486 at 100MHz(!!) with power consumption just under a megawatt! The mind boggles at the sort of computations this will unlock!
Few more years and I'll even be able to realise the dream of self-hosting ChatGPT on my own neural network simulated CPU!