This animation really makes the difference hit home: https://x.com/_akhaliq/status/1900027075370586262
Based on the animation, I personally don't expect this to be very helpful. The main way diffusion models help is preventing answers like "No. [proceeds to explain why the answer is yes]", and since the blocks are so small, the LLM can't fully explain before it has to say yes or no.
Could you expound on this? From what I'm reading, this sounds like an issue with diffusion models that their block diffusion model is purposefully designed to mitigate, by conditioning on previous blocks and allowing for larger blocks if that conditioning still doesn't help maintain coherence.
It's an issue you run into whenever the model is forced to commit to a yes/no answer before it has generated the explanation. Forward-only LLMs have this problem and full diffusion models don't, and normal block diffusion is closer to forward LLMs than to full diffusion.
You could increase the block size to act more like a full diffusion model, but you would lose some of the benefits of block diffusion.
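Roughly, generation is autoregressive over blocks but diffusion-style within each block. A minimal sketch of that sampling loop, assuming a masked-denoising sampler (all function and argument names here are illustrative, not the paper's actual API):

    import torch

    def generate_block_diffusion(model, prompt_ids, num_blocks=8, block_size=4,
                                 denoise_steps=4, mask_id=0):
        # Sketch: autoregressive over blocks, masked denoising within a block.
        seq = prompt_ids                               # [1, T] frozen context so far
        for _ in range(num_blocks):
            block = torch.full((1, block_size), mask_id, dtype=torch.long)
            still_masked = torch.ones(1, block_size, dtype=torch.bool)
            per_step = max(1, block_size // denoise_steps)
            while still_masked.any():
                # Condition on every previous block; predict the whole current
                # block in parallel.
                logits = model(torch.cat([seq, block], dim=1))[:, -block_size:]
                conf, pred = logits.softmax(-1).max(-1)       # [1, block_size]
                conf = conf.masked_fill(~still_masked, -1.0)  # only masked slots compete
                k = min(per_step, int(still_masked.sum()))
                commit = conf.topk(k, dim=-1).indices[0]      # most confident positions
                block[0, commit] = pred[0, commit]            # unmask ("commit") them
                still_masked[0, commit] = False
            seq = torch.cat([seq, block], dim=1)              # appended blocks are never revised
        return seq

With block_size=1 this collapses to ordinary left-to-right decoding, and with block_size covering the whole response it behaves like full diffusion, which is where the block-size tradeoff comes from.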
My understanding here is that the block size can be arbitrarily large, under constraints similar to those of diffusion models. Is that not the case?
This is cool, but I feel like you lose the best part of language-diffusion models, which is their ability to edit early tokens.
Those early tokens aren't necessarily immutable; they could still be "edited", depending on the UI. Human conversation, and even internal compositional cogitation, is full of "what I meant by that" or "on second thought" clarifications and corrections. Sometimes these aren't verbosely disclaimed; there's body language involved. Likewise, there could be occasional lookback parsing, with later blocks conveying modifications to earlier ones. The UI could then highlight those revisions transparently with strikethrough styling, coloration, a dotted underline with a tooltip on hover, etc.
As we've seen with human interactions and media, this may be susceptible to misinterpretation by the reader or listener, especially via second-hand clips or screenshots lacking full context. But if the UX is clean and fast, that's less likely.
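For concreteness, one toy way a UI could surface such a revision (the markup and class names are made up for illustration):

    # Toy sketch: render a superseded span with strikethrough and the
    # correction with a hover tooltip. Markup and class names are illustrative.
    def render_revision(old_text: str, new_text: str,
                        note: str = "revised in a later block") -> str:
        return (f'<del class="model-revision">{old_text}</del> '
                f'<ins class="model-revision" title="{note}">{new_text}</ins>')

    print(render_revision("No.", "Yes,"))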
To be fair, it's not "obviously" better, but it opens a new point on the tradeoff curve. For a lot of use cases full autoregression is clearly better, and for some others full diffusion will still be better.
Autoregression has high-quality outputs but is fairly slow. Diffusion has lower-quality output but is quite fast.
This lets you land in the middle: not as high quality as full autoregression and not as fast as full diffusion, but a balance between the two.
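One way to see the middle ground is to count sequential model calls needed to generate L tokens (a back-of-the-envelope sketch; the numbers are illustrative and ignore sampler details):

    # Sequential forward passes to generate L tokens (toy numbers).
    L, B, S = 1024, 16, 8            # tokens to generate, block size, denoise steps per block
    ar_steps         = L             # autoregressive: one pass per token
    block_diff_steps = (L // B) * S  # blocks in sequence, S parallel passes per block
    full_diff_steps  = S             # all positions denoised together
    print(ar_steps, block_diff_steps, full_diff_steps)  # 1024, 512, 8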
I love it when someone comes up with a good idea that becomes immediately obvious as soon as it is introduced.
I wonder how sliding window would work over blocks.
Excellent question
It only starts approaching the perplexity of AR at block size 4. May as well just use multi-token prediction with a standard AR model.
The memory bandwidth bottleneck limits the speed of running local models. Because this model is parallelizable, even single-batch inference can balance the memory-bandwidth bottleneck against the compute bottleneck, i.e., much more speed.
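A rough back-of-the-envelope for why that matters at batch size 1 (toy numbers; this ignores KV-cache and activation traffic):

    # At batch 1, decoding is memory-bound: each forward pass streams the weights
    # once, so passes/sec ~ bandwidth / weight bytes. If one pass can commit
    # several tokens of a block instead of one, throughput scales with that.
    bandwidth_gb_s  = 100   # toy figure for a local GPU / unified-memory machine
    weights_gb      = 8     # toy figure: ~8B params at 8-bit quantization
    tokens_per_pass = 4     # tokens committed per pass (1 for pure autoregression)

    passes_per_s = bandwidth_gb_s / weights_gb
    tokens_per_s = passes_per_s * tokens_per_pass
    print(passes_per_s, tokens_per_s)   # 12.5 passes/s -> 50.0 tokens/s (vs 12.5 for AR)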
Isn't this basically the diffusion-autoregressive sampling strategy from the LLaDA paper, maybe more carefully evaluated?