This was a difficult read, not necessarily because of the merry-go-round of trying to find the right fit, but because of the overhead involved and the maintenance burden this is going to become over time. Once it starts, it's never going to be just another thin layer.
I'd prefer to take a step back and see whether this is the right approach, or whether this safety can be achieved in a much simpler way. Going by the title of the submission, I don't consider it practical. I'd welcome any other ideas; I'm thinking along the lines of a simple, boring BFF (backend-for-frontend) that does the parsing and validation and sends strongly typed objects to the frontend.
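Something like this sketch is roughly what I have in mind. Express and zod are just my stand-ins here, and the endpoint and schema are made up for illustration:

```ts
import express from "express";
import { z } from "zod";

// The only shapes the frontend ever sees; anything else is rejected here.
const StatusCard = z.object({
  type: z.literal("status-card"),
  title: z.string(),
  state: z.enum(["loading", "success", "error"]),
});

// Stand-in for the real model call; returns raw text from the LLM.
async function callLlm(prompt: string): Promise<string> {
  return JSON.stringify({ type: "status-card", title: prompt, state: "success" });
}

const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  const raw = await callLlm(req.body.prompt);
  let parsed;
  try {
    parsed = StatusCard.safeParse(JSON.parse(raw));
  } catch {
    return res.status(502).json({ error: "model did not return JSON" });
  }
  if (!parsed.success) {
    // Invalid model output never leaves the BFF.
    return res.status(502).json({ error: "model output failed validation" });
  }
  res.json(parsed.data); // a strongly typed object reaches the frontend
});

app.listen(3000);
```

The frontend then only ever renders from a closed set of validated shapes, which is the whole point.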
This is the same complaint people had against React itself, and against any number of other frameworks that introduce a higher level of abstraction. Take it or leave it, but LLMs will require strong frameworks wrapping their output, output which will be heavily structured/constrained.
Really nice writeup!
I've been thinking about this approach. How do you handle updates in this setup? For example, a status component that goes from "loading" to "success".

Does the LLM go back and modify the previous response, sending deltas to the client, effectively changing the component from loading to success? Or is something else happening behind the scenes?
The components are standard React components, so once rendered they can set their own state and effects and load their own data.

I hope I understood your question correctly?
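For example, a minimal sketch of such a component (the component name and endpoint are made up, not from the library): the LLM emits it once, and after mounting it fetches its own data and flips from "loading" to "success" on its own, with no deltas from the model involved.

```tsx
import { useEffect, useState } from "react";

// Hypothetical component the LLM emits once. It manages its own
// loading/success/error state after it is rendered.
function OrderStatus({ orderId }: { orderId: string }) {
  const [state, setState] = useState<"loading" | "success" | "error">("loading");
  const [summary, setSummary] = useState<string | null>(null);

  useEffect(() => {
    // Placeholder endpoint; the component loads its own data.
    fetch(`/api/orders/${orderId}`)
      .then((r) => r.json())
      .then((data) => {
        setSummary(data.summary);
        setState("success");
      })
      .catch(() => setState("error"));
  }, [orderId]);

  if (state === "loading") return <p>Loading…</p>;
  if (state === "error") return <p>Something went wrong.</p>;
  return <p>{summary}</p>;
}
```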
I quite like this, especially coming from the idea of creating custom UIs depending on the type of output required for the answer. It could be a great tool for visual learners.
Obviously the next step, and already happening, but for some reason I doubt it'll be this exact technology. We're probably going to see a lot of competition here.
Author here. I'd love to see it! I settled on this approach after not finding any other ways to do it, but it's definitely not the only way. I'd like to see what other approaches people might come up with.
Starred on GitHub.
I use assistant-ui to handle my frontend rendering. It would make sense for them to adopt something like this.
Interesting. I wonder if we can integrate this approach with a framework like AG-UI (as opposed to CopilotKit):
https://docs.ag-ui.com/quickstart/clients
ChatGPT already looks a lot like that, tbh.