The problem with claims of imminent societal collapse is that they treat "collapse" like an event, with the world neatly divided into before and after. Historically, societal collapse is much fuzzier. No neat single cause. No single moment when things flipped. You can only truly know the collapse has happened well in retrospect.
In terms of computing, this means that any strategy built on a before/after collapse framing misses the entire interesting middle part: how do we transition from this high-tech world to a low-tech one over the course of decades? I seriously doubt it can happen by anticipating what computing we might need and just leapfrogging there.
Same author has a sort of "missing link" version that is better suited to modern PCs. Doesn't answer your philosophical questions about civilizations etc. but the goal isn't to start building Z80s out of transistors anytime soon.
I think the vision here is more likely to play out, if any were to. There aren't many fabs that can make modern chips, and the biggest one is under the business end of Chinese artillery. So we've got to make do with whatever we have, even if new supply dries up.
Plausible, but there really may not be a middle part longer than a few hours or days, depending on the mechanism (and I'm not even considering nuclear war here). I recall reading a first-hand account of surviving in Sarajevo during the war, and one of the things that most struck the author was how quickly (a mere couple of days) everything changed from going about the day as normal to an apocalyptic survival-mode hellscape. And not because of specific battle events, but because of the breakdown of supply chains.
We really have no way to predict it. It is even likely to be different in different locales.
He kinda deals with that - he has also built another OS called DuskOS, which is for when we still have modern computers lying around but can't make more of them. It's the OS you would use prior to devolving all the way to CollapseOS.
I feel like he has considered everything. He also has AbacOS which is an abacus-based operating system, for when we run out of energy, then HamstOS for a hamster-wheel powered operating system
HamstOS is just the microcontroller for the power supply. Pros: it is voice activated and fault tolerant. Cons: only responds to voice prompts beginning "Hey Mr. Binky". Also, it secretly hates you.
Or worse, decades in the future when historians look back, they might determine that collapse started before 2025 (perhaps 9/11, 2008 financial collapse, covid, the re-emergence of populist fascism, AI, microplastics, climate change ... take your pick).
Which means we might be living in collapse now, we just don't know it yet. "The Long Emergency" as James Howard Kunstler put it.
Is there any Empire that collapsed without military conquest? You can argue the collapse predated the conquest, but there’s always a conquest to seal the deal.
The British Empire didn't collapse due to military conquest by the Germans, it fell due to economic conquest by the United States (see the Atlantic Charter and U.S. intervention in the Suez Crisis).
Hasn't Kunstler been predicting the imminent end of oil supplies since the 1990s?
When a professional prognosticator's prognostications continue to be falsified by actual events, year after endless year, at some point don't people stop listening to them?
"When a professional prognosticator's prognostications continue to be falsified by actual events, year after endless year, at some point don't people stop listening to them?"
Yet people still listen to Musk about when Tesla will have autonomous driving.
> When a professional prognosticator's prognostications continue to be falsified by actual events, year after endless year, at some point don't people stop listening to them?
Funnily enough - not whatsoever! And the benefits of this amazing fact are reaped by economists* the world over, year after year.
* And, to be fair, journalists, politicians, various flavours of youtuber, etc, etc.
I'm assuming you're talking about the internet, networked services, etc., because that's the only conceivable way a collapse could happen that quickly... except that all the modern infrastructure we rely on has backups, redundancy, and largely separated failure domains built in, so such a failure could never happen that quickly.
Yes, exactly. Even if some fabs survived, the ability to supply and staff them would not last long. The world's ability to manufacture advanced computers would disappear.
Even if the collapse is a fuzzy, decades-long one, I think there would also be an ever-increasing probability of the remainder going in under an hour as something entirely forgotten fails.
The frog will boil slowly and discontinuously. The net effects are that by 2030:
- Industrial food will gradually become extremely expensive and homesteading will become more popular. Crop failures and famines will be routine as food security and international trade wane.
- Unemployment will soar with automation and overall decreased purchasing power.
- Crime and theft will soar as police decline (Austin, TX is already understaffed by 300 officers and doesn't respond to property crimes until 48 hours after they're reported).
- Civil/national/world wars.
- 100M's of climate refugees migrating across and between continents. If you think scapegoating of "illegal" immigrants is bad now, just wait until every form of hate (racism, xenophobia, classism) becomes turbocharged. Expect militarized borders guarded by killer robots.
In the past couple of years I've worked on a Ghidra extension that can export any subset of a program as a relocatable object file. I built it because I tried to reverse-engineer/decompile a video game and had unconventional opinions on how to go about it.
I didn't design it for a post-collapse scenario, but one can salvage program parts from binary artifacts and create new programs with them, just like how a mechanic in a Mad Max world can scavenge car parts from a scrapyard and create new contraptions. It deconstructs the classical compile-assemble-link workflow in a manner that tends to cause migraines to sane persons.
I've even managed to slightly bend the rules around ABI and sling binary code across similar-ish platforms (PlayStation to Linux MIPS and Linux x86 to Windows x86), but to really take this to its logical conclusion I'd need a way to glue together incompatible ABIs or even ISAs, in order to create program chimeras from just about any random set of binaries.
I did this once years ago (on Mac OS classic). I had an mp3 player I liked for its low CPU usage, but didn't like the UI. I figured out the entry points for the mp3 decoding routines and turned it basically into a library to call from my own UI.
Virtual machines/emulators are one extreme, recreating the environment the program ran in so that no human examination of the particular program is necessary. The approach you describe is at the other end, using bits of programs directly in other code to do basically black-box functions that aren't worth figuring out and coding properly.
This is a technique used in "living off the land" in infosec circles. An attacker might not have root privileges, but there might be a bit of code in this program that does one thing with escalated privilege, and a bit of code in that program that does another thing with escalated privilege, and if you have enough of these you can stitch together a "gadget", a series of jumps or calls to these bits of privileged code, to get a shell or do what you want.
This is a bit different. In your case, you're reusing code and data lying around in a running process to do your bidding from within.
Here, I'm actually stealing code and data from one or more executables and turning these back into object files for further use. Think Doctor Frankenstein, but with program parts instead of human parts.
I only operate on dead patients in a manner of speaking, but you could delink stuff from a running process too I suppose. I don't think it would be useful in the context of return-oriented programming, since all the code you can work with is already loaded in memory, there's no need to delink it back to relocatable code first.
Neat! Looks like the closest equivalent in your tooling would be pe2obj, which repacks a PE executable as a COFF object file.
Unless I'm mistaken, it doesn't seem to do anything in particular w.r.t. relocations, which is the tricky part about delinking. My educated guess is that repacking might be enough for purely self-contained position-independent code and data, but anything that contains an absolute address will lead to dangling references to the original program's address space.
My tooling recovers relocatable section bytes and recreates relocation tables in order to export object files that are actually relocatable, regardless of their contents.
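To make that concrete, here's a toy sketch of the idea (not the extension's actual code; the bytes, offsets, and symbol names are invented). Delinking a 32-bit absolute reference means blanking it out of the section bytes and emitting a relocation record that a linker can resolve later:

    import struct

    # Toy illustration of "delinking": turning an absolute address baked into
    # extracted code back into (relocation record, symbol) form so a normal
    # linker can place it anywhere. All names and offsets are made up.
    symbols = {0x0804A010: "s_hello"}   # addresses identified in Ghidra -> names

    def delink(section_bytes, abs_ref_offsets):
        """Zero out 32-bit absolute references and emit relocation records."""
        data = bytearray(section_bytes)
        relocs = []
        for off in abs_ref_offsets:
            target = struct.unpack_from("<I", data, off)[0]
            base = max(a for a in symbols if a <= target)    # nearest known symbol
            relocs.append({"offset": off, "symbol": symbols[base],
                           "addend": target - base})
            struct.pack_into("<I", data, off, 0)             # bytes are now relocatable
        return bytes(data), relocs

    code = bytes.fromhex("6810a00408")   # x86 "push 0x0804a010"
    fixed, relocs = delink(code, abs_ref_offsets=[1])
    print(relocs)  # [{'offset': 1, 'symbol': 's_hello', 'addend': 0}]

The real thing has to do this for every reference in every section, figure out which values are actually pointers in the first place, and emit it all in a proper object file format, which is where the Ghidra database comes in.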
You've tickled more than one person's brain here :D
Please consider writing up something about this binary mashup toolkit. You have taken an unusual path and shown how far one can go... it's worthy of sharing more widely.
The only unusual part about my toolkit is the delinker. Since it outputs relocatable object files, you can then make use of them with conventional tooling.
I've mostly used a linker to build new programs with the parts I've exported. Some of my users use objdiff to compare the output of their decompilation efforts against recreated object files. Others use objcopy or homemade tools to further modify the object files (mostly tweaking the symbol table) prior to reuse. One can generate assembly listings with objdump, although Ghidra already gives you that.
Ironically, the hardest part about using my delinker, I think (besides figuring out errors when things go wrong), is just realizing and imagining what you can do with it. It's rather counter-intuitive at first because this goes against everything one might learn in CS 101, but once you get it then you can do all kinds of heretical things with it.
To take this to the next level, beyond adding support for more ISAs and object file exporters, I'd need to generate debugging symbols in order to make debugging the pilfered code a less painful experience. There are plenty of ways delinking can go wrong if the Ghidra database is incorrect or incomplete and troubleshooting undefined behavior from outer space can get tedious.
But in the context of a post-collapse scenario, what one would be really after is some kind of linker that can stitch together a working program from just about any random bunch of object files, regardless of ABI and ISA. My experiments on cross-delinking remained rather tame (same ISA, similar ABI) because gluing all that mess by hand gets really complicated really fast.
I've demonstrated a bunch of use-cases on my blog, ranging from porting software to pervasive binary patching and pilfering application code into libraries, all done on artifacts without source code available.
I have one user in particular who successfully used it on a huge (7 MiB) program in a span of a couple of weeks, most of them spent fixing bugs on my end. They then proceeded to crank up the insanity to 11 by splitting that link-time optimized artifact into pieces and swapping parts out willy-nilly. That is theoretically impossible because LTO turns the entire program into a single translation unit subject to inter-procedure optimizations, so functions will exhibit nonstandard calling conventions you can't replicate with freshly built source code, but I guess anything's possible when you binary patch __usercall into MSVC.
I should get back to it once work is less hectic. Delinking is an exquisitely intricate puzzle to solve algorithmically and I've managed to make that esoteric but powerful reverse-engineering technique not only scalable to megabytes worth of code, but also within reach of everyday reverse-engineers.
I've been meaning to write more articles and not just about delinking or reverse-engineering. I've been hoarding some lower-grade cursed stuff, like a git proxy/cache written in a hundred lines of bash, or a Jenkins plugin that exposes a Debian source repository as a SCM provider and enables one to trivially create a Debian package build server with it (complete with dependency management).
Guess I'd better unbrick my Chromebook first. I still don't know how changing its battery managed to bork an unmodded, non-developer mode Chromebook so thoroughly that the official recovery claims it doesn't know what model it is ("no defined device identities matched this device.").
If "our global supply chain [will] collapse before we reach 2030," why is having a some type of computer a priority?
I feel these kinds of projects may be fun, but they're essentially just LARPing. Though this one seems to have a more reasonable concept than most (e.g. "My raspberry pi-based "cyberdeck" with offline wikipedia will help!" No, you need a fucking farming manual on paper.).
Beyond LARPing the feasibility of building new computers post societal collapse, the website's proposal for collapse being so imminent is... weakly justified?
I'm not an optimist in today's societal climate, but the rationale [1] stems from peak oil and cultural bankruptcy. Peak oil is (was?) the concern of limited oil production in the face of static or increasing demand. The 2019 peak from big oil wasn't about declining production, but declining demand [2]. Which is good, if the ideal is long-lasting migration to renewables.
I won't try to predict the future w/r/t what current societal trends mean for the long-term success of global supply chains, but I would be greatly surprised if cultural bankruptcy alone causes the complete collapse of society in the next 5-10 years.
I think full scale collapse is unlikely. I'd rate balkanization / large scale internet disruptions as pretty likely to happen within the next few years due to a combination of:
- lightly protected and globally reachable attack surface
- increasing geopolitical tensions
- the bizarre tendency to put everything possible online, i.e. IoT devices that always need internet, resources that rely on CDNs to function, other weird dependencies.
You're thinking of a PC. This project doesn't address that type of computer at all. 8-bit computers programmed in Forth suggests very strongly that this is aimed at control engineering, an area where having a few computers makes a big difference to what you can do in an industrial process. It might also be useful for simple radio communication.
Solar powered micro-controllers might be the way to go; they could hook up to crude automated farm machines. Depending on the life span of batteries: solar powered ebook readers to store those farming manuals and other guides, solar powered computers for spreadsheets and planning software, and solar powered cb ham radios for communications with built-in encryption keys.
In a societal collapse, the only thing that you list that I think would be worth the effort would be "solar powered cb ham radios for communications."
Crude automated [solar powered] farm machines - would probably be useless. Animal power is probably the way to go. Or steam. Go to a threshing bee sometime: I've seen a tractor that runs on wood.
Solar powered ebook readers to store those farming manuals and other guides - the life span of batteries would be short, get that shit on paper fast.
Solar powered computers for spreadsheets and planning software - plan in your head or use a paper spreadsheet.
Computers might be everywhere today, and you personally might not know how to do anything without a computer, but practically no one had a computer 40-45 years ago, and literally no one had a computer when society was last at a "collapse" level of technology. COMPUTERS ARE NOT NECESSARY.
Just because computers weren't around 40-50 years ago doesn't mean that computers won't be very handy to have around in a post-collapse world. Technology is very much path-dependent: the future does not look like the past, and incorporates everything that has happened up to that point. The world after the collapse of the Roman Empire did not look at all like the world before the Roman Republic, and incorporated many of the institutions and infrastructure left behind by the Roman Empire. It continued to use Roman coinage, for example, the Latin language, Roman roads, Roman provincial government, and so on.
The point of having computers is simply that they perform certain tasks orders of magnitude faster than humans. They're a tool, no more and no less. Before computers, a "calculator" was a person with paper and a slide rule, and you needed hundreds of them to do something like compute artillery trajectories, army logistics, machine tool curves, explosive lensing, sending rockets into space, etc. Managing to keep just one solar-powered calculator working for 10 years after a collapse frees up all those people to do things like farming. Keeping a solar-powered electric tractor working frees up all those farmers, and frees up the animals for eating.
IMHO this project is at least operating under the right principles, i.e. make the software work on scavenged parts, control your dependencies, be efficient with your computations, focus on things you can't do with humans.
> The world after the collapse of the Roman Empire did not look at all like the world before the Roman Republic, and incorporated many of the institutions and infrastructure left behind by the Roman Empire.
Our modern institutions and infrastructure depend on impossibly-complex, precariously-fragile world-spanning supply chains that rely on untold quantities of highly-skilled labor whose own training and employment is dependent upon having enough pre-existing material prosperity that 90% of the population is exempt from needing to grow their own food.
Meanwhile, the supply chains of the pre-Roman and post-Roman worlds were not very different. They were producing tin in Britain in 2000 BC, and they were producing tin in Britain in 1000 AD. Crucially, the top of the production pyramid (finished goods) was still close to the bottom of it (raw materials harvestable with minimal material dependencies) without a hundred zillion intervening layers of middlemen.
> Meanwhile, the supply chains of the pre-Roman and post-Roman worlds were not very different.
This isn't true! We know of huge differences between who was producing what goods and where between Roman and post-Roman Britain. To give one example: ceramic production came to a complete halt, and people essentially had to make do with whatever pre-existing ceramics they had had beforehand. Sure, an agricultural worker living on their own land off in the countryside might not have noticed a huge difference -- but someone who had been living by a legionary fortress, or one of the primary imperial administrative centers, or in one of the burgeoning villas, certainly would have had to make significant changes across the period.
Yes, I'm guilty of painting with an overbroad brush here in an attempt to emphasize the difference in scale between then and now. It's not the case that the collapse of Roman authority had no effect on the people of the former territories; it certainly led to an indisputable decline in living conditions across the board, including in industrial output. But my point is that, in the event of a modern collapse, we aren't going to revert to some "checkpoint" of irreversible technological progress; we could just as easily revert to the living conditions of a denizen of the remnants of the Eastern empire as of 600 AD (and that might be an optimistic outcome!). Technological progress is not a one-way street, is my meaning, and from our lofty perch, we are entirely capable of crashing hard to Earth.
And yet a consumer durable is a consumer durable regardless of whether the manufacturer stays in business, at least before the modern practice of giving everything an Internet connection and making it phone home to keep working (which CollapseOS explicitly avoids). The Mac LC that I got in 1991 would still boot up in 2012. The CD-ROMs that I burned in the late 90s, I was able to transfer to external hard disk in 2021. My solar panels and PowerWall continue to work when the power and Internet goes down.
Post-collapse society will look very different from modern Information-age society, and will definitely have a lot more people growing their own food. Knowing how to identify plants, and the care instructions (sun/soil/water/space requirements) for each variety you're growing, and how other people have handled problems like pests and rot, can save you several years of failed harvests. Several years of failed harvests is likely the difference between surviving and not surviving.
I'm not intending to denigrate CollapseOS; they appear to be deliberately taking precautions for a specific degree of technological catastrophe, and it seems worthwhile for someone to prepare for that, regardless of how likely one thinks it may be.
> Just because computers weren't around 40-50 years ago doesn't mean that computers won't be very handy to have around in a post-collapse world.
They won't be handy to you or me, because we'd need to put our efforts into securing basic needs. You won't have the luxury to spend your time scavenging parts to build an 8-bit computer, then spending a bunch more time programming it. Even if you did, how would it give you an advantage in acquiring food, shelter, or fuel over much simpler solutions using more basic technologies like paper?
Computers are the kind of thing people with a lot of surplus food spend their time with.
> The point of having computers is simply that they perform certain tasks orders of magnitude faster than humans. They're a tool, no more and no less. Before computers, a "calculator" was a person with paper and a slide rule, and you needed hundreds of them to do something like compute artillery trajectories, army logistics, machine tool curves, explosive lensing, sending rockets into space, etc.
Computers are useful for those tasks, but those are tasks only giant organizations like governments need to do. That's not you in a post-collapse world.
> Managing to keep just one solar-powered calculator working for 10 years after a collapse frees up all those people to do things like farming.
I think you have that backwards. No one's going to skip needed farming work and starve so they can go compute artillery trajectories. If they need to farm, they'll go without the artillery computations.
> Keeping a solar-powered electric tractor working frees up all those farmers, and frees up the animals for eating.
I address that up-thread, but solar-powered electric tractors are a fantasy. Even if such a thing existed, it would wear out, break down, and become irreparable long before technological civilization could be rebooted, so you might as well assume it doesn't exist in your planning.
Also, I don't think you're thinking things through: an animal can both be used to do work and (later) be eaten. If you're very poor, which you would be after some kind of civilization collapse, you don't eat animals very often.
Having a government in a box when everyone around you is scrounging for food makes you king, particularly if you also managed to save a couple militarized drones through the collapse. That's a pretty enviable position to be in.
The point, as with every capital investment, is to make more efficient the labor of the people who are securing those basic needs, so that you can free them up for work progressively higher on the value chain.
During the collapse itself, the way to do this is pretty easy: you kill the people who have food, shelter, or fuel but are not aligned with you, and give it to people who are aligned with you. And then once you have gotten everyone aligned with you, you increase the efficiency of the people who are doing the work. Saving even just one working tractor can cut the labor requirements from farming enough to support a village from several hundred people to one or two people. You will not have petrol in a post-collapse world, so better hope it's an electric tractor, or drop a scavenged electric motor + EV battery into an existing tractor. Use scavenged solar panels for power, there's plenty of that where I am.
All this requires that you know how things work, so you can trace out what to connect to what and repurpose electronic controls and open up the innards of the stuff you find abandoned on the street, and that's where having a computer and a lot of downloaded datasheets and physical/electronic/mechanical/chemical principles available will help.
> Having a government in a box when everyone around you is scrounging for food makes you king, particularly if you also managed to save a couple militarized drones through the collapse. That's a pretty enviable position to be in.
Come the fuck on. A fucking 8-bit computer (even a fucking 64-bit computer) is not a fucking "government in a box." And where the fuck are you going to get your "couple militarized drones"? Assuming they're not suicide drones (where "a couple" is not much), how long will they last? How useless would they be without spare parts, maintenance, and ammunition?
We live in the fucking real world, not some videogame where you can find a goddamn robot in a cave still functioning after 500 years and lethal enough for a boss-battle.
> You will not have petrol in a post-collapse world, so better hope it's an electric tractor, or drop a scavenged electric motor + EV battery into an existing tractor. Use scavenged solar panels for power, there's plenty of that where I am.
Look: if they don't have petrol, they won't have battery factories either. Batteries wear out. Your fantasy electric tractor will be just as useless as a petrol one in short order.
There is middle ground between individuals and governments… I may not need automatic accounting and inventory via spreadsheets at a small scale, but being able to model the next 3 days of weather based on local conditions without any expectation of online communications could come in pretty handy
> I may not need automatic accounting and inventory via spreadsheets at a small scale, but being able to model the next 3 days of weather based on local conditions without any expectation of online communications could come in pretty handy
Alright. You have a computer in your possession that is vastly more powerful than an 8-bit machine built from scavenged parts.
1. Do you actually "model the next 3 days of weather based on local conditions without any expectation of online communications" with it?
2. If not, do you know how to build the required sensor suite and write the software to do that?
I feel like you're misunderstanding computers as magic boxes that can do some useful thing with little effort. But this is supposed to be a forum of software engineers: building a weather forecasting system would be hard to do even with full access to a university library, Digikey, and the money to work on it full time. But we're talking about doing it with scavenged components while you're hungry looking for food.
Weather forecasts get to be useful when you have samples from a wide range of locations. A few weather stations on maybe a few acres of land wouldn't really get you decent weather predictions. You wouldn't know about some disturbance in the upper atmosphere leading to the jet stream pushing that cold winter blast further South than typical leading to the freeze that destroys your crops. You wouldn't know about that hurricane growing off the shore.
I 100% agree that computers are not necessary. But regardless of that, the ability to program microcontrollers is still a superpower and if you can have it, you have a hell of an edge.
> But regardless of that, the ability to program microcontrollers is still a superpower and if you can have it, you have a hell of an edge.
I really disagree with that. It will give practically no edge. It's a specialist skill that's only really useful in the context of an already computerized society for mass production or big one-offs.
If you have a collapse, I think the assumption is there would be little to no mass production of advanced goods (hence, the scavenging concept). Then you're left with big one-offs, which are things large organizations like governments build, and not all the time.
I dunno, microcontrollers could help in this zombie apocalypse world we're imagining.
I would think even at a small scale, having things like 3D printers, CNC machines, networked video surveillance and alarm systems, solar arrays, etc. would be very beneficial.
Absolutely, though the top priorities would be much simpler things, like food, clean water, and shelter.
Process control seems like the big one. Precise tools for precise measurements and monitoring are pretty high up the tech tree, so if you can say "we saved a box of Arduinos and sensors from the Before Times", you can get those capabilities back sooner, and potentially use them as references for tools built with more renewable resources.
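As a concrete (and entirely hypothetical) example, even something this small buys back a thermostat or a dosing controller. It's a MicroPython-style sketch for a salvaged RP2040-class board; the pins, sensor scaling, and setpoints are all invented for illustration:

    # Read a temperature sensor on an ADC pin and switch a heater relay with
    # hysteresis. Pin numbers, scaling, and thresholds are made up.
    from machine import ADC, Pin
    import time

    sensor = ADC(Pin(26))          # e.g. a TMP36-style sensor on ADC pin 26
    heater = Pin(15, Pin.OUT)      # relay driving a heating element

    def read_celsius():
        raw = sensor.read_u16()            # 0..65535
        volts = raw * 3.3 / 65535
        return (volts - 0.5) * 100         # TMP36: 10 mV per degree C, 500 mV offset

    LOW, HIGH = 58.0, 62.0                 # e.g. hold a pasteurizing tank near 60 C

    while True:
        t = read_celsius()
        if t < LOW:
            heater.on()
        elif t > HIGH:
            heater.off()
        time.sleep(5)

Nothing about that needs a network, a vendor cloud, or even a display, and the same pattern covers irrigation valves, charge controllers, and kiln timers.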
In a societal collapse, the only thing that you list that I think would be worth the effort would be "solar powered cb ham radios for communications."
Considering how much potentially invaluable info is only/mostly/only easily available as a PDF, I'm thinking a working ereader would be of nontrivial value.
Now...if there was only a way to crunch a PDF on an 8-bit processor I recovered from my washing machine...
- A Forth with Starting Forth is hugely valuable, ditto a math book like Spivak's Calculus.
Not a collapse, but a network attack on infra makes most modern OSes unusable; they need to be constantly updated. If you can salvage some older machine with DuskOS+networking, simple gopher and IRC clients/servers will work with really low bandwidth (2-3 kbps and less).
Point to a PDF to something useful converter that runs in 64K bytes and can handle the 'PDF as a wrapper around some non-text image of the document' and we can talk. Seriously...I'd be fascinated.
- CP/M can read TXT files generated by gnuplot
Not sure how that helps. And can you port gnuplot to run in 8-bit/64k?
> a network attack on infra makes most modern OSes unusable
Ridiculous, unless your definition of 'usable' is 'unless I can get to TwitFaceTubeIn and watch cat videos ima gonna die!'. If civilization collapses and the network goes away tomorrow, my Debian 12, FreeBSD 14 and NetBSD 10 machines will work exactly as well as they do today until I can't power them and/or the hardware dies (sans email and web, of course). Yeah, the Windows 10/11 things will bitch and moan constantly, and I assume macOS too, but even with degraded functionality, it's far from 'unusable'. And I'll be able to load Linux or BSD on them so no worries.
> they need to be constantly updated
No, they don't. Updates come in 2 broad categories: security fixes and feature releases. Post-collapse, with no network, security becomes much less urgent, and no new features is the new normal... get used to it. I have gear that runs HP/UX 10 (last supported in 2003); it still runs fine and delivers significant value.
And that ignores DOS, Win3, XP and such, which are still (disturbingly) common.
I meant, well, not that it's 8-bit capable, but that you can send the generated text files to those machines.
On 'gnuplot' for CP/M... a simple chart from X and Y values paired in two TSV columns can be done from Forth or Pascal or whatever you have that can read two arrays.
I'm not stating a solution for reading future PDF files. Forget the future ones; what I mean is to 'convert' the ones we currently have.
I'm at a Spanish pubnix/tilde, a public unix server. Here I have a script which converts RSS feeds into plain text (with the help of a cron job and sfeed) to be readable over gopher. These can be read even from the crustiest DOS machines in Latin America and from Windows 95/98/XP machines. I even pointed people to a working Retrozilla release, and it's spreading quickly.
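The script itself is just cron plus sfeed, but the idea is roughly this (a minimal stdlib Python sketch, not my actual script; the feed URL and output path are placeholders):

    # Fetch an RSS feed and flatten it to plain text for gopher/dumb clients.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED = "https://example.org/feed.xml"   # placeholder feed URL
    OUT = "news.txt"

    with urllib.request.urlopen(FEED) as resp:
        root = ET.parse(resp).getroot()

    with open(OUT, "w", encoding="ascii", errors="replace") as f:
        for item in root.iter("item"):                   # RSS 2.0 items
            title = (item.findtext("title") or "").strip()
            link = (item.findtext("link") or "").strip()
            f.write(title + "\n  " + link + "\n\n")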
They are at least able to read news too with a gopher/gemini client and News Waffle, with a client written in Tcl/Tk saving tons of bandwidth. The port with IronTCL will work on the spot on XP machines. Download, decompress, run 'launch.bat' (lanzar.bat in Spanish).
The ZIP file weighs 15 MB. No SSE2 is required, nor tons of RAM.
Compare that to a Chrome install. And Chrome's requirements.
The 2nd/3rd world might not be under an apocalypse, but they don't have the reliability of the first world. And lots of folks adored the News Waffle service saving 95% of the bandwidth.
Instead of the apocalypse, think about the 3rd world or rural America. Something like https://lite.cnn.com and https://neuters.de will work even during natural disasters with really reduced bandwidth. Or https://telae.net for Google Maps searches.
gopher://magical.fish has news feeds, an English to French/Spanish (and so on) translator, good links to blogs, games, and even TPB search. These can be used on any machine, or with Lagrange under Android. And, yes, it might work better during a potential earthquake/flood than the web.
I was opining about reading or converting PDFs on an 8-bit processor. And you're...I dunno what this is but it's certainly not a response to anything I said.
I read the book "The Knowledge: How to Rebuild Our World from Scratch", and to some extent that also felt like LARPing, but I enjoyed it nonetheless.
It also left me wanting more. It has pretty extensive references at the end, I wonder if anyone's put together a collection of all the referenced materials?
>Some people doubt that computers will stay relevant after a civilizational collapse. I mostly agree. The drastic simplification of our society will mean that we have a lot less information to manage. We'll be a lot more busy with more pressing activities than processing data.
>However, the goal of Collapse OS is not to save computing, but electronics. Managing electricity will stay immensely useful.
I think that's a notable goal. We will conduct business with pen and paper, but given that even in a collapse scenario there will be lots of remnants of the current era lying around, we might as well try to make use of them. It's one of the reasons I really don't like people collecting e-waste in bulk and then melting down the chips for tiny scraps of gold. So much effort went into making those chips; there has got to be a better way to preserve these things for later use.
Civilization doesn't even have to collapse for a project like this to be useful. Like you said, there's lots of e-waste. If you happened to live in a place that ended up with a lot of this stuff but didn't have a lot of infrastructure, you could possibly build up some convenient services with something like this. I like the idea of building software to make hardware less reliant on external resources in general. Over time, it could be useful to have more resilient electronics, because we seem to be designing machines to be more reliant on specific networked infrastructure every year.
What services? For most modern services the big cost inputs are things like human labor, utilities, and real estate. Reusing obsolete hardware doesn't gain anything. It's likely to be a net negative if it takes up more space, uses more electricity, and requires more maintenance.
A lot of functions of electronics don't require tons of processing power or efficiency. Microcontrollers can be used for just about anything. General purpose computing can be put to whatever purpose you can imagine.
Things supported by this OS like the Z80, 8086, 6502 etc. use around 5-10W. Using simple parts to control complicated machines is a standard operation, and even advanced electronics tend to use a lot of parts using older manufacturing techniques because it's more efficient to keep the old processes running.
If you're running a tractor, sure, 5 watts is not a big deal. But there are a lot of hypothetical post-collapse circumstances where such a high power usage would be prohibitive. Consider, for example, the kinds of radio stations you'd need for the kinds of weather and telecommunications uses I discussed in https://news.ycombinator.com/item?id=43484415, which benefit from being placed on inaccessible mountaintops and running unattended for years on end.
5 watts will drain a 100-amp-hour car battery in 10 days and is basically infeasible to get from improvised batteries made with common solid metals. Current mainstream microcontrollers like an ATSAMD20 are not only much nicer to program but can use under 20 μW, roughly 250,000 times less. A CR2032 coin cell (220mAh, 3V) can provide 20 μW for about 4 years. But the most the coin cell can provide at all is about 500 μW, so to run a 5-watt computer you'd need 10,000 coin cells. Totally impractical.
And batteries are a huge source of unreliability. What if you make your computing device more reliable by eliminating the battery? Then you need a way to power it, perhaps winding a watchspring or charging a supercapacitor.
Consider winding up such a device by pulling a cord like the one you'd use to start a chainsaw. That's about 100 newtons over about a meter, so 100 joules. That energy will run a 5W Z80 machine for 20 seconds, so you have to yank the cord three times a minute, or more because of friction. That yank will run a 20 μW microcontroller for two months.
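For anyone who wants to check the arithmetic, here's a quick Python sketch of the energy budget (same rounded figures as above; the car battery is assumed to be 12 V):

    def runtime_hours(capacity_wh, load_w):
        return capacity_wh / load_w

    car_battery_wh = 100 * 12          # 100 Ah lead-acid at 12 V
    coin_cell_wh   = 0.220 * 3         # CR2032: 220 mAh at 3 V

    print(runtime_hours(car_battery_wh, 5) / 24)           # ~10 days for a 5 W Z80 system
    print(runtime_hours(coin_cell_wh, 20e-6) / 24 / 365)   # ~3.8 years for a 20 uW MCU
    print(5 / 500e-6)                                       # ~10,000 coin cells to source 5 W
    print(100 / 5)                                          # one 100 J cord pull: ~20 s at 5 W
    print(100 / 20e-6 / 86400 / 30)                         # ...or about 2 months at 20 uW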
I agree with your point! Old electronics aren't going to be appropriate for every situation, and modern alternatives are superior for lots of situations. But that doesn't mean that it isn't worth maintaining projects to keep the old ones useful. Plenty of people are still using technology developed thousands of years ago even when there are modern alternatives that are thousands of times more efficient. It just suits their situation better. Putting one of these things in something like a tractor or a dam or anything that has enough energy to spare is exactly the use case. And the relative simplicity of old technology can be a benefit if someone is trying to apply it to a new situation with limited resources or knowledge.
What cases are you thinking of when you say "Plenty of people are still using technology developed thousands of years ago even when there are modern alternatives that are thousands of times more efficient"? I considered hand sewing, cultivation with digging sticks instead of tractors, cooking over wood fires, walking, execution by stoning, handwriting, and several other possibilities, but none of them fit your description. In most cases the modern alternatives are less efficient but easier to use, but in every case I can think of where the efficiency ratio reaches a thousand or more in favor of the new technology, the thousands-of-years-old technology is abandoned, except by tiny minorities who are either impoverished or deliberately engaging in creative anachronisms.
I don't think "the relative simplicity of old technology" is a good argument for attempting to control your tractor with a Z80 instead of an ATSAMD20. You have to hook up the Z80 to external memory chips (both RAM and ROM) and an external clock crystal, supply it with 5 volts (regulated with, I think, ±2% precision), provide it with much more current (which means bypassing it with bigger capacitors, which pushes you towards scarcer, shorter-lived, less-reliable electrolytics), and program it in assembly language or Forth. The ATSAMD20 has RAM, ROM, and clock on chip and can run on anywhere from 1.62 to 3.63 volts, and you can program it in C or MicroPython. (C compilers for the Z80 do exist but for most tasks performance is prohibitively poor.) You can regulate the ATSAMD20's voltage adequately with a couple of LEDs and a resistor, or in many cases just a resistor divider consisting of a pencil lead or a potentiometer.
It would be pragmatically useful to use a Z80 if you have an existing Z80 codebase, or if you're familiar with the Z80 but not anything current, or if you have Z80 documentation but not documentation for anything current, or if you can get a Z80 but not anything current. (One particular case of this last is if the microcontrollers you have access to are all mask-programmed and don't have an "external access" pin like the 8048, 8051, and 80C196 family to force them to execute code from external memory. In that case the fact that the Z80 has no built-in code memory is an advantage instead of a disadvantage. But, if you can get Flash-programmed microcontrollers, you can generally reprogram their Flash.)
Incidentally, the Z80 itself "only" uses about 500 milliwatts, and there are Z80 clones that run on somewhat less power and require less extensive external supporting circuitry. (Boston Scientific's pacemakers run on a Z80 softcore in an FPGA, for example, so they don't have to take the risk of writing new firmware.) But the Z80's other drawbacks remain.
The other draw of an established "old architecture" is that it's fairly fixed and sourceable.
There are a bazillion Z80s and 8051s, and many of them are in convenient packages like DIP. You can probably scavenge some from your nearest landfill using a butane torch to desolder them from some defunct electronics.
In contrast, there are a trillion flavours of modern MCUs, not all drop-in interchangeable. If your code and tooling is designed for an ATSAMD20, great, but I only have a bag of CH32V305s. Moreover, you're moving towards finer pitches and more complex mounting: going from DIP to TSSOP to BGA, I'd expect every level represents a significant dropoff in how many devices can be successfully removed and remounted by low-skill scavengers.
I suppose the calculus is different if you're designing for "scavenge parts from old games consoles" versus proactively preparing a hermetically sealed "care package" of parts pre-selected for maximum usability.
It's a good point that older hardware is less diverse. The dizzying number of SKUs with different pinouts, different voltage requirements, etc., is potentially a real barrier to salvage. I have a 68000 and a bunch of PALs I pried out of sockets in some old lab equipment; not even desoldering was needed. And it's pretty common for old microprocessors to have clearly distinguishable address and data buses, with external memory. And I think I've mentioned the lovely "external access" pin on the 8048, 8051, and 80C196 family, though on the 80c196 it's active low.
On the other hand, old microcontrollers are a lot more likely to be mask-programmed or OTP PROM programmed, and most of them don't have an EA pin. And they have a dizzying array of NIH instruction sets and weird debugging protocols, or, often, no debugging protocol ("buy an ICE, you cheapskate"). And they're likely to have really low speeds and tiny memory.
Most current microcontrollers use Flash, and most of them are ARMs supporting OCD. A lot of others support JTAG or UPDI. And SMD parts can usually be salvaged by either hot air or heating the board up on a hotplate and then banging it on a bucket of water. Some people use butane torches to heat the PCB but when I tried that my lungs were unhappy for the rest of the day.
I was excited to learn recently that current Lattice iCE40 FPGAs have the equivalent of the 8051's EA pin. If you hold the SPI_SS pin low at startup (or reset) it quietly waits for an SPI master to load a configuration into it over SPI, ignoring its nonvolatile configuration memory. And most other FPGAs always load their configuration from a serial Flash chip.
The biggest thing favoring recent chips for salvage, though, is just that they outnumber the obsolete ones by maybe 100 to 1. People are putting 48-megahertz reflashable 32-bit ARMs in disposable vapes and USB chargers. It's just unbelievable.
In terms of hoarding "care packages", there is probably a sweet spot of diversity. I don't think you gain much from architectural diversity, so you should probably standardize on either Thumb1 ARM or RISC-V. But there are some tradeoffs around things like power consumption, compute power, RAM size, available peripherals, floating point, GPIO count, physical size, and cost, that suggest that you probably want to stock at least a few different part numbers. But more part numbers means more pinouts, more errata, more board designs, etc.
I appreciate the thought and detail you put into these responses. That's beyond the scope of what I anticipated discussing.
The types of things I had in mind are old techniques that people use for processing materials, like running a primitive forge or extracting energy from burning plant material or manual labor. What's the energy efficiency difference between generating electricity with a hand crank vs. a nuclear reactor? Even if you take into account all the inputs it takes to build and run the reactor, the overall output to input energy ratio is much higher, but it relies on a lot of infrastructure to get to that point. The type of efficiency I'm thinking of is precisely the energy required to maintain and run something vs. the work you get out of it.
In the same way, while old computers are much less efficient, models like these that have been manufactured for decades and exist all over might end up being a better fit in some cases, even with less efficiency. I can appreciate that the integration of components in newer machines like the ATSAMD20 can reduce complexity in many ways, but projects like CollapseOS are specifically meant to create code that can handle low-level complexity and make these things easier to use and maintain.
The Z80 voltage is 5V +/- 5%, so right around what you were thinking. Considering the precision required for voltage regulation is smart, but if you were having to replace crystals, they are simple and low frequency, 2-16 MHz, and lots have been produced, and once again the fact that it uses parts that have been produced for decades and widely distributed may be an advantage.
Your point about documentation is a good one. It does require more complicated programming, but there are plenty of paper books out there (also digitally archived) that in many situations might be easier to locate because they have been so widely distributed over time. If I look at archive.org for ATSAMD20 I come up empty, but Z80 gives me tons of results like this: https://archive.org/details/Programming_the_Z-80_2nd_Edition...
Anyway, thank you again for taking so much time to respond so thoughtfully. You make great points, but I'm still convinced that it's worthwhile to make old hardware useful and resilient in situations where people have limited access to resources, people who may still want to deploy some forms of automation using what's available.
Projects like this one will hopefully never be used for their intended purpose, but they may form a basis for other interesting uses of technology and finding ways to take advantage of available computing resources even as machines become more complicated.
In my sibling comment about the overall systems aspects of the situation, I asserted that there was in fact enormously more information available for how to program in the 32-bit ARM assembly used by the ATSAMD20 than in Z80 assembly. This is an overview of that information, starting, as you did, from the Internet Archive's texts collection.
But the Archive isn't the best place to look. The most compact guide to ARM assembly language I've found is chapter 2 of "Archimedes Operating System: A Dabhand Guide" https://www.pagetable.com/docs/Archimedes%20Operating%20Syst..., which is 13 pages, though it doesn't cover Thumb and more recently introduced instructions. Also worth mentioning is the VLSI Inc. datasheet for the ARM3/VL86C020 https://www.chiark.greenend.org.uk/~theom/riscos/docs/ARM3-d... sections 1 to 3 (pp. 1-3 (7/56) to 3-67 (45/56)), though it doesn't cover Thumb and also includes some stuff that's not true of more recent processors. These are basically reference material like the ARM architectural reference manual I linked above from the Archive; learning how to program the CPU from them would be a great challenge.
I also appreciate your responses! I especially appreciate the correction about the Z80's power supply requirements.
> What's the energy efficiency difference between generating electricity with a hand crank vs. a nuclear reactor?
A hand crank is about 95% efficient. An electromechanical generator is about 90% efficient. Your muscles are about 25% efficient. Putting it together, the energy efficiency of generating electricity with a hand crank is about 21%. Nuclear reactors are about 40% efficient, though that goes down to about 4% if you include the energy cost of building the power plant, enriching the fuel, etc. The advantages of the nuclear reactor are that it's more convenient (requiring less human attention per joule) and that it can be fueled by uranium rather than potatoes.
> Even if you take into account all the inputs it takes to build and run the reactor, the overall output to input energy ratio is much higher. (...) The type of efficiency I'm thinking of is precisely the energy required to maintain and run something vs. the work you get out of it.
The term for that ratio, which I guess is a sort of efficiency, is "ERoEI" or "EROI". https://en.wikipedia.org/wiki/Energy_return_on_investment#Nu... says nuclear power plants have ERoEI of 20–81 (that is, 20 to 81 joules of output for every joule of input, an "efficiency" of 2000% to 8100%). A hand crank is fueled by people eating biomass and doing work at energy efficiencies within about a factor of 2 of the best power plants. Biomass ERoEI varies but is generally estimated to be in the range of 3–30. So ERoEI might improve by a factor of 30 or so at best (≈81 ÷ 3) in going from hand crank to nuclear, and possibly get slightly worse. It definitely doesn't change by factors of a thousand or more.
Even if it were, I don't think hand-crank-generated electricity is used by "plenty of people".
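For what it's worth, here's that arithmetic as a quick Python sketch (the ERoEI ranges are the ones cited above; everything is approximate):

    hand_crank_eff = 0.95 * 0.90 * 0.25   # crank * generator * human muscle ~= 0.21
    print(hand_crank_eff)

    nuclear_eroei = (20, 81)              # joules out per joule invested
    biomass_eroei = (3, 30)               # feeding the person turning the crank

    print(nuclear_eroei[1] / biomass_eroei[0])   # ~27: best-case gain, nowhere near 1000x
    print(nuclear_eroei[0] / biomass_eroei[1])   # ~0.67: worst case, slightly worse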
> projects like CollapseOS are specifically meant to create code that can handle low-level complexity and make these things easier to use and maintain.
I don't think CollapseOS really helps you with debugging the EMI on your RAM bus or reducing your power-supply ripple, and I don't think "ease of use" is one of its major goals. Anti-goals, maybe. Hopefully Virgil will correct me on that if he disagrees.
> if you were having to replace crystals, they are simple and low frequency, 2-16Mhz, and lots have been produced, and once again the fact that it uses parts that have been produced for decades and widely distributed may be an advantage.
I don't think a widely-distributed crystal makes assembly or maintenance easier than using an on-chip RC oscillator instead of a crystal. It does have real advantages for timing precision, but you can use an external crystal with most modern microcontrollers just as easily as with a Z80, the only drawback being that the cheaper ones are rather short on pins. Sacrificing two pins of a 6-pin ATTiny13 to your clock really reduces its usefulness by a lot.
> If I look at archive.org for ATSAMD20 I come up empty, but Z80 gives me tons of results like...
Oh, that's because you're looking for the part number rather than the CPU architecture. If you don't know that the ATSAMD20 is a Cortex-M0(+) running the ARM Thumb1 instruction set, you are going to have a difficult time programming it, because you won't know how to set up your C compiler.
There is in fact enormously more information available for how to program in 32-bit ARM assembly than in Z80 assembly, because it's the architecture used by the Acorn, the Newton, the Raspberry Pi, almost every Android phone ever made, and old iPhones. See my forthcoming sibling comment for information about ARM programming.
Aside from being a much better compilation target for high-level languages like C, ARM assembly is much, much easier than Z80 assembly. And embedded ARMs support a debugging interface called OCD which dramatically simplifies the task of debugging broken firmware.
> models like [Z80s and 6502s] that have been manufactured for decades and exist all over might end up being a better fit
There are definitely situations where Z80s or 6502s, or entire computers already containing them, are more easily available than current ARM microcontrollers. (For example, if you're at my cousin's house—he's a collector of obsolete computers.) However, it's difficult to overstate how much more ubiquitous ARM microcontrollers are. The heyday of the Z80 and 6502 ended in about 01985, at which point a computer using one still cost about US$2000 and only a few million such computers were sold per year. The most popular 6502 machine was the Commodore 64, whose total lifetime production was 12 million units. The most popular 8080-family machine (supporting a few Z80 instructions) was probably the Gameboy, with 119 million units. We can probably round up the total of deployed 8080 and 6502 family machines to 1 billion, most of which are now in landfills.
That means about as many ARMs were being produced every two weeks as 8080 and 6502 machines in history, a speed of production which has probably only accelerated since then. Most of those are embedded microcontrollers, and I think that most of those microcontrollers are reflashable.
Other microcontroller architectures like the AVR are also both more pleasant to program and more abundant than Z80s and 6502s. They also feature simpler and more consistent sets of peripherals than typical Z80 and 6502 machines, in part because the CPU itself is so fast that a lot of the work these obsolete chips need special-purpose hardware for can instead be done in software.
So, I think that, if you want something useful and resilient in situations where people have limited access to resources, people who may still want to deploy some forms of automation using what's available, you should focus on ARM microcontrollers. Z80s and 6502s are rarely available, much less useful, fragile rather than resilient, inflexible, and unnecessarily difficult to use.
> though that goes down to about 4% if you include the energy cost of building the power plant, enriching the fuel, etc.
Rereading this, I don't know in what sense it could be true.
What I was thinking of was that the cost of energy from a nuclear power plant is on the order of ten times as many dollars as the cost of the fuel, largely as a result of the costs of building it, which represents a sort of inefficiency. However, what's being consumed inefficiently there isn't energy; it's things like concrete, steel, human attention, bulldozer time, human lives, etc., collectively "money".
If, as implied by my 4% figure, what was being consumed by the plant construction were actually 22.5x as much energy as comes out of the plant over its lifetime, rather than money, its ERoEI would be about 0.044. It would require the lifetime output of twenty or thirty 100-megawatt power plants to construct a single 100-megawatt nuclear power plant. That is not the case. In fact, as I explained later down in the same comment, the ERoEI of nuclear energy is generally accepted to be in the range of about 10 to 100.
About the return on investment, the methodology is interesting, and I’m surprised that a hand crank to nuclear would increase so little in efficiency. But although the direct comparison of EROI might be small, I wonder about this part from that article:
“It is in part for these fully encompassed systems reasons, that in the conclusions of Murphy and Hall's paper in 2010, an EROI of 5 by their extended methodology is considered necessary to reach the minimum threshold of sustainability,[22] while a value of 12–13 by Hall's methodology is considered the minimum value necessary for technological progress and a society supporting high art.”
So different values of EROI can yield vastly different civilizational results, the difference between base sustainability and a society with high art and technology. The direct energy outputs might not be thousands of times different, but the information output of different EROI levels could be considered thousands of times different. Without a massive efficiency increase, society over the last few thousand years got much more complex in its output. I’m not trying to change terms here just to win an argument but trying to qualify the final results of different capacities of harnessing energy and technology.
I think this gets to the heart of the different arguments we’re making. I’m not in any way arguing that these old architectures are more common in total quantity than ARM. That difference in production is only going to increase. I wouldn’t have known the specific difference, but your data is great for understanding the scope.
My argument is that projects meant to make technology that has been manufactured for a long period of time and has been widely distributed more useful and sustainable are worthwhile, even when we have more common and efficient alternatives. This doesn’t in any way contradict your point about ARM architecture being more common or useful, and I’d be fully in favor of someone extending this kind of project to ARM.
In response to some of the other points: using an external crystal is just an example of how you could use available parts to maintain the Z80 if it needed fixing but you had limited resources. In overall terms, it might be easier to throw away an ARM microcontroller and find 100 replacements for it than even trying to use an external crystal for either one, but again I’m not saying it’s a specific advantage to the Z80 that you could attach a common crystal, just something that might happen in a resource-constrained situation using available parts. Better than the kid in Snowpiercer sitting and spinning the broken train parts at least.
Also, let me clarify the archive.org part. I wasn’t trying to demonstrate the best process for getting info. I just picked that because they have lots of scanned books to simulate someone who needed to look up how to program a part they found. I know it’s using ARM, but the reason I mentioned that had to do with the distribution of paper books on the subject and how they’re organized. The book I linked to starts with very basic concepts for someone who has never programmed before and moves quickly into the Z80, all in one old book, because it was printed in a simpler time when no prior knowledge was assumed.
There are plenty of paper books on ARM too, and probably easier to find, but now that architectures are becoming more complicated, you’re more likely to find sources online that require access to a specific server and have specialized information requiring a certain familiarity with programming and the tools needed for it. More is assumed of the reader.
If you were able to find that one book, you could probably get pretty far in using the Z80 without any familiarity with complex tools. Again, ARM is of course popular and well-documented, but the old Z80 stuff is still out there and simple enough to understand and even analyze with your bare eyes in more detail than you could analyze an ARM microcontroller without some very specific tools.
So all that info about ARM is excellent, but this isn’t necessarily a competition. It’s someone’s passion project who chose a few old, simple, and still-in-production technologies to develop a resilient and translatable operating system for. It makes sense to start with the earlier technology because it’s simpler and less proprietary, but it would also make sense to extend it to modern architectures like ARM or RISC-V. I wouldn’t be surprised if sometime in the future some person or AI did just that. This project just serves as a nice starting point for an idea on resilient electronics.
What's your point? A lot of simple devices are still being manufactured with cheap microcontrollers. Most of them don't even have an OS as such. If society collapses it's not like people are going to scavenge the microcontroller out of their washing machine and use it to reboot civilization.
In https://news.ycombinator.com/item?id=43484415 I outlined some extremely advantageous uses for automatic computation even in unlikely deep collapse situations, for most of which the microcontroller out of your washing machine (or, as I mention in https://news.ycombinator.com/item?id=43487644, your disposable vape or USB charger) is more than sufficient if you can manage to reprogram it.
Even if your objectives are humbler than "rebooting civilization" (an objective I think Virgil opposes), you might still want to, for example, predict the weather, communicate with faraway family members, automatically irrigate plants and build other automatic control systems, do engineering and surveying calculations, encrypt communications, learn prices in markets that are more than a day's travel away, hold and transmit cryptocurrencies, search databases, record and play back music and voice conversations, tell time, set an alarm, carry around photographs and books in a compact form, and duplicate them.
Even a washing-machine microcontroller is enormously more capable of these tasks than an unaided human, though, for tasks requiring bulk data storage, it would need some kind of storage medium such as an SD card.
A pressing need for processing data post-collapse is weather forecasting. Knowing the upcoming weather can help avoid crop failures from planting at the wrong time. Of course the data aggregation would also be very challenging as you need data from remote sites for good forecasting.
It might also be nice to know where (and when) groundwater is safe to drink
our descendents will perhaps not be thrilled to learn about the "time bombs" we have left them, steadily inching into aquifers or up into surface water. that is of course if they are not too distracted locating water of even dubious potability to care
Weather forecasting before computers already relied on rapid telecommunications of low-bandwidth digital data (temperature, pressure, humidity, wind speed and direction, and precipitation) from a network of weather stations. Digital telecommunications is something that computers and radios can provide at enormously lower cost than networks of telegraph cables. See https://news.ycombinator.com/item?id=43484415 for details.
Even a little data from remote sites can provide a huge advantage for forecasting. Temperature, humidity, and air pressure, roughly three bytes, four times a day: 0.001 bps per weather station. (Precipitation and wind speed and direction are pretty useful, but worse cost-benefit.) And collection of that data is very much less labor-intensive when a microcontroller and radio does it for you.
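A rough sketch of that arithmetic (assuming one byte per reading and four reports a day, as above):

    # Per-station data rate for a minimal weather report:
    # temperature, humidity, pressure = 3 bytes, sent 4 times a day.
    bits_per_report = 3 * 8
    reports_per_day = 4
    seconds_per_day = 24 * 60 * 60
    bps = bits_per_report * reports_per_day / seconds_per_day
    print(f"{bps:.4f} bits/second per station")   # ~0.0011 bps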
Other kinds of very-low-bit-rate telecommunications messages that are still extremely valuable:
"Lucretia gravely ill. Hurry."
"I-44 mile 451: bandits."
"Corn $55 at Salem."
"Trump died."
"Springfield captured."
"General Taylor signed ceasefire."
"Livingstone found alive."
The first of these inspired Morse to invent the telegraph; she died before the mail reached him. None of them are over 500 bits even in ASCII, and probably each could be encoded in under 100 bits with some attention to coding, some much less. 100 bits over, say, 2 hours, requires a channel capacity of 0.014 bits per second.
Even without advanced compression algorithms, you could easily imagine the corn message being, say, "<figs>!$05000<ltrs>ZCSXV" in ITA2 "Baudot" code: 14 5-bit characters, 70 bits.
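For concreteness, the bit-budget arithmetic as a tiny sketch (counting the <figs>/<ltrs> shifts as ordinary 5-bit ITA2 characters, and using the 2-hour delivery window from above):

    # How much channel capacity does a short telegram actually need?
    chars_in_message = 14                  # e.g. "<figs>!$05000<ltrs>ZCSXV"
    print(chars_in_message * 5, "bits")                # 70 bits in ITA2
    delivery_window_s = 2 * 60 * 60                    # 2 hours
    print(round(100 / delivery_window_s, 3), "bps")    # ~0.014 bps for a 100-bit message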
Information theory shows that there's no such thing as being out of communication range; it's just a question of what the bit rate of the channel is. But reducing that to practice requires digital signal processing, which is many orders of magnitude more difficult if you are doing it with pencil and paper. It also benefits greatly from precise timekeeping, which quartz resonator crystals make cheap, reliable, robust, and lightweight.
Encryption is another case where an amount of computation that is small for a microcontroller can be very valuable, even if you have to transmit the encrypted message by carving it into wood with your stone knife.
The Bitcoin blockchain in its current form requires higher bandwidth than a weather station network, but still a tiny amount by current internet standards, about 12kbps originally, I think about 26kbps with segwit. Bitcoin (or an alternative with a longer block time) could potentially provide a way to transmit not just prices but actual payments under adverse circumstances. It does require that participants have enough computing power to sign transactions; I think it should be relatively resilient against imbalances of computation power among participants, as long as no 51% attack becomes feasible through collusion.
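Roughly where that number comes from, as a back-of-envelope (my own assumption of full 1 MB pre-segwit blocks every 10 minutes, so only a ballpark):

    # Sustained bandwidth needed just to follow the block chain.
    block_bytes = 1_000_000          # assumed full pre-segwit blocks
    block_interval_s = 600           # target: one block every 10 minutes
    kbps = block_bytes * 8 / block_interval_s / 1000
    print(f"~{kbps:.1f} kbps")       # ~13.3 kbps, same ballpark as the ~12 kbps above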
I wrote, "<figs>!$05000<ltrs>ZCSXV". This should have read, "<figs>!$05500<ltrs>ZCSXV".
Also, see https://news.ycombinator.com/item?id=43487785 for a list of other end-uses for which even a microcontroller would provide an enormous advantage over no electronics at all.
Bitcoin will be useless in that case. Half of the techbros wouldn't even survive the early 90's. In the 80's, forget something like properly learning Forth with Starting Forth and doing something useful.
Bitcoin already withstood a rapid withdrawal of more than half of the mining power over about a month, that time the PRC outlawed Bitcoin mining. And it also survived the relatively sudden collapse of Mt. Gox, which accounted for significantly more than half the trading at the time. And it survived its price collapsing by more than half in 24 hours, from over US$8000 to under US$4000. It seems to have pretty good survival characteristics.
In an environment where there isn't a world hegemon to run something like the post-Bretton-Woods system, international payments, if they are to happen at all, need to be settled somehow. The approach used for the 3000 years up to and including the Bretton Woods period was shipping gold and silver across oceans in boats. Before that, the Mediterranean international economy was apparently a gift economy, while intranational trade in Mesopotamia used clay bills of deposit.
In a hypothetical post-collapse future without a Singularity, there may be much less international trade. But I hope it's obvious that international trade covers a spectrum from slightly advantageous to overwhelmingly advantageous, so it is unlikely to disappear altogether. And Bitcoin has overwhelming advantages over ocean shipping of precious metals. For example, it can't be stolen in transit or lost in a shipwreck, and the latency of a payment is about half an hour rather than six weeks.
And all the blockchain requires to stay alive is about 26 kilobits per second of bisection bandwidth.
With an adjustable hole punch, prong fasteners, (optionally) duct tape for the spine, and (optionally) sheet protectors for the front and back covers, you can crank out a survival library of (open-source) books as fast as pages come out your Brother laser.
I never bothered with fancy acid-free paper. Modern paper has good longevity, but to be safe I use non-recycled, non-whitened (91 brightness) paper.
> With an adjustable hole punch, prong fasteners, (optionally) duct tape for the spine, and (optionally) sheet protectors for the front and back covers, you can crank out a survival library of (open-source) books as fast as pages come out your Brother laser.
This is actually one of the few "post apocalyptic" computer ideas that actually makes some sense to me. Though it would probably still make more sense to pre-print that library than wait until conditions are difficult to do the printing.
Unless your plan is to assume availability of printer paper, you'd still need to store the same volume of blank paper as your library, and you'd be stuck trying to run a printer when things like electricity are much less available.
> it would probably still make more sense to pre-print that library than wait until conditions are difficult to do the printing.
That's what I meant. Sorry I left it unclear. I should have explicitly called that out, thanks.
Basically it's cliff-notes from my shot at an "MVP of home hardcopy backups." No surprise, these simpler techniques (often seen when corporations had internal printing for employee handbooks and such) are better suited to home processing with minimal equipment. All you need is a three-hole punch.
It's not about achieving a "perfect" bookbinding (a real term), or for people who do bookbinding as a hobby. Instead it's a fast/easy/cheap technique for people who just want a hardcopy backup, without needing a special bookbinding vice in the house.
Three ring binders were my obvious first choice, but they're surprisingly costly, somewhat awkward to use, prone to tearing pages, and usually take more space on the shelf.
I'm still WAY better off with my solar panels and ALL the books on an external hard drive. I can't think of ANY situation where a print out of Wikipedia would be better than a digital version.
Wikipedia is an extreme example, since it's highly impractical to print the whole thing. OTOH printing your "Top 20" survival books is quick and affords a nice measure of system-level dissimilar redundancy (not to mention valuable barter goods :D).
In this scenario we have the print version AND the digital version.
You could also say that in the middle of a house fire, with the paper burning, the digital version would be better, but it's pointless to invent scenarios around the assumptions.
But the claim was that there is no scenario at all where print would be better than digital.
> I'm still WAY better off with my solar panels and ALL the books on an external hard drive. I can't think of ANY situation where a print out of Wikipedia would be better than a digital version.
I just gave a scenario where a print out of Wikipedia would be better than a digital version.
> In this scenario we have the print version AND the digital version.
Except this is the exact scenario I've been suggesting all along. If you want access to both versions, you still employ the same DIY bookbinding techniques. Nothing changes.
Of the bookbinding tips I gave, "obliterate your digital copy" was (and I didn't think I had to explain this) not one of the suggested steps. ;-)
> I'm still WAY better off with my solar panels and ALL the books on an external hard drive.
No you're not. One component in your setup gets fried and you lose access to everything.
Paper is its own reader device: it's far more resilient, because it has far fewer dependencies for use. Just think about digital archiving vs paper archiving for a bit, and it becomes clear.
> I can't think of ANY situation where a print out of Wikipedia would be better than a digital version.
Wikipedia would be of little practical value in a post collapse situation. And frankly, it's pretty terrible now for anything besides satisfying idle curiosity.
Whenever I read these kinds of collapse posts I wonder: where do the bodies go in this scenario? Because if that kind of collapse happens the management of seven billion corpses becomes pressing.
Given a land area of 510 million km², in the worst case, that's a sudden event producing 14 bodies per km², about 25 meters from one body to the next. If I were one of a small number of survivors in that situation, I'd dig a mass grave, dump 14 bodies into it, and enjoy my nice, sanitary square kilometer all by myself. Probably boil the water from the well. Until I die of starvation because I don't know how to hunt, or from an infected wound, or something.
In more plausible cases, we're talking about a population collapse where the deaths are either concentrated in cities, spread out over several years, or both. If they're concentrated in cities, maybe avoid the cities for three to six months until they finish rotting. If they're spread out over several years, those who die later will be able to bury those who die earlier; it only takes a few man-hours of labor to dig a grave.
Burn the bodies, let scavengers have at them, or put them in a river and let those downstream deal with them. Probably easier to burn or let scavengers have at them than bury them. Fewer calories burned on your part compared to digging.
Regarding hunting, if you're near a river and it isn't piling up with bodies from upstream, you can take up fishing. It's easier to learn from scratch on your own and requires less effort.
> 510 million km², in the worst case, that's a sudden event producing 14 bodies per km², about 25 meters from one body to the next.
That's the area of the planet; you only get that distribution if the event also redistributes the bodies evenly over the entire globe, including oceans. (Though I'm not sure how you go from 14/km^2 to 25 meter separation?)
> If they're spread out over several years, those who die later will be able to bury those who die earlier; it only takes a few man-hours of labor to dig a grave.
During the covid pandemic, which was around a single percentage point of the world population over a few years, there were reports in the UK and the USA of mass graves, normal death procedures being overwhelmed.
Global supply chain collapse is kinda the "mass starvation because farms can't get fertiliser to grow enough crops, nor ship them to cities" scenario. If you can't hunt or fish, you're probably one of the corpses (very few people will have the land to grow their own crops, especially without fertiliser).
You are right; the correct number was 148.94 million km², which produces 47 corpses per km² of land.
> Though I'm not sure how you go from 14/km^2 to 25 meter separation?
√14 ≈ 4 and I somehow managed to think that 1000m ÷ 4 = 25m. In fact that calculation should have given 250m, and √47 ≈ 7, and 1000m ÷ 7 ≈ 140m. So we're talking about on the order of a city block between corpses.
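Here's the same arithmetic as a tiny function, in case anyone wants to plug in their own numbers (square-grid spacing is a simplifying assumption):

    # Mean spacing between bodies spread uniformly over an area,
    # approximated as a square grid: spacing = 1 km / sqrt(density per km^2).
    from math import sqrt

    def spacing_m(bodies, area_km2):
        density = bodies / area_km2        # bodies per km^2
        return 1000 / sqrt(density)        # metres between neighbours

    print(round(spacing_m(7e9, 148.94e6)))  # ~146 m over land only
    print(round(spacing_m(7e9, 510e6)))     # ~270 m over the whole surface (the ~250 m above rounds sqrt(14) to 4)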
> there were reports in the UK and the USA of mass graves, normal death procedures being overwhelmed.
Yeah, but it wasn't a question of corpses rotting in the streets despite all the survivors digging graves full-time; it was just a question of the usual number of gravediggers being unable to cope. If people had just dug graves themselves for their family members in their yards, the way they do for pets, it wouldn't have been a problem; but that was prohibited.
Anyway, I think people who worry about health risks from corpse pileups due to society collapsing are really worrying about the wrong thing. The corpses won't be piled up; at most, they'll be so far apart that you could walk for days without seeing one unless you're someplace especially flat or with an especially high density of corpses, and in realistic scenarios, they'll just be buried like normal, mostly over years or decades, not left out to rot en masse.
Doom is useful. The reason you stockpile food or grow a crop in spring is to avoid the doom that would come winter, if you didn't. Fearing a possible bad future is fairly fundamental for survival.
I assume there's some significant hard-wiring for this type of emotion, with some people having a different baseline for their sensitivity to it. I also suspect the environment might push regional genetic population to have different sensitivities, depending on how harsh/unstable the environment is. I say "unstable" because I see doom as anxiety about the possibility. For example, in a region with a consistent harsh winter, doom has less use because piling food up is just routine, everyone does it, it's obvious. But, in an unstable environment, with a winter that's only sometimes harsh, you need to fear that possibility of a harsh winter. You're driven by the anxiety of what might be. You stockpile food even though you don't need it now: you're irrational in the instantaneous context, but rational for the long term context. It's a neat future looking emotion that probably evolved closely with intelligence.
I have a small farm, solar, a pond and a ton of farming/gardening books. I for one would LOVE a PipBoy-like cyberdeck with a snapshot of the internet from 2025 when I'm hunkered down with my dogs, sheep and goats.
I am not into this end-of-the-world collapse thing at all, but I like reading about the strong focus on simplicity and support for limited hardware. There is a lot of talk about keeping things simple in software, but very few act on it. I don't need the threat of a global disaster to be interested in running operating systems like that.
Been doing some retro hobby (16-bit, real-mode, 80286) DOS development lately. It is refreshing to look at a system and be able to at least almost understand all the things going on. It might not be the simplest possible system, not the most elegant CPU design, but compared to the bloated monsters we use today it is very nice to work with. DOS is already stretching the limits of what I can keep in my head and reason about without getting completely lost in over-engineered (and leaky) abstraction layers.
Now I want a scavengers' guide to identify machines that might have a compatible microprocessor within them, because I haven't seen a Commodore or Apple II in a long time. Arcades with older cabinets obviously have them, but if most post-apocalyptic media are prophetic, they'll probably be occupied by a gang of young ne'er-do-wells. I suppose, thanks to the College Board, TI-83s (Z80) are still quite common in the US. Are there toys, medical equipment, or vending machines that still use these chips?
I wonder if ESP32s and Arduinos might be more commonly found, though I could see the argument that places with those newfangled chips may be more likely to become radioactive craters in some scenarios.
The Z80 is still an insanely popular microcontroller that can be found in so, so many devices. Head down to your local Walmart's toy aisle and open some of the electronics there (post-collapse, of course) and you'll certainly find a few.
That's not as easy as just dropping a full computer on your desk, but having a low power processor that's easy to find replacements for would be useful. That is, of course, if you spent the time pre-collapse to learn how to make a useful system out of those components, which I suspect is the real goal of Collapse OS.
Can you name two such devices? Because the only thing I've ever found a Z80 in is an S1 MP3 player. I've found ARMs, 8051s, and weird Epson microcontrollers they seem to have never published the instruction set for, but never a Z80.
According to [0], TI were launching new graphic calculators with Z80s as late as 2016 - the TI-84 Plus CE-T Python edition (which is still available on Amazon UK[1][2]!)
Not a hugely compelling argument for "the z80 is still a popular microcontroller", mind.
Well, Z80-compatible but more efficient processors, sure. That pretty well counts because you can still run your own software on them, such as KnightOS, but KnightOS can't compile itself, and I don't think CollapseOS will run on a TI-84.
They only stopped making them last year. Someone was buying them, enough to keep the production lines going, and not to run CP/M.
TI uses them in one of their calculator lines. I've seen gobs of them as embedded controllers in various industrial systems (I have family in manufacturing). I understand a lot of coin-op games (think pinball or pachinko) use them. I've seen them in appliance controller boards (don't recall the brand).
Thanks! That sounds pretty plausible, except that the chips in TI's calculators were low-power Z80 clones, of which there are plenty still in production. Still, "various industrial control systems" is a far cry from "Head down to your local Walmart's toy aisle (...) and you'll certainly find a few."
> except that the chips in TI's calculators were low-power Z80 clones
So? That's like saying "there are no 8051s because Intel doesn't make them", even tho there are millions if not billions of clones made every year (since you'll undoubtedly say "well I haven't seen one", if your car tells you when your tire pressure is low, there's 4 8051s).
If we're talking about the ability to repurpose salvaged chips for new purposes, it barely matters what the instruction set is. What matters enormously about the 8051 in particular is that (1) by default it runs code from mask ROM you can't change, unlike the 8751, and (2) it has a pin called "external access" which lets you force it to run code from external SRAM or EPROM. Neither (1) nor (2) is generally true of the clones. What matters most then is whether you can figure out how to get it to run your code and toggle its pins. Can you load code via JTAG? UPDI? Changing an external ROM chip? Maybe it boots from an SPI Flash? That's what matters most.
(It also matters whether you need things like an external crystal or three power rails.)
I mean, it's convenient when you can use an existing compiler, or at least can find documentation for the instruction set; and 8-bit instruction sets and address buses are constraining enough that they can really make it hard to do things. But these are not nearly as important as being able to get your code running on the chip at all.
So, no, instruction-set-compatible clones (like the ones I mentioned I found in https://news.ycombinator.com/item?id=43488079, the day before your comment) are not interchangeable with Intel 8051s in the context of improvising computational capabilities out of garbage. Pinouts matter. Programming waveforms matter.
With respect to the Z80-clone TI calculators, in https://news.ycombinator.com/item?id=43488344 Virgil explained that they can in fact run CollapseOS, but can't yet self-host because CollapseOS can't yet access their Flash. If you want to use them to control motors or solenoids or something, you still need some pinout information, which I'm not sure we have.
Talk about missing the point, although you accidentally stumbled over it while ranting away...it doesn't matter if it's a clone or an original if you can still hack it.
EDIT: oh, and in case the website goes over quota, try https://new.collapseos.org/ . I haven't thrown the DNS switch yet, but I expect my meager Fastmail hosting will bust today.
I've been meaning to give this a try for a while - I have several of the "supported" systems and I've designed a couple of my own simple Z80 computers over the past few years. Being able to program AVRs from a scavenged e-waste Z80 system seems strangely compelling to me. There's a LOT of AVRs in e-waste.
But, if you absolutely needed to depend on it, I think that other technology might be better for the kinds of things we normally use microcontrollers for. If I need to control electronics so that something runs for a minute, then waits for something else, then does another thing until some other condition is met, it's hard to imagine that a would-be apocalypse-ee would want to use a microcontroller rather than a more easily improvised mechanical cam timer [0]
Could be just about anything, from thermostats to CPAP machines to SFP fiber transceivers. Anything that needs to be just a little bit smart (but usually not internet connected) can have one. For hobbyists, the ATMega328P was the go-to for about 15 years, before ESP32 and Pi Pico took over more recently. I'm sure Arduino helped a lot with the AVR's proliferation.
There are no one-time programmable versions of AVR as far as I know, so they can all have their internal Flash reprogrammed.
Awesome, thanks! I know them mostly from hobbyist contexts, and I noted that Atmel kind of stopped updating them after the ATMega328PU (and then failed as a company), so I wondered if they were kind of a failed product line, because there weren't enough hobbyists to sustain the product line. It's nice to hear I was wrong!
>The page you have tried to access is not available because the owner of the file you are trying to access has exceeded our short term bandwidth limits. Please try again shortly.
Sorry about the mess, I didn't anticipate the load. The website has been moved, but DNS propagation must occur. https://new.collapseos.org/ might work if your DNS still points to the old server.
Check out BunnyCDN. I got a few, no-frills sites running on it. They're cheap. They also have a paid, permanent cache. It's absorbing both burst and AI crawler traffic for me right now.
> under what conditions do you believe the civilization you're living under will collapse? [...] Oh, you just haven't made that list yet? [...] Or you could take the question from the other side and list the conditions that are necessary for your civilization to continue to exist. That's an interesting list too.
I've always dismissed collapse talk as "batshit crazy", as the author says. But he raises good points here. Maybe I'm not batshit crazy enough.
In the Wired article the author says he thinks climate change will disrupt trade routes, but obviously society would route around the damage as long as it remained profitable to do so. The only scenario in which this hypothetical makes sense would be mass casualty events like pandemics or wars that prevent travel and trade.
So we're talking about a hypothetical situation in which global trade has halted almost completely for some reason, and has been stopped for a very, very long time. This means that the majority of the world's population are either dead or dying, which means that YOU will be either dead or dying as well.
Even if we accept the premise (a tall ask) AND we assume you will happen to be one of the survivors because you're "special", wouldn't it make more sense to prep by building a cargo ship so trade can resume sooner than it does to build apocalypse proof micro-controller code?
Don't be so Amerigo-centrist. Europe and the Mediterranean have been trading for literal millennia. Guess why? Civilization was born there because the tribes could share knowledge and goods like crazy.
Meanwhile, America didn't even exist for the Romans.
I'm not well informed enough to have an opinion at that scale (few are). However, the potential death of Pax Americana implies that trade routes will be different in the future, not that people will inexplicably stop trading with one another completely. In fact, War is often a great stimulator of trade.
America long predates the Roman Empire. Hell, Americans had already invented writing before the Roman Empire, maybe before the Roman Republic: https://en.wikipedia.org/wiki/Zapotec_script
I don't think that's correct. They didn't have the knowledge to produce a typical ARM CPU, or, for that matter, any CPU; they didn't know what computers were, or why they were important, nor did they have the quantum theory or materials science necessary to fabricate useful silicon chips. Probably a collapse would lose the knowledge locked up inside TSMC, Samsung, and Intel. But we'd still know about zone refining, ion implantation, self-aligning gates, hafnia high-κ dielectrics, RISC, superscalar processors, cache hierarchies, etc.
If we forget about typical ARM CPUs for the moment, and just look at ARM CPUs in general, the ARM 2 was supposedly 27000 transistors according to https://en.wikipedia.org/wiki/Transistor_count. If you had to hand-solder 27000 SOT23 transistors onto PCBs at 10 seconds per transistor, it would take you a couple of weeks of full-time work to build one by hand, and probably another week or two to find and fix the assembly errors. It would be maybe a square meter or two of PCBs. At today's labor prices such a CPU would cost on the order of US$5000. At today's 1.3¢ per MOSFET (LBSS84LT1G and 2N7002 from JLCPCB's basic parts list a few years ago), we're talking about US$400 of transistors.
(Incidentally, Chuck Moore's MuP21 chip was 9000 transistors, so we know how to make an acceptably convenient-to-program chip in a lot less space than the ARM. It just does less computation per cycle. A big chunk of the ARM 2 was the multiplier, which Moore left out.)
It probably wouldn't run at more than 5 million instructions per second (maybe 2 VAX MIPS, slower than a 386), and because it's built out of discrete power transistors, it would use a lot more power than the original ARM. But it would run ARM code, and the supply chain only needs to provide two types of MOSFET, PCBs, and solder.
US$5400 for a 2-VAX-MIPS CPU is not a competitive price for computing power in today's world, and if you want to drive the cost down, you need to automate, specialize, and probably diversify the supply chain. If you were building it out of 74HC00-class chips, for example, you'd probably need a dozen or so SKUs, but each chip would be equivalent to about 20 transistors, so you'd only need about 1400 chips, currently costing about 10¢ each, cutting your parts price to about US$140 and your assembly time to probably a day or two of work, so maybe US$500 including debugging. And your clock rates would be higher and power usage lower, because a gate input built out of 2N7002 and similar power MOSFETs will have a gate capacitance around 60pF, while a 74HC08 is more like 6pF. We're down to US$640, which is still far from economically competitive but sure looks a lot better.
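The cost arithmetic above, collected in one place (a sketch; the per-joint time and per-part prices are just the figures already quoted):

    # Back-of-envelope for hand-assembling an ARM2-class CPU (27,000 transistors).
    transistors = 27_000
    hours = transistors * 10 / 3600                    # 10 seconds per transistor
    print(f"{hours:.0f} hours of soldering")           # ~75 h, roughly two work-weeks

    print(f"${transistors * 0.013:.0f} in discrete MOSFETs")   # ~$351 at 1.3 cents each
    chips_74hc = transistors // 20                     # ~20 transistors per 74HC-class chip
    print(chips_74hc, "chips,", f"${chips_74hc * 0.10:.0f} in 74HC parts")  # 1350 chips, ~$135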
The 74HC08 and family are CMOS clones of the SN7400 series launched by Texas Instruments in 01966, at a time when most of the world electronics supply chain (both providing their materials and buying the chips to put into products) was inside the US. It couldn't have happened in Cameroon or Paraguay for a variety of reasons, one of which was that they weren't sufficiently prosperous due to a lack of international trade. But that's somewhat incidental—what matters for the feasibility is that the supply chain had the money it needed, not where that money came from. Unlike the SR-71 project, they didn't have to import titanium from Russia; unlike the US ten years later, they didn't have to import energy from Saudi Arabia.
Using surplus machinery and wafers from the existing semiconductor supply chain, Sam Zeloof has reached in his garage what he says is the equivalent of Intel's 10μm process from 01971: http://sam.zeloof.xyz/category/semiconductor/
On this basis, it seems to me that making something like the 74HC08 from raw materials is something that a dozen or so people could manage, as long as they had existing equipment. It wouldn't even require a whole city, much less a worldwide supply chain.
So why don't we see this? Why is it not happening if it's possible? Well, we're still talking about building something with 80386-like performance for US$700 or so. This isn't currently a profitable product, because LCSC will sell you a WCH RISC-V microcontroller that's several times that fast for 14¢ in quantity 500 (specifically https://www.lcsc.com/product-detail/Microcontrollers-MCU-MPU...), and it includes RAM, Flash, and several peripherals.
If you want to build something like the actual ARM2 chip from 01986, you'll need to increase transistor density by another factor of 25 over what Zeloof has done and get to a 2μm process, slightly better than the process used for the 8086 and 68000: https://en.wikipedia.org/wiki/List_of_semiconductor_scale_ex...
Now, as it happens, typical ARM CPUs today are 48MHz, and Dennard scaling gets you to 25–50MHz at around 800nm, like the Intel 80486 from 01989. So to make a typical ARM CPU, you don't have to catch up to TSMC's 6nm process. You can get by with an 800nm process.
So you might need the work of hundreds or even thousands of people to be able to make something like a typical ARM CPU, and it would probably take them a year or two of full-time work. This works out to an NRE cost on the order of US$50 million. Recouping that NRE cost at 14¢ per chip, assuming a 7¢ cost of goods sold, would require you to sell 700 million chips. And, using an antiquated process like that, you aren't going to be able to match WCH's power consumption numbers, so you probably aren't going to be able to dominate the market to such an extent, especially if you're paying more for your raw materials and machinery.
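Spelling out the break-even arithmetic (same assumed figures as above):

    # Chips needed to recoup the NRE of recreating an ~800 nm process.
    nre = 50e6            # US$, order-of-magnitude estimate from above
    price = 0.14          # assumed selling price per chip, US$
    cogs = 0.07           # assumed cost of goods sold per chip, US$
    print(f"{nre / (price - cogs) / 1e6:.0f} million chips to break even")   # ~714 million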
So it's possible, but it's unprofitable, because the worldwide supply chain can make a better product for a lower cost than this hypothetical Silicon River Rouge plant.
Make no mistake, though: if the worldwide supply chain were to vanish, typical ARM CPUs would be back in less than a decade. We're currently watching the PRC play through this in real time with SMIC. The USA kneecapped 天河-2, the top supercomputer on the TOP500 list, in 02015, and has been fervently attempting to cut off the PRC's semiconductor industry from the world supply chain ever since, on the theory that the US government should have jurisdiction over which companies inside the PRC are allowed to sell to the PRC's military forces. They haven't quite caught up, but with the HiSilicon CPUs used in Huawei's Mate60 cellphones, they've reached 7nm: https://www.youtube.com/watch?v=08myo1UdTZ8
I know that my own comment won't add real value to this conversation, but I wanted to take the time to say it anyhow: This kind of comment is the reason I come to HN. Thank you for taking the time to share your knowledge with us.
It is mostly handwringing about unrealistic scenarios as these types generally assume that all militaries, emergency response groups, every book, factory, and source of expertise completely disappears and it's up to you, the lone hacker on a farm or something to become the source of expertise through a squirrel-cache of carefully curated information. That's very unlikely to happen simultaneously for every single country and if it does it's probably an extinction event.
Targeting smartphones seems like a highly realistic way to make something like this work. The average westerner has like three of them lying around in desk drawers and they already have a number of useful peripherals attached. It is much less obvious, especially to the layman, how to turn the microcontroller in the coffee machine into something like a useful gate control device.
Yes, it's kind of a LARP situation, but imagine a future scenario where some hacker (who also is physically resilient and well-protected in the face of apocalypse) has to figure out how to boot or get some system operating that might control solar panels. Not knowing the architecture - can you boot it up? Can you analyze existing binaries, ports, and other features and get it crudely operating? This sounds like a hell of a video game.
My way of securely accessing HTTP-only sites these days is to check if the site's been archived. Not sure if it's changed since, but here's a snapshot of the page on Jan 11th 2025[0].
I think it's really cool to create minimalist OSes. Something about perfboard projects and old, slow CPUs tickles my buttons. But I'm struggling to come up with a use case for this project in the End Times. In theory, this allows me to flash a ROM connected to an old through-hole, 8/16-bit CPU like a Z80, 8086, 6809 and 6502. I guess my issue is why and how would I do that during the end of the world?
I can't think of a way to come into possession of MPUs like that without intentionally buying them ahead of time. And if I'm going to stockpile those, I might as well stockpile a more capable MCU or MPU instead and flash it with something else. 99.9% of what I'd want to do with minimalist computers in the apocalyptic wasteland would be just fine without an OS. Bare-bones MCUs work spectacularly for control systems, wireless cryptosystems, data logging, etc.
Maybe I didn't look hard enough in the README [1], but I don't see how I'd bootstrap a system like this without already having a much more capable system on standby. Which comes back to the issue of... why?
Collapse OS is fully self-hosting. Once you have such a system, you can improve it and deploy it elsewhere from within. But yes, your initial deployment will come from a POSIX machine. This is why I talk about two stages of collapse.
That's kind of a bummer, but still neat to have built-in self-hosting. I think I've seen videos of perfboard computers that allow manual data entry with pushbuttons and manual clocks. I could see that being extended to punch cards. (Not trying to be flippant; I think that could be an interesting extension to this sort of project. Have a bucket of parts, a perfboard, and some paper? Let's flash an operating system.)
It seems like any serious endeavor on this front would focus on ARM Cortex-M0, RISC-V, and Xtensa ESP cores. Those are the ones that can be recovered by the billions. You can recover tens of 1980s-level computers out of most any consumer device these days; even lightbulbs often have one.
For $0.10 I can buy an MCU that can bit-bang a keyboard, mouse, sound, and VGA, with 2x the memory and 96 times the processing power of my old 6502-based computer. An ESP32 is much, much more capable, better than an old Pentium machine, and has WiFi, USB, Bluetooth, etc., and costs $0.70-2 on a module. They can be found in home automation lightbulbs, among other things.
Espressif has shipped over a billion ESP32 chips since the platform launched.
Sure, we should have a 6502 based solution, as it has a lot of software out there and a minimal transistor count, making it possible to hand-build. But for a long time we will be digging up esp32s and they are much more useful.
"By "scavenge-friendly electronic parts", I mean parts that can be assembled with low-tech tools. I mostly mean parts with a "through hole" mounting type (in opposition with "surface mount").
"But I do tons of hobbyist electronics with surface mount!", some could say. Yeah, sure, but how do you wire it? You order a PCB from OSH Park? That's not very scavenge-friendly.
" - https://new.collapseos.org/why.html
Surely the easiest way to get a useful OS running on 8 bit CPUs would be to start with an existing one like CP/M that already has an ecosystem of applications and hardware designs including networking.
Perhaps I should drag my Osborne out of the cellar and see if the floppies still work.
"This web page is being served by a completely home-built computer: Bill Buzbee's Magic-1 HomebrewCPU. Magic-1 doesn't use an off-the-shelf CPU. Instead, its custom CPU is built out of ~200 74 series TTL chips."
"Magic-1 is running a new port of Minix 2.0.4, compiled with a retargeted LCC portable C compiler. The physical connection to the internet is done using a native interface based on Wiznet's w5300 TCP/IP stack."
While I hate to condone using TTL rather than CMOS, this is extremely cool!
The CPU is documented at https://homebrewcpu.com/. Unfortunately the TCP/IP information was only posted on Google Plus, which has now been memory-holed by Google.
This misses the point entirely. If you want access to digital tools after a collapse, then you design a (resilient, hard to break, easy to repair) transformer that can convert ANY kind of electricity into e.g. 12v DC. Pair it with a paper manual that describes a few ways to generate power from raw materials (diy batteries, ways to use copper wire if you can get some, etc).
Old laptops, or old boards with CRTs. Sure, power is an issue, but with your transformer (which is present in most 80s computers) and large components that are easy to replace and often even to make, you stand a much better chance of that stuff surviving decades of use. No chance with laptops; maybe if you stockpile 100 X220 ThinkPads, since you can swap parts between them. Still not as robust as some of the 80s monsters I have, which got wet (roof leak), got too hot, and had parts break but still work 40 years later, some with no fixes at all. CRT monitors are very robust as well, and repairable-ish (you can use them even with a lot of damage). I have enough computer, DIY, and farming experience to probably stay alive and kicking for a while if this happens, but I don't think I'm that interested in trying.
The point is that the computer is the easy part. It's the fun part to think about, but it's not important for achieving the goal.
The hard part is powering it, when the infrastructure to generate clean electricity is gone. What will you plug your transformer into? So solve that, create robust sources for electric power, and the rest can be solved with a few bulk laptop purchases off ebay.
Not just power generation but also the calorific and opportunity cost of working on that (or indeed anything that does not immediately pay off in energy terms). Or else you die before your compiler is done ;)
If you have more than one survivor, you quickly need to learn to trade and cooperate to maximize your energy return, or else you all die, one by one.
This. Skill sharing among a small tribe is the most effective way to survive a catastrophe, and I am concerned modern societies are poorly conditioned to do it well.
Speaking from a North American perspective, kids are educated in how to succeed in a national/global economy, not how to build small communities and develop/share useful skills. TBH, the latter feels "obsolete" nowadays. Maybe that's a problem.
Right: now you have ~29V and a bunch of amps. With other trash combos, you have other voltages. And that's assuming you have a working voltmeter to even know what you have. Different trash will give you a different output, and what will you do with that?
Plug it into the 'universal transformer' I was talking about, and you're in business. Known power output (that won't fry your precious electronics) and you don't have to care much what the input is.
If you want to build electronics, such as your proposed 12-volt buck-boost converter, you'll find your job immensely easier if you can build them with microcontrollers. That goes double for test equipment. So I don't think it misses the point for that reason.
The problem with claims of imminent societal collapse is they treat "collapse" like an event, with the world neatly divided into before and after. Historically, societal collapse is much more fuzzy. No neat single cause. No single moment when things flipped. You can only truly know the collapse has happened well in retrospect.
In terms of computing, this means that any strategy looking at a before/after collapse framing is missing the entire interesting middle part - how do we transition from this high-tech to low-tech world over the course of decades. I seriously doubt it can happen by anticipating what computing we might need and just leapfrogging there.
Same author has a sort of "missing link" version that is better suited to modern PCs. Doesn't answer your philosophical questions about civilizations etc. but the goal isn't to start building Z80s out of transistors anytime soon.
I think the vision here is more likely to play out, if any were to. Not many fabs that can make modern chips, and the biggest one is under the business end of Chinese artillery so. Gotta make do with whatever we have even if new supply dries up.
https://duskos.org/
It's the same author.
Uh, wait, I've properly read your comment now.
Still, I miss a ZMachine interpreter for DuskOS. And networking. Gopher, IRC and email (along with maybe an NNTP client) are a must.
Plausible, but there really may not be a middle part longer than a few hours or days, depending on the mechanism (and I'm not even considering nuclear war here). I recall reading a first-hand account of surviving in Sarajevo during the war, and one of the things that most struck the author was how quickly -a mere couple of days- everything changed from going about the day as normal to apocalyptic survival-mode hellscape. And not because of specific battle events, but the breakdown of supply chains.
We really have no way to predict it. It is even likely to be different in different locales.
He kinda deals with that - he has also built another OS called DuskOS, which is when we still have modern computers lying around, but can't make more of them. It's the OS you would use prior to devolving all the way to CollapseOS
I feel like he has considered everything. He also has AbacOS which is an abacus-based operating system, for when we run out of energy, then HamstOS for a hamster-wheel powered operating system
HamstOS is just the microcontroller for the power supply. Pros: it is voice activated and fault tolerant. Cons: only responds to voice prompts beginning "Hey Mr. Binky". Also, it secretly hates you.
No, he doesn't.
I don’t think it’s the focus of this, but we do now live in a time where the collapse could potentially happen in an hour.
Or worse, decades in the future when historians look back, they might determine that collapse started before 2025 (perhaps 9/11, 2008 financial collapse, covid, the re-emergence of populist fascism, AI, microplastics, climate change ... take your pick).
Which means we might be living in collapse now, we just don't know it yet. "The Long Emergency" as James Howard Kunstler put it.
Like falling into a black hole; you don't notice as it happens, only looking back with more context.
Oh I see now, thank you for the helpful analogy! :)
Empires that collapsed not by military conquest didn’t know that they had collapsed until years later.
Is there any Empire that collapsed without military conquest? You can argue the collapse predated the conquest, but there’s always a conquest to seal the deal.
The British Empire didn't collapse due to military conquest by the Germans, it fell due to economic conquest by the United States (see the Atlantic Charter and U.S. intervention in the Suez Crisis).
Hasn't Kunstler been predicting the imminent end of oil supplies since the 1990s?
When a professional prognosticator's prognostications continue to be falsified by actual events, year after endless year, at some point don't people stop listening to them?
"When a professional prognosticator's prognostications continue to be falsified by actual events, year after endless year, at some point don't people stop listening to them?"
Yet people still listen to Musk about when Tesla will have autonomous driving.
we'll be terraforming mars soon I just know it!
> When a professional prognosticator's prognostications continue to be falsified by actual events, year after endless year, at some point don't people stop listening to them?
Funnily enough - not whatsoever! And the benefits of this amazing fact are reaped by economists* the world over, year after year.
* And, to be fair, journalists, politicians, various flavours of youtuber, etc, etc.
what do you think about the idea, though?
The creation of Fox News
Why do you think that?
I'm assuming you're talking about the internet, networked services, etc, because that's the only conceivable way a collapse could happen that quickly... except all modern infrastructure we rely on like that has backups and redundancy built in, and largely separated failure domains, so that such a failure would never be able to happen that quickly.
I assume they mean nuclear war.
Yes, exactly. Even if some fabs survived, the ability to supply and staff them would not last long. The world's ability to manufacture advanced computers would disappear.
https://en.m.wikipedia.org/wiki/Cybergeddon
Even if the collapse is a fuzzy, decades-long one, I think there would also be an ever-increasing probability of the remainder happening in under an hour, as something entirely forgotten fails.
That is DuskOS, from the same creator apparently.
The frog will boil slowly and discontinuously. The net effects are that by 2030:
- Industrial food will gradually become extremely expensive and homesteading will become more popular. Crop failures and famines will be routine as food security and international trade wane.
- Unemployment will soar with automation and overall decreased purchasing power.
- Crime and theft will soar as policing declines (Austin, TX is already understaffed by 300 officers and doesn't respond to property crimes for 48 hours)
- Civil/national/world wars.
- Hundreds of millions of climate refugees migrating across and between continents. If you think scapegoating of "illegal" immigrants is bad now, just wait until hate in the forms of racism, xenophobia, and classism becomes turbocharged. Expect militarized borders guarded by killer robots.
In the past couple of years I've worked on a Ghidra extension that can export any subset of a program as a relocatable object file. I built that because I tried to reverse-engineer/decompile a video game and had unconventional opinions on how to proceed about it.
I didn't design it for a post-collapse scenario, but one can salvage program parts from binary artifacts and create new programs with it, just like how a mechanic in a Mad Max world can scavenge car parts from a scrapyard and create new contraptions. It deconstructs the classical compile-assemble-link workflow in a manner that tends to cause migraines to sane persons.
I've even managed to slightly bend the rules around ABI and sling binary code across similar-ish platforms (PlayStation to Linux MIPS and Linux x86 to Windows x86), but to really take this to its logical conclusion I'd need a way to glue together incompatible ABIs or even ISAs, in order to create program chimeras from just about any random set of binaries.
I did this once years ago (on Mac OS classic). I had an mp3 player I liked for its low CPU usage, but didn't like the UI. I figured out the entry points for the mp3 decoding routines and turned it basically into a library to call from my own UI.
Virtual machines/emulators are one extreme, recreating the environment it ran in so no human examination of the particular program is necessary. The approach you describe is at the other end, using bits of programs directly in other code to do basically black-box functions that aren't worth figuring out and coding properly.
This is a technique used in "living off the land" in infosec circles. An attacker might not have root privileges, but there might be a bit of code in this program that does one thing with escalated privilege, and a bit of code in that program that does another thing with escalated privilege, and if you have enough of these you can stitch together a "gadget", a series of jumps or calls to these bits of privileged code, to get a shell or do what you want.
This is a bit different. In your case, you're reusing code and data lying around in a running process to do your bidding from within.
Here, I'm actually stealing code and data from one or more executables and turning these back into object files for further use. Think Doctor Frankenstein, but with program parts instead of human parts.
I only operate on dead patients in a manner of speaking, but you could delink stuff from a running process too I suppose. I don't think it would be useful in the context of return-oriented programming, since all the code you can work with is already loaded in memory, there's no need to delink it back to relocatable code first.
I worked on one of those too: https://github.com/cncNet/petool
Neat! Looks like the closest equivalent in your tooling would be pe2obj, which repacks a PE executable as a COFF object file.
Unless I'm mistaken, it doesn't seem to do anything in particular w.r.t. relocations, which is the tricky part about delinking. My educated guess is that repacking might be enough for purely self-contained position-independent code and data, but anything that contains an absolute address will lead to dangling references to the original program's address space.
My tooling recovers relocatable section bytes and recreates relocation tables in order to export object files that are actually relocatable, regardless of their contents.
It's been a long time, but we manually annotated where the relocations should be, and other edits, with the patch sections.
What a neat idea.
We could use emulators to stitch together code from binaries compiled for particular CPUs.
You've tickled more than one person's brain here :D
Please consider writing up something about this binary mashup toolkit. You have taken an unusual path and shown how far one can go... it's worthy of sharing more widely.
Good luck!
The only unusual part about my toolkit is the delinker. Since it outputs relocatable object files, you can then make use of them with conventional tooling.
I've mostly used a linker to build new programs with the parts I've exported. Some of my users use objdiff to compare the output of their decompilation efforts against recreated object files. Others use objcopy or homemade tools to further modify the object files (mostly tweaking the symbol table) prior to reuse. One can generate assembly listings with objdump, although Ghidra already gives you that.
Ironically, the hardest part about using my delinker I think (besides figuring out errors when things go wrong) is just realizing and imagining what you can do with it. It's rather counter-intuitive at first because this goes against anything one might learn in CS 101, but once you get it then you can do all kinds of heretical things with it.
To take this to the next level, beyond adding support for more ISAs and object file exporters, I'd need to generate debugging symbols in order to make debugging the pilfered code a less painful experience. There are plenty of ways delinking can go wrong if the Ghidra database is incorrect or incomplete and troubleshooting undefined behavior from outer space can get tedious.
But in the context of a post-collapse scenario, what one would be really after is some kind of linker that can stitch together a working program from just about any random bunch of object files, regardless of ABI and ISA. My experiments on cross-delinking remained rather tame (same ISA, similar ABI) because gluing all that mess by hand gets really complicated really fast.
Fascinating. I like it! Thanks for sharing.
Is that extension publicly available?
You can find it here: https://github.com/boricj/ghidra-delinker-extension
I've demonstrated a bunch of use-cases on my blog, ranging from porting software to pervasive binary patching and pilfering application code into libraries, all done on artifacts without source code available.
I have one user in particular who successfully used it on a huge (7 MiB) program in a span of a couple of weeks, most of them spent fixing bugs on my end. They then proceeded to crank up the insanity to 11 by splitting that link-time optimized artifact into pieces and swapping parts out willy-nilly. That is theoretically impossible because LTO turns the entire program into a single translation unit subject to inter-procedure optimizations, so functions will exhibit nonstandard calling conventions you can't replicate with freshly built source code, but I guess anything's possible when you binary patch __usercall into MSVC.
I should get back to it once work is less hectic. Delinking is an exquisitely intricate puzzle to solve algorithmically and I've managed to make that esoteric but powerful reverse-engineering technique not only scalable to megabytes worth of code, but also within reach of everyday reverse-engineers.
Wow, this is one of the most interesting things I’ve come across. Definitely could learn a lot by tinkering with this.
Thanks!
That’s insanely impressive. Your blog is a gold mine, thank you for sharing
I've been meaning to write more articles and not just about delinking or reverse-engineering. I've been hoarding some lower-grade cursed stuff, like a git proxy/cache written in a hundred lines of bash, or a Jenkins plugin that exposes a Debian source repository as a SCM provider and enables one to trivially create a Debian package build server with it (complete with dependency management).
Guess I'd better unbrick my Chromebook first. I still don't know how changing its battery managed to bork an unmodded, non-developer mode Chromebook so thoroughly that the official recovery claims it doesn't know what model it is ("no defined device identities matched this device.").
If "our global supply chain [will] collapse before we reach 2030," why is having a some type of computer a priority?
I feel these kinds of projects may be fun, but they're essentially just LARPing. Though this one seems to have a more reasonable concept than most (e.g. "My raspberry pi-based "cyberdeck" with offline wikipedia will help!" No, you need a fucking farming manual on paper.).
Beyond LARPing the feasibility of building new computers post societal collapse, the website's proposal for collapse being so imminent is... weakly justified?
I'm not an optimist in today's societal climate, but the rationale [1] stems from peak oil and cultural bankruptcy. Peak oil is (was?) the concern of limited oil production in the face of static or increasing demand. The 2019 peak from big oil wasn't about declining production, but declining demand [2]. Which is good, if the ideal is long-lasting migration to renewables.
I won't try to predict the future w/r/t what current societal trends mean for the long-term success of global supply chains, but I would be greatly surprised if cultural bankruptcy alone causes the complete collapse of society in the next 5-10 years.
[1] http://collapseos.org/civ.html
[2] https://www.bp.com/content/dam/bp/business-sites/en/global/c...
I think full scale collapse is unlikely. I'd rate balkanization / large scale internet disruptions as pretty likely to happen within the next few years due to a combination of:
- lightly protected and globally reachable attack surface
- increasing geopolitical tensions
- the bizarre tendency to put everything possible online, i.e. IoT devices that always need internet, resources that rely on CDNs to function, other weird dependencies.
You're thinking of a PC. This project doesn't address that type of computer at all. 8-bit computers programmed in Forth suggests very strongly that this is aimed at control engineering, an area where having a few computers makes a big difference to what you can do in an industrial process. It might also be useful for simple radio communication.
In addition to a controller, being able to do non-trivial calculations will always be useful.
Solar powered micro-controllers might be the way to go that could hook up to crude automated farm machines. Depending on the life span of batteries, solar powered ebook readers to store those farming manuals and other guides, solar powered computers for spreadsheets and planning software, and solar powered cb ham radios for communications with built in encryption keys.
In a societal collapse, the only thing that you list that I think would be worth the effort would be "solar powered cb ham radios for communications."
Crude automated [solar powered] farm machines - would probably be useless. Animal power is probably the way to go. Or steam. Go to a threshing bee sometime: I've seen a tractor that runs on wood.
Solar powered ebook readers to store those farming manuals and other guides - the life span of batteries would be short, get that shit on paper fast.
Solar powered computers for spreadsheets and planning software - plan in your head or use a paper spreadsheet.
Computers might be everywhere today, and you personally might not know how to do anything without a computer, but practically no one had a computer 40-45 years ago, and literally no one had a computer when society was last at a "collapse" level of technology. COMPUTERS ARE NOT NECESSARY.
Just because computers weren't around 40-50 years ago doesn't mean that computers won't be very handy to have around in a post-collapse world. Technology is very much path-dependent: the future does not look like the past, and incorporates everything that has happened up to that point. The world after the collapse of the Roman Empire did not look at all like the world before the Roman Republic, and incorporated many of the institutions and infrastructure left behind by the Roman Empire. It continued to use Roman coinage, for example, the Latin language, Roman roads, Roman provincial government, and so on.
The point of having computers is simply that they perform certain tasks orders of magnitude faster than humans. They're a tool, no more and no less. Before computers, a "calculator" was a person with paper and a slide rule, and you needed hundreds of them to do something like compute artillery trajectories, army logistics, machine tool curves, explosive lensing, sending rockets into space, etc. Managing to keep just one solar-powered calculator working for 10 years after a collapse frees up all those people to do things like farming. Keeping a solar-powered electric tractor working frees up all those farmers, and frees up the animals for eating.
IMHO this project is at least operating under the right principles, i.e. make the software work on scavenged parts, control your dependencies, be efficient with your computations, focus on things you can't do with humans.
> The world after the collapse of the Roman Empire did not look at all like the world before the Roman Republic, and incorporated many of the institutions and infrastructure left behind by the Roman Empire.
Our modern institutions and infrastructure depend on impossibly-complex, precariously-fragile world-spanning supply chains that rely on untold quantities of highly-skilled labor whose own training and employment is dependent upon having enough pre-existing material prosperity that 90% of the population is exempt from needing to grow their own food.
Meanwhile, the supply chain for the pre-Roman and post-Roman worlds were not very different. They were producing tin in Britain in 2000 BC, and they were producing tin in Britain in 1000 AD. Crucially, the top of the production pyramid (finished goods) was still close to the bottom of it (raw materials harvestable with minimal material dependencies) without a hundred zillion intervening layers of middlemen.
> Meanwhile, the supply chain for the pre-Roman and post-Roman worlds were not very different.
This isn't true! We know of huge differences between who was producing what goods and where between Roman and post-Roman Britain. To give one example: ceramic production came to a complete halt, and people essentially had to make do with whatever pre-existing ceramics they had had beforehand. Sure, an agricultural worker living on their own land off in the countryside might not have noticed a huge difference -- but someone who had been living by a legionary fortress, or one of the primary imperial administrative centers, or in one of the burgeoning villas, certainly would have had to make significant changes across the period.
Yes, I'm guilty of painting with an overbroad brush here in an attempt to emphasize the difference in scale between then and now. It's not the case that the collapse of Roman authority had no effect on the people of the former territories; it certainly led to an indisputable loss of living conditions across the board, including in industrial output. But my point is that, in the event of a modern collapse, we aren't going to revert to some "checkpoint" of irreversible technological progress; we could just as likely revert to the living conditions of a denizen of the remnants of the Eastern empire as of 600 AD (and that might be an optimistic outcome!). Technological progress is not a one-way street, is my meaning, and from our lofty perch, we are entirely capable of crashing hard to Earth.
And yet a consumer durable is a consumer durable regardless of whether the manufacturer stays in business, at least before the modern practice of giving everything an Internet connection and making it phone home to keep working (which CollapseOS explicitly avoids). The Mac LC that I got in 1991 would still boot up in 2012. The CD-ROMs that I burned in the late 90s, I was able to transfer to external hard disk in 2021. My solar panels and PowerWall continue to work when the power and Internet goes down.
Post-collapse society will look very different from modern Information-age society, and will definitely have a lot more people growing their own food. Knowing how to identify plants, and the care instructions (sun/soil/water/space requirements) for each variety you're growing, and how other people have handled problems like pests and rot, can save you several years of failed harvests. Several years of failed harvests is likely the difference between surviving and not surviving.
I'm not intending to denigrate CollapseOS; they appear to be deliberately taking precautions for a specific degree of technological catastrophe, and it seems worthwhile for someone to prepare for that, regardless of how likely one thinks it may be.
> Just because computers weren't around 40-50 years ago doesn't mean that computers won't be very handy to have around in a post-collapse world.
They won't be handy to you or me, because we'd need to put our efforts into securing basic needs. You won't have the luxury to spend your time scavenging parts to build an 8-bit computer to do anything, then spending a bunch more time programming it. Even if you did, how would it give you an advantage in acquiring food, shelter, or fuel over much simpler solutions using more basic technologies like paper?
Computers are the kind of thing people with a lot of surplus food spend their time with.
> The point of having computers is simply that they perform certain tasks orders of magnitude faster than humans. They're a tool, no more and no less. Before computers, a "calculator" was a person with paper and a slide rule, and you needed hundreds of them to do something like compute artillery trajectories, army logistics, machine tool curves, explosive lensing, sending rockets into space, etc.
Computers are useful for those tasks, but those are tasks only giant organizations like governments need to do. That's not you in a post-collapse world.
> Managing to keep just one solar-powered calculator working for 10 years after a collapse frees up all those people to do things like farming.
I think you have that backwards. No one's going to skip needed farming work and starve so they can go compute artillery trajectories. If they need to farm, they'll go without the artillery computations.
> Keeping a solar-powered electric tractor working frees up all those farmers, and frees up the animals for eating.
I address that up-thread, but solar-powered electric tractors are a fantasy. Even if such a thing existed, it would wear out, break down, and become irreparable long before technological civilization could be rebooted, so you might as well assume it doesn't exist in your planning.
Also, I don't think you're thinking things through: an animal can both be used to do work and (later) be eaten. If you're very poor, which you would be after some kind of civilization collapse, you don't eat animals very often.
Having a government in a box when everyone around you is scrounging for food makes you king, particularly if you also managed to save a couple militarized drones through the collapse. That's a pretty enviable position to be in.
The point, as with every capital investment, is to make more efficient the labor of the people who are securing those basic needs, so that you can free them up for work progressively higher on the value chain.
During the collapse itself, the way to do this is pretty easy: you kill the people who have food, shelter, or fuel but are not aligned with you, and give it to people who are aligned with you. And then once you have gotten everyone aligned with you, you increase the efficiency of the people who are doing the work. Saving even just one working tractor can cut the labor requirements from farming enough to support a village from several hundred people to one or two people. You will not have petrol in a post-collapse world, so better hope it's an electric tractor, or drop a scavenged electric motor + EV battery into an existing tractor. Use scavenged solar panels for power, there's plenty of that where I am.
All this requires that you know how things work, so you can trace out what to connect to what and repurpose electronic controls and open up the innards of the stuff you find abandoned on the street, and that's where having a computer and a lot of downloaded datasheets and physical/electronic/mechanical/chemical principles available will help.
> Having a government in a box when everyone around you is scrounging for food makes you king, particularly if you also managed to save a couple militarized drones through the collapse. That's a pretty enviable position to be in.
Come the fuck on. A fucking 8-bit computer (even a fucking 64-bit computer) is not a fucking "government in a box." And where the fuck are you going to get your "couple militarized drones"? Assuming they're not suicide drones (where "a couple" is not much), how long will they last? How useless would they be without spare parts, maintenance, and ammunition?
We live in the fucking real world, not some videogame where you can find a goddamn robot in a cave still functioning after 500 years and lethal enough for a boss-battle.
> You will not have petrol in a post-collapse world, so better hope it's an electric tractor, or drop a scavenged electric motor + EV battery into an existing tractor. Use scavenged solar panels for power, there's plenty of that where I am.
Look: if they don't have petrol, they won't have battery factories either. Batteries wear out. Your fantasy electric tractor will be just as useless as a petrol one in short order.
There is middle ground between individuals and governments… I may not need automatic accounting and inventory via spreadsheets at a small scale, but being able to model the next 3 days of weather based on local conditions without any expectation of online communications could come in pretty handy
> I may not need automatic accounting and inventory via spreadsheets at a small scale, but being able to model the next 3 days of weather based on local conditions without any expectation of online communications could come in pretty handy
Alright. You have a computer in your possession that is vastly more powerful than an 8-bit machine built from scavenged parts.
1. Do you actually "model the next 3 days of weather based on local conditions without any expectation of online communications" with it?
2. If not, do you know how to build the required sensor suite and write the software to do that?
I feel like you're misunderstanding computers as magic boxes that can do some useful thing with little effort. But this is supposed to be a forum of software engineers: building a weather forecasting system would be hard to do even with full access to a university library, Digikey, and the money to work on it full time. But we're talking about doing it with scavenged components while you're hungry and looking for food.
Weather forecasts get to be useful when you have samples from a wide range of locations. A few weather stations on maybe a few acres of land wouldn't really get you decent weather predictions. You wouldn't know about some disturbance in the upper atmosphere leading to the jet stream pushing that cold winter blast further South than typical leading to the freeze that destroys your crops. You wouldn't know about that hurricane growing off the shore.
I 100% agree that computers are not necessary. But regardless of that, the ability to program microcontrollers is still a superpower and if you can have it, you have a hell of an edge.
> But regardless of that, the ability to program microcontrollers is still a superpower and if you can have it, you have a hell of an edge.
I really disagree with that. It will give practically no edge. It's a specialist skill that's only really useful in the context of an already computerized society for mass production or big one-offs.
If you have a collapse, I think the assumption is there would be little to no mass production of advanced goods (hence, the scavenging concept). Then you're left with big one-offs, which are things large organizations like governments build, and not all the time.
I dunno, microcontrollers could help in this zombie apocalypse world we're imagining.
I would think even at small scale, having things like 3d printers, CNC machines, networked video surveillance and alarm systems, solar arrays etc. would be very beneficial.
Absolutely, though the top priorities would be much simpler things like food, clean water and shelter.
Process control seems like the big one. Precise tools for precise measurements and monitoring are pretty high up the tech tree, so if you can say "we saved a box of Arduinos and sensors from the Before Times", you can get those capabilities back sooner, and potentially use them as references for tools built with more renewable resources.
In a societal collapse, the only thing that you list that I think would be worth the effort would be "solar powered cb ham radios for communications."
Considering how much potentially invaluable info is only/mostly/only easily available as a PDF, I'm thinking a working ereader would be of nontrivial value.
Now...if there was only a way to crunch a PDF on an 8-bit processor I recovered from my washing machine...
- pdftotext from poppler-tools under Linux/BSD
- wv/odt2txt and friends
- cp/m can read TXT files generated from gnuplot
- A Forth with Starting Forth is hugely valuable, ditto with a math book like Spivak's Calculus.
Not a collapse, but a network attack on infra makes most modern OSes unusable; they need to be constantly updated. If you can salvage some older machine with DuskOS+networking, simple gopher and IRC clients/servers will work with really low bandwidth (2-3 kbps and less).
- pdftotext from poppler-tools under Linux/BSD
Based on xpdf. Probably not 8-bit capable.
- wv/odt2txt and friends
From OpenOffice. Probably not 8-bit capable.
Point me to a PDF-to-something-useful converter that runs in 64K bytes and can handle the 'PDF as a wrapper around some non-text image of the document' case and we can talk. Seriously...I'd be fascinated.
- cp/m can read TXT files generated from gnuplot
Not sure how that helps. And can you port gnuplot to run in 8-bit/64k?
* a network attack on infra makes most modern OSes unusable, *
Ridiculous, unless your definition of 'usable' is 'unless I can get to TwitFaceTubeIn and watch cat videos ima gonna die!'. If civilization collapses and the network goes away tomorrow, my Debian 12, FreeBSD 14 and NetBSD 10 machines will work exactly as well as they do today until I can't power them and/or the hardware dies (sans email and web, of course). Yeah, the windows 10/11 things will bitch and moan constantly, and I assume MacOS too, but even with degraded functionality, it's far from 'unusable'. And I'll be able to load Linux or BSD on them so no worries.
they need to be constantly updated
No, they don't. Updates come in 2 broad categories: security fixes and feature releases. Post-collapse and no network makes security much less urgent, and no new features is the new normal...get used to it. I have gear that runs HP/UX 10 (last support in 2003); still runs fine and delivering significant value.
And that ignores DOS, Win3, XP and such, which are still (disturbingly) common.
will work with really low bandwidth
You mean....with a network?
I meant, well, not 8 bit capable, but you can send the generated text files to them.
On 'gnuplot' for CP/M... doing a simple chart from X and Y values paired in two TSV columns can be done from Forth or Pascal or whatever you have that can read two arrays.
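Something like this is the kind of chart I mean (a minimal Python sketch; the filename and the 40-column bar width are just made up for illustration, and the same thing is easy in Forth or Pascal):

    # Read two tab-separated columns (x, y) and print a text-mode bar chart.
    import sys

    rows = []
    with open(sys.argv[1]) as f:              # e.g. data.tsv, one "x<TAB>y" per line
        for line in f:
            x, y = line.rstrip('\n').split('\t')
            rows.append((x, float(y)))

    peak = max(y for _, y in rows) or 1.0
    for x, y in rows:
        print(f'{x:>12} | ' + '*' * int(40 * y / peak))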
I'm not stating a solution to read future PDF files. Forget the future ones; what I mean is to 'convert' the ones we currently have.
I'm at a Spanish pubnix/tilde, a public unix server. Here I have a script which converts RSS feeds into plain text (with the help of a cron job and sfeed) so they're readable over gopher. These can be read even from the crustiest DOS machines in Latin America and from Windows 95/98/XP machines. I even pointed people to a working Retrozilla release, and it's spreading quickly.
They are at least able to read news too with a gopher/gemini client and News Waffle, with a client written in TCL/TK saving tons of bandwidth. The IronTCL port will work on the spot on XP machines. Download, decompress, run 'launch.bat' ('lanzar.bat' in Spanish). The ZIP file weighs 15MB. No SSE2 is required, nor tons of RAM.
Compare that to a Chrome install. And Chrome's requirements.
The 2nd/3rd world might not be under an apocalypse, but they don't have the reliability of the first world. And lots of folks adored the News Waffle service, saving 95% of the bandwidth.
Instead of the apocalypse, think about 3rd world guys or rural America. Something like https://lite.cnn.com and https://neuters.de will work even under natural disasters with really reduced bandwidth. Or https://telae.net for Google Maps searches.
gopher://magical.fish has news feeds, an English to French/Spanish (and so on) translator, good links to blogs, games and even TPB search. These can be run on any machine, or with Lagrange under Android. And, yes, it might work better under a potential earthquake/flood than the web.
I was opining about reading or converting PDFs on an 8-bit processor. And you're...I dunno what this is but it's certainly not a response to anything I said.
I read the book "The Knowledge: How to Rebuild Our World from Scratch", and to some extent that also felt like LARPing, but I enjoyed it nonetheless.
It also left me wanting more. It has pretty extensive references at the end, I wonder if anyone's put together a collection of all the referenced materials?
Did you read the entire post?
>Some people doubt that computers will stay relevant after a civilizational collapse. I mostly agree. The drastic simplification of our society will mean that we have a lot less information to manage. We'll be a lot more busy with more pressing activities than processing data.
>However, the goal of Collapse OS is not to save computing, but electronics. Managing electricity will stay immensely useful.
I think that's a notable goal. We will conduct business with pen and paper, but given that even in a collapse scenario there will be lots of remnants of the current era lying around, we might as well try and make use of them. It's one of the reasons I really don't like the people collecting e-waste in bulk and then melting down the chips for tiny scraps of gold. So much effort went into making those chips, there has got to be a better way to preserve these things for use later.
Civilization doesn't even have to collapse for a project like this to be useful. Like you said, there's lots of e-waste. If you happened to live in a place that ended up with a lot of this stuff but didn't have a lot of infrastructure, you could possibly build up some convenient services with something like this. I like the idea of building software to make hardware less reliant on external resources in general. Over time, it could be useful to have more resilient electronics, because we seem to be designing machines to be more reliant on specific networked infrastructure every year.
What services? For most modern services the big cost inputs are things like human labor, utilities, and real estate. Reusing obsolete hardware doesn't gain anything. It's likely to be a net negative if it takes up more space, uses more electricity, and requires more maintenance.
A lot of functions of electronics don't require tons of processing power or efficiency. Microcontrollers can be used for just about anything. General purpose computing can be put to whatever purpose you can imagine.
Things supported by this OS like the Z80, 8086, 6502 etc. use around 5-10W. Using simple parts to control complicated machines is a standard operation, and even advanced electronics tend to use a lot of parts using older manufacturing techniques because it's more efficient to keep the old processes running.
Here's a fun article with some context about old processors like this still in production: https://hackaday.com/2022/12/01/ask-hackaday-when-it-comes-t...
If you're running a tractor, sure, 5 watts is not a big deal. But there are a lot of hypothetical post-collapse circumstances where such a high power usage would be prohibitive. Consider, for example, the kinds of radio stations you'd need for the kinds of weather and telecommunications uses I discussed in https://news.ycombinator.com/item?id=43484415, which benefit from being placed on inaccessible mountaintops and running unattended for years on end.
5 watts will drain a 100-amp-hour car battery in 10 days and is basically infeasible to get from improvised batteries made with common solid metals. Current mainstream microcontrollers like an ATSAMD20 are not only much nicer to program but can use under 20 μW, twenty thousand times less. A CR2032 coin cell (220mAh, 3V) can provide 20 μW for about 4 years. But the most the coin cell can provide at all is about 500 μW, so to run a 5-watt computer you'd need 10,000 coin cells. Totally impractical.
And batteries are a huge source of unreliability. What if you make your computing device more reliable by eliminating the battery? Then you need a way to power it, perhaps winding a watchspring or charging a supercapacitor. Consider winding up such a device by pulling a cord like the one you'd use to start a chainsaw. That's about 100 newtons over about a meter, so 100 joules. That energy will run a 5W Z80 machine for 20 seconds, so you have to yank the cord three times a minute, or more because of friction. That yank will run a 20 μW microcontroller for two months.
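For anyone who wants to check those numbers, here's a quick Python sketch; the 12 V figure for the car battery is my assumption, the rest are the figures quoted above:

    CAR_BATTERY_J = 100 * 12 * 3600      # 100 Ah at 12 V (assumed), in joules
    COIN_CELL_J   = 0.220 * 3 * 3600     # CR2032: 220 mAh at 3 V, in joules
    PULL_J        = 100 * 1.0            # one chainsaw-style yank: ~100 N over ~1 m

    P_Z80 = 5.0                          # watts, whole Z80-class system
    P_MCU = 20e-6                        # watts, low-power modern microcontroller

    print(CAR_BATTERY_J / P_Z80 / 86400)          # ~10 days of Z80 on a car battery
    print(COIN_CELL_J / P_MCU / (86400 * 365))    # ~3.8 years of MCU on one coin cell
    print(P_Z80 / 500e-6)                         # ~10,000 coin cells to source 5 W
    print(PULL_J / P_Z80)                         # ~20 seconds of Z80 per cord pull
    print(PULL_J / P_MCU / 86400)                 # ~58 days of MCU per cord pull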
I agree with your point! Old electronics aren't going to be appropriate for every situation, and modern alternatives are superior for lots of situations. But that doesn't mean that it isn't worth maintaining projects to keep the old ones useful. Plenty of people are still using technology developed thousands of years ago even when there are modern alternatives that are thousands of times more efficient. It just suits their situation better. Putting one of these things in something like a tractor or a dam or anything that has enough energy to spare is exactly the use case. And the relative simplicity of old technology can be a benefit if someone is trying to apply it to a new situation with limited resources or knowledge.
Well, I disagree with yours!
What cases are you thinking of when you say "Plenty of people are still using technology developed thousands of years ago even when there are modern alternatives that are thousands of times more efficient"? I considered hand sewing, cultivation with digging sticks instead of tractors, cooking over wood fires, walking, execution by stoning, handwriting, and several other possibilities, but none of them fit your description. In most cases the modern alternatives are less efficient but easier to use, but in every case I can think of where the efficiency ratio reaches a thousand or more in favor of the new technology, the thousands-of-years-old technology is abandoned, except by tiny minorities who are either impoverished or deliberately engaging in creative anachronisms.
I don't think "the relative simplicity of old technology" is a good argument for attempting to control your tractor with a Z80 instead of an ATSAMD20. You have to hook up the Z80 to external memory chips (both RAM and ROM) and an external clock crystal, supply it with 5 volts (regulated with, I think, ±2% precision), provide it with much more current (which means bypassing it with bigger capacitors, which pushes you towards scarcer, shorter-lived, less-reliable electrolytics), and program it in assembly language or Forth. The ATSAMD20 has RAM, ROM, and clock on chip and can run on anywhere from 1.62 to 3.63 volts, and you can program it in C or MicroPython. (C compilers for the Z80 do exist but for most tasks performance is prohibitively poor.) You can regulate the ATSAMD20's voltage adequately with a couple of LEDs and a resistor, or in many cases just a resistor divider consisting of a pencil lead or a potentiometer.
It would be pragmatically useful to use a Z80 if you have an existing Z80 codebase, or if you're familiar with the Z80 but not anything current, or if you have Z80 documentation but not documentation for anything current, or if you can get a Z80 but not anything current. (One particular case of this last is if the microcontrollers you have access to are all mask-programmed and don't have an "external access" pin like the 8048, 8051, and 80C196 family to force them to execute code from external memory. In that case the fact that the Z80 has no built-in code memory is an advantage instead of a disadvantage. But, if you can get Flash-programmed microcontrollers, you can generally reprogram their Flash.)
Incidentally, the Z80 itself "only" uses about 500 milliwatts, and there are Z80 clones that run on somewhat less power and require less extensive external supporting circuitry. (Boston Scientific's pacemakers run on a Z80 softcore in an FPGA, for example, so they don't have to take the risk of writing new firmware.) But the Z80's other drawbacks remain.
The other draw of an established "old architecture" is that it's fairly fixed and sourcable.
There are a bazillion Z80s and 8051s, and many of them are in convenient packages like DIP. You can probably scavenge some from your nearest landfill using a butane torch to desolder them from some defunct electronics.
In contrast, there are a trillion flavours of modern MCUs, not all drop-in interchangeable. If your code and tooling is designed for an ATSAMD20, great, but I only have a bag of CH32V305s. Moreover, you're moving towards finer pitches and more complex mounting-- going from DIP to TSSOP to BGA mounting, I'd expect every level represents a significant dropoff of how many devices can be successfully removed and remounted by low-skill scavengers.
I suppose the calculus is different if you're designing for "scavenge parts from old games consoles" versus proactively preparing a hermetically sealed "care package" of parts pre-selected for maximum usability.
It's a good point that older hardware is less diverse. The dizzying number of SKUs with different pinouts, different voltage requirements, etc., is potentially a real barrier to salvage. I have a 68000 and a bunch of PALs I pried out of sockets in some old lab equipment; not even desoldering was needed. And it's pretty common for old microprocessors to have clearly distinguishable address and data buses, with external memory. And I think I've mentioned the lovely "external access" pin on the 8048, 8051, and 80C196 family, though on the 80c196 it's active low.
On the other hand, old microcontrollers are a lot more likely to be mask-programmed or OTP PROM programmed, and most of them don't have an EA pin. And they have a dizzying array of NIH instruction sets and weird debugging protocols, or, often, no debugging protocol ("buy an ICE, you cheapskate"). And they're likely to have really low speeds and tiny memory.
Most current microcontrollers use Flash, and most of them are ARMs supporting OCD. A lot of others support JTAG or UPDI. And SMD parts can usually be salvaged by either hot air or heating the board up on a hotplate and then banging it on a bucket of water. Some people use butane torches to heat the PCB but when I tried that my lungs were unhappy for the rest of the day.
I was excited to learn recently that current Lattice iCE40 FPGAs have the equivalent of the 8051's EA pin. If you hold the SPI_SS pin low at startup (or reset) it quietly waits for an SPI master to load a configuration into it over SPI, ignoring its nonvolatile configuration memory. And most other FPGAs always load their configuration from a serial Flash chip.
The biggest thing favoring recent chips for salvage, though, is just that they outnumber the obsolete ones by maybe 100 to 1. People are putting 48-megahertz reflashable 32-bit ARMs in disposable vapes and USB chargers. It's just unbelievable.
In terms of hoarding "care packages", there is probably a sweet spot of diversity. I don't think you gain much from architectural diversity, so you should probably standardize on either Thumb1 ARM or RISC-V. But there are some tradeoffs around things like power consumption, compute power, RAM size, available peripherals, floating point, GPIO count, physical size, and cost, that suggest that you probably want to stock at least a few different part numbers. But more part numbers means more pinouts, more errata, more board designs, etc.
I appreciate the thought and detail you put into these responses. That's beyond the scope of what I anticipated discussing.
The types of things I had in mind are old techniques that people use for processing materials, like running a primitive forge or extracting energy from burning plant material or manual labor. What's the energy efficiency difference between generating electricity with a hand crank vs. a nuclear reactor? Even if you take into account all the inputs it takes to build and run the reactor, the overall output to input energy ratio is much higher, but it relies on a lot of infrastructure to get to that point. The type of efficiency I'm thinking of is precisely the energy required to maintain and run something vs. the work you get out of it.
In the same way, while old computers are much less efficient, models like these that have been manufactured for decades and exist all over might end up being a better fit in some cases, even with less efficiency. I can appreciate that the integration of components in newer machines like the ATSAMD20 can reduce complexity in many ways, but projects like CollapseOS are specifically meant to create code that can handle low-level complexity and make these things easier to use and maintain.
The Z80 voltage is 5V+/-5%, so right around what you were thinking. Considering the precision required for voltage regulation is smart, but if you were having to replace crystals, they are simple and low frequency, 2-16MHz, and lots have been produced; once again, the fact that it uses parts that have been produced for decades and widely distributed may be an advantage.
Your point about documentation is a good one. It does require more complicated programming, but there are plenty of paper books out there (also digitally archived) that in many situations might be easier to locate because they have been so widely distributed over time. If I look at archive.org for ATSAMD20 I come up empty, but Z80 gives me tons of results like this: https://archive.org/details/Programming_the_Z-80_2nd_Edition...
Anyway, thank you again for taking so much time to respond so thoughtfully. You make great points, but I'm still convinced that it's worthwhile to make old hardware useful and resilient in situations where people have limited access to resources, people who may still want to deploy some forms of automation using what's available.
Projects like this one will hopefully never be used for their intended purpose, but they may form a basis for other interesting uses of technology and finding ways to take advantage of available computing resources even as machines become more complicated.
In my sibling comment about the overall systems aspects of the situation, I asserted that there was in fact enormously more information available for how to program in the 32-bit ARM assembly used by the ATSAMD20 than in Z80 assembly. This is an overview of that information, starting, as you did, from the Internet Archive's texts collection.
Searching the Archive instead for [arm thumb programming] I find https://archive.org/details/armassemblylangu0000muha https://archive.org/details/digitaldesigncom0000harr_f4w3 https://archive.org/details/armassemblyforem0000lewi https://archive.org/details/SCE-ARMref-Jul1996 (freely available!) https://archive.org/details/armassemblylangu0000hohl https://archive.org/details/armsystemarchite0000furb https://archive.org/details/learningcomputer0000upto https://archive.org/details/raspberrypiuserg0000upto_i5z7 etc.
But the Archive isn't the best place to look. The most compact guide to ARM assembly language I've found is chapter 2 of "Archimedes Operating System: A Dabhand Guide" https://www.pagetable.com/docs/Archimedes%20Operating%20Syst..., which is 13 pages, though it doesn't cover Thumb and more recently introduced instructions. Also worth mentioning is the VLSI Inc. datasheet for the ARM3/VL86C020 https://www.chiark.greenend.org.uk/~theom/riscos/docs/ARM3-d... sections 1 to 3 (pp. 1-3 (7/56) to 3-67 (45/56)), though it doesn't cover Thumb and also includes some stuff that's not true of more recent processors. These are basically reference material like the ARM architectural reference manual I linked above from the Archive; learning how to program the CPU from them would be a great challenge.
There's a lovely short tutorial at https://www.coranac.com/tonc/text/asm.htm as well (43 pages), and another at https://www.mikrocontroller.net/articles/ARM-ASM-Tutorial (109 pages). And https://azeria-labs.com/writing-arm-assembly-part-1/ et seq. is probably the most popular ARM tutorial. None of these is as well written as Raymond Chen's introductory Thumb material: https://devblogs.microsoft.com/oldnewthing/20210615-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210616-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210617-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210625-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210624-46/?p=10... https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210601-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210602-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210603-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210604-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210607-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210608-00/?p=10... (I'd link an index page but I couldn't find one.) Chen covers most of the pragmatics of using the Thumb instruction set well.
There's an ARM Thumb assembler in μLisp (which can itself run on embedded ARMs) at https://github.com/technoblogy/lisp-arm-assembler, which of course explains all the instruction encodings, documented at http://forum.ulisp.com/t/an-arm-assembler-written-in-lisp/12.... Lots of free software already runs on the chip, including FreeRTOS.
https://mcuoneclipse.com/2016/08/14/arm-cortex-m-interrupts-... covers the Cortex-M interrupt system, and lcamtuf has written an excellent tutorial for getting the related ATSAMS70J21 up and running https://lcamtuf.substack.com/p/mcu-land-part-3-baby-steps-wi....
Stack Overflow has 12641 questions tagged [arm] https://stackoverflow.com/questions/tagged/arm, as opposed to 197 for [z80]. Most of these are included in the Kiwix ZIM files of SO like https://download.kiwix.org/zim/stack_exchange/stackoverflow.... (see https://library.kiwix.org/?lang=eng&q=&category=stack_exchan...).
I also appreciate your responses! I especially appreciate the correction about the Z80's power supply requirements.
> What's the energy efficiency difference between generating electricity with a hand crank vs. a nuclear reactor?
A hand crank is about 95% efficient. An electromechanical generator is about 90% efficient. Your muscles are about 25% efficient. Putting it together, the energy efficiency of generating electricity with a hand crank is about 21%. Nuclear reactors are about 40% efficient, though that goes down to about 4% if you include the energy cost of building the power plant, enriching the fuel, etc. The advantages of the nuclear reactor are that it's more convenient (requiring less human attention per joule) and that it can be fueled by uranium rather than potatoes.
> Even if you take into account all the inputs it takes to build and run the reactor, the overall output to input energy ratio is much higher. (...) The type of efficiency I'm thinking of is precisely the energy required to maintain and run something vs. the work you get out of it.
The term for that ratio, which I guess is a sort of efficiency, is "ERoEI" or "EROI". https://en.wikipedia.org/wiki/Energy_return_on_investment#Nu... says nuclear power plants have ERoEI of 20–81 (that is, 20 to 81 joules of output for every joule of input, an "efficiency" of 2000% to 8100%). A hand crank is fueled by people eating biomass and doing work at energy efficiencies within about a factor of 2 of the best power plants. Biomass ERoEI varies but is generally estimated to be in the range of 3–30. So ERoEI might improve by a factor of 30 or so at best (≈81 ÷ 3) in going from hand crank to nuclear, and possibly get slightly worse. It definitely doesn't change by factors of a thousand or more.
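Spelled out as a quick Python check, using the figures just quoted:

    # Hand-crank electricity: efficiency is the product of the stages.
    crank, generator, muscle = 0.95, 0.90, 0.25
    print(crank * generator * muscle)             # ~0.21, i.e. ~21%

    # ERoEI comparison (joules out per joule in), per the cited ranges.
    nuclear_low, nuclear_high = 20, 81
    biomass_low, biomass_high = 3, 30
    print(nuclear_high / biomass_low)             # best case: roughly 27x better
    print(nuclear_low / biomass_high)             # worst case: slightly worse than 1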
Even if it were, I don't think hand-crank-generated electricity is used by "plenty of people".
> projects like CollapseOS are specifically meant to create code that can handle low-level complexity and make these things easier to use and maintain.
I don't think CollapseOS really helps you with debugging the EMI on your RAM bus or reducing your power-supply ripple, and I don't think "ease of use" is one of its major goals. Anti-goals, maybe. Hopefully Virgil will correct me on that if he disagrees.
> if you were having to replace crystals, they are simple and low frequency, 2-16Mhz, and lots have been produced, and once again the fact that it uses parts that have been produced for decades and widely distributed may be an advantage.
I don't think a widely-distributed crystal makes assembly or maintenance easier than using an on-chip RC oscillator instead of a crystal. It does have real advantages for timing precision, but you can use an external crystal with most modern microcontrollers just as easily as with a Z80, the only drawback being that the cheaper ones are rather short on pins. Sacrificing two pins of a 6-pin ATTiny13 to your clock really reduces its usefulness by a lot.
> If I look at archive.org for ATSAMD20 I come up empty, but Z80 gives me tons of results like...
Oh, that's because you're looking for the part number rather than the CPU architecture. If you don't know that the ATSAMD20 is a Cortex-M0(+) running the ARM Thumb1 instruction set, you are going to have a difficult time programming it, because you won't know how to set up your C compiler.
There is in fact enormously more information available for how to program in 32-bit ARM assembly than in Z80 assembly, because it's the architecture used by the Acorn, the Newton, the Raspberry Pi, almost every Android phone ever made, and old iPhones. See my forthcoming sibling comment for information about ARM programming.
Aside from being a much better compilation target for high-level languages like C, ARM assembly is much, much easier than Z80 assembly. And embedded ARMs support a debugging interface called OCD which dramatically simplifies the task of debugging broken firmware.
> models like [Z80s and 6502s] that have been manufactured for decades and exist all over might end up being a better fit
There are definitely situations where Z80s or 6502s, or entire computers already containing them, are more easily available than current ARM microcontrollers. (For example, if you're at my cousin's house—he's a collector of obsolete computers.) However, it's difficult to overstate how much more ubiquitous ARM microcontrollers are. The heyday of the Z80 and 6502 ended in about 01985, at which point a computer using one still cost about US$2000 and only a few million such computers were sold per year. The most popular 6502 machine was the Commodore 64, whose total lifetime production was 12 million units. The most popular 8080-family machine (supporting a few Z80 instructions) was probably the Gameboy, with 119 million units. We can probably round up the total of deployed 8080 and 6502 family machines to 1 billion, most of which are now in landfills.
By contrast, we find ARMs in things like not just the Gameboy Advance but the Anker PowerPort Atom PD 2 USB-C charger http://web.archive.org/web/20250101181745/https://forresthel... and disposable vapes https://ripitapart.com/2024/04/20/dispo-adventures-episode-1... https://old.reddit.com/r/embedded/comments/1e6iz4a/chinese_c... — and, as of 02021, ARM tells us 200 billion ARMs had been shipped https://newsroom.arm.com/blog/200bn-arm-chips and were then being produced at 900 ARMs per second.
That means about as many ARMs were being produced every two weeks as 8080 and 6502 machines in history, a speed of production which has probably only accelerated since then. Most of those are embedded microcontrollers, and I think that most of those microcontrollers are reflashable.
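To spell that comparison out, here's a rough Python check using ARM's own 02021 figure:

    arms_per_second = 900                     # ARM's stated 02021 production rate
    fortnight = 14 * 86400                    # seconds in two weeks
    print(arms_per_second * fortnight / 1e9)  # ~1.1 billion ARMs per fortnight,
                                              # vs ~1 billion 8080/6502 machines ever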
Other microcontroller architectures like the AVR are also both more pleasant to program and more abundant than Z80s and 6502s. They also feature simpler and more consistent sets of peripherals than typical Z80 and 6502 machines, in part because the CPU itself is so fast that a lot of the work these obsolete chips need special-purpose hardware for can instead be done in software.
So, I think that, if you want something useful and resilient in situations where people have limited access to resources, people who may still want to deploy some forms of automation using what's available, you should focus on ARM microcontrollers. Z80s and 6502s are rarely available, much less useful, fragile rather than resilient, inflexible, and unnecessarily difficult to use.
> though that goes down to about 4% if you include the energy cost of building the power plant, enriching the fuel, etc.
Rereading this, I don't know in what sense it could be true.
What I was thinking of was that the cost of energy from a nuclear power plant is on the order of ten times as many dollars as the cost of the fuel, largely as a result of the costs of building it, which represents a sort of inefficiency. However, what's being consumed inefficiently there isn't energy; it's things like concrete, steel, human attention, bulldozer time, human lives, etc., collectively "money".
If, as implied by my 4% figure, what was being consumed by the plant construction were actually 22.5x as much energy as comes out of the plant over its lifetime, rather than money, its ERoEI would be about 0.044. It would require the lifetime output of twenty or thirty 100-megawatt power plants to construct a single 100-megawatt nuclear power plant. That is not the case. In fact, as I explained later down in the same comment, the ERoEI of nuclear energy is generally accepted to be in the range of about 10 to 100.
This is some quality information!
About the return on investment, the methodology is interesting, and I’m surprised that a hand crank to nuclear would increase so little in efficiency. But although the direct comparison of EROI might be small, I wonder about this part from that article:
“It is in part for these fully encompassed systems reasons, that in the conclusions of Murphy and Hall's paper in 2010, an EROI of 5 by their extended methodology is considered necessary to reach the minimum threshold of sustainability,[22] while a value of 12–13 by Hall's methodology is considered the minimum value necessary for technological progress and a society supporting high art.”
So different values of EROI can yield vastly different civilizational results, the difference between base sustainability and a society with high art and technology. The direct energy outputs might not be thousands of times different, but the information output of different EROI levels could be considered thousands of times different. Without a massive efficiency increase, society over the last few thousand years got much more complex in its output. I’m not trying to change terms here just to win an argument but trying to qualify the final results of different capacities of harnessing energy and technology.
I think this gets to the heart of the different arguments we’re making. I’m not in any way arguing that these old architectures are more common in total quantity than ARM. That difference in production is only going to increase. I wouldn’t have known the specific difference, but your data is great for understanding the scope.
My argument is that projects meant to make technology that has been manufactured for a long period of time and has been widely distributed more useful and sustainable are worthwhile, even when we have more common and efficient alternatives. This doesn’t in any way contradict your point about ARM architecture being more common or useful, and I’d be fully in favor of someone extending this kind of project to ARM.
In response to some of the other points: using an external crystal is just an example of how you could use available parts to maintain the Z80 if it needed fixing but you had limited resources. In overall terms, it might be easier to throw away an ARM microcontroller and find 100 replacements for it than even trying to use an external crystal for either one, but again I’m not saying it’s a specific advantage to the Z80 that you could attach a common crystal, just something that might happen in a resource-constrained situation using available parts. Better than the kid in Snowpiercer sitting and spinning the broken train parts at least.
Also, let me clarify the archive.org part. I wasn’t trying to demonstrate the best process for getting info. I just picked that because they have lots of scanned books to simulate someone who needed to look up how to program a part they found. I know it’s using ARM, but the reason I mentioned that had to do with the distribution of paper books on the subject and how they’re organized. The book I linked to starts with very basic concepts for someone who has never programmed before and moves quickly into the Z80, all in one old book, because it was printed in a simpler time when no prior knowledge was assumed.
There are plenty of paper books on ARM too, and probably easier to find, but now that architectures are becoming more complicated, you’re more likely to find sources online that require access to a specific server and have specialized information requiring a certain familiarity with programming and the tools needed for it. More is assumed of the reader.
If you were able to find that one book, you could probably get pretty far in using the Z80 without any familiarity with complex tools. Again, ARM is of course popular and well-documented, but the old Z80 stuff is still out there and simple enough to understand and even analyze with your bare eyes in more detail than you could analyze an ARM microcontroller without some very specific tools.
So all that info about ARM is excellent, but this isn’t necessarily a competition. It’s someone’s passion project who chose a few old, simple, and still-in-production technologies to develop a resilient and translatable operating system for. It makes sense to start with the earlier technology because it’s simpler and less proprietary, but it would also make sense to extend it to modern architectures like ARM or RISC-V. I wouldn’t be surprised if sometime in the future some person or AI did just that. This project just serves as a nice starting point for an idea on resilient electronics.
What's your point? A lot of simple devices are still being manufactured with cheap microcontrollers. Most of them don't even have an OS as such. If society collapses it's not like people are going to scavenge the microcontroller out of their washing machine and use it to reboot civilization.
In https://news.ycombinator.com/item?id=43484415 I outlined some extremely advantageous uses for automatic computation even in unlikely deep collapse situations, for most of which the microcontroller out of your washing machine (or, as I mention in https://news.ycombinator.com/item?id=43487644, your disposable vape or USB charger) is more than sufficient if you can manage to reprogram it.
Even if your objectives are humbler than "rebooting civilization" (an objective I think Virgil opposes), you might still want to, for example, predict the weather, communicate with faraway family members, automatically irrigate plants and build other automatic control systems, do engineering and surveying calculations, encrypt communications, learn prices in markets that are more than a day's travel away, hold and transmit cryptocurrencies, search databases, record and play back music and voice conversations, tell time, set an alarm, carry around photographs and books in a compact form, and duplicate them.
Even a washing-machine microcontroller is enormously more capable of these tasks than an unaided human, though, for tasks requiring bulk data storage, it would need some kind of storage medium such as an SD card.
A pressing need for processing data post-collapse is weather forecasting. Knowing the upcoming weather can help avoid crop failures from planting at the wrong time. Of course the data aggregation would also be very challenging as you need data from remote sites for good forecasting.
It might also be nice to know where (and when) groundwater is safe to drink
Our descendants will perhaps not be thrilled to learn about the "time bombs" we have left them, steadily inching into aquifers or up into surface water. That is, of course, if they are not too distracted locating water of even dubious potability to care.
They did weather forecasting before computers (that was literally my grandpa's job during WWII).
Weather forecasting before computers already relied on rapid telecommunications of low-bandwidth digital data (temperature, pressure, humidity, wind speed and direction, and precipitation) from a network of weather stations. Digital telecommunications is something that computers and radios can provide at enormously lower cost than networks of telegraph cables. See https://news.ycombinator.com/item?id=43484415 for details.
Did your grandpa do it at the same scale, speed, and with the same accuracy as modern day weather forecasting?
> Did your grandpa do it at the same scale, speed, and with the same accuracy as modern day weather forecasting?
No. But if you're in a "post collapse" situation, building 8-bit computers from scavenged parts to run FORTH, you aren't either.
I think the goal is that on the scale of grandpa <-> super computer, you end up somewhere in the middle, rather than back to grandpa.
Even if it is at the speed, scale, and accuracy of grandpa, offloading the labor is valuable.
Even a little data from remote sites can provide a huge advantage for forecasting. Temperature, humidity, and air pressure, roughly three bytes, four times a day: 0.001 bps per weather station. (Precipitation and wind speed and direction are pretty useful, but worse cost-benefit.) And collection of that data is very much less labor-intensive when a microcontroller and radio does it for you.
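For concreteness, the arithmetic behind that figure (a quick Python check):

    # Per-station data rate: three one-byte readings, four times a day.
    bits_per_report = 3 * 8
    reports_per_day = 4
    print(bits_per_report * reports_per_day / 86400)   # ~0.0011 bits per second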
Other kinds of very-low-bit-rate telecommunications messages that are still extremely valuable:
"Lucretia gravely ill. Hurry."
"I-44 mile 451: bandits."
"Corn $55 at Salem."
"Trump died."
"Springfield captured."
"General Taylor signed ceasefire."
"Livingstone found alive."
The first of these inspired Morse to invent the telegraph; she died before the mail reached him. None of them are over 500 bits even in ASCII, and probably each could be encoded in under 100 bits with some attention to coding, some much less. 100 bits over, say, 2 hours, requires a channel capacity of 0.014 bits per second.
Even without advanced compression algorithms, you could easily imagine the corn message being, say, "<figs>!$05000<ltrs>ZCSXV" in ITA2 "Baudot" code: 14 5-bit characters, 70 bits.
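A quick Python check of those numbers, assuming the two-hour delivery window used above:

    ascii_bits = len("Corn $55 at Salem.") * 8   # 144 bits, well under 500
    ita2_bits  = 14 * 5                          # 14 five-bit ITA2 characters = 70 bits
    window_s   = 2 * 3600                        # deliver within two hours
    print(ascii_bits, ita2_bits)
    print(100 / window_s)                        # ~0.014 bps for a 100-bit message
    print(ita2_bits / window_s)                  # ~0.01 bps for the Baudot version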
Information theory shows that there's no such thing as being out of communication range; it's just a question of what the bit rate of the channel is. But reducing that to practice requires digital signal processing, which is many orders of magnitude more difficult if you are doing it with pencil and paper. It also benefits greatly from precise timekeeping, which quartz resonator crystals make cheap, reliable, robust, and lightweight.
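For reference, the underlying result is the Shannon–Hartley capacity formula,

    C = B \log_2(1 + S/N)

where B is the bandwidth in hertz and S/N is the signal-to-noise power ratio: as range grows and S/N shrinks, the capacity C falls toward zero, but it never actually reaches zero as long as any signal power arrives.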
Encryption is another case where an amount of computation that is small for a microcontroller can be very valuable, even if you have to transmit the encrypted message by carving it into wood with your stone knife.
The Bitcoin blockchain in its current form requires higher bandwidth than a weather station network, but still a tiny amount by current internet standards, about 12kbps originally, I think about 26kbps with segwit. Bitcoin (or an alternative with a longer block time) could potentially provide a way to transmit not just prices but actual payments under adverse circumstances. It does require that participants have enough computing power to sign transactions; I think it should be relatively resilient against imbalances of computation power among participants, as long as no 51% attack becomes feasible through collusion.
I wrote, "<figs>!$05000<ltrs>ZCSXV". This should have read, "<figs>!$05500<ltrs>ZCSXV".
Also, see https://news.ycombinator.com/item?id=43487785 for a list of other end-uses for which even a microcontroller would provide an enormous advantage over no electronics at all.
Bitcoin will be useless in that case. Half of the techbros wouldn't even survive with early-90's technology. With 80's technology, forget about something like properly learning Forth from Starting Forth and doing something useful with it.
Bitcoin already withstood a rapid withdrawal of more than half of the mining power over about a month, that time the PRC outlawed Bitcoin mining. And it also survived the relatively sudden collapse of Mt. Gox, which accounted for significantly more than half the trading at the time. And it survived its price collapsing by more than half in 24 hours, from over US$8000 to under US$4000. It seems to have pretty good survival characteristics.
In an environment where there isn't a world hegemon to run something like the post-Bretton-Woods system, international payments, if they are to happen at all, need to be settled somehow. The approach used for the 3000 years up to and including the Bretton Woods period was shipping gold and silver across oceans in boats. Before that, the Mediterranean international economy was apparently a gift economy, while intranational trade in Mesopotamia used clay bills of deposit.
In a hypothetical post-collapse future without a Singularity, there may be much less international trade. But I hope it's obvious that international trade covers a spectrum from slightly advantageous to overwhelmingly advantageous, so it is unlikely to disappear altogether. And Bitcoin has overwhelming advantages over ocean shipping of precious metals. For example, it can't be stolen in transit or lost in a shipwreck, and the latency of a payment is about half an hour rather than six weeks.
And all the blockchain requires to stay alive is about 26 kilobits per second of bisection bandwidth.
> Did you read the entire post?
I did. That's why I said it was a "more reasonable concept than most."
> why is having some type of computer a priority.
Mechanization and Automation.
It frees up time you would otherwise have to spend on labor. Age of energy and all that.
The main inputs for this, though, would be quite difficult to bootstrap. I'm talking about magnet wire enamel and electrical insulation.
You need that to make motors and actuators. You need computers for the control systems.
I never bothered with fancy acid-free paper. Modern paper has good longevity, but to be safe I use non-recycled, non-whitened (91 brightness) paper.
Guide with prong fasteners (I prefer white tape, full-size pages, double/triple, inside a sheet protector as covers): https://www.youtube.com/watch?v=KVnpnHWcE04
A different technique with brad fasteners: https://www.youtube.com/watch?v=vD3vWZ0I85g
Lots of fancy book-binding tutorials out there, but I suspect most people don't realize how simple a paperback can actually be.
> With an adjustable hole punch, prong fasteners, (optionally) duct tape for the spine, and (optionally) sheet protectors for the front and back covers, you can crank out a survival library of (open-source) books as fast as pages come out your Brother laser.
This is actually one of the few "post apocalyptic" computer ideas that actually makes some sense to me. Though it would probably still make more sense to pre-print that library than wait until conditions are difficult to do the printing.
Unless your plan is to assume availability of printer paper, you'd still need to store the same volume of blank paper as your library, and you'd be stuck trying to run a printer when things like electricity are much less available.
Basically it's cliff-notes from my shot at an "MVP of home hardcopy backups." No surprise, these simpler techniques (often seen when corporations had internal printing for employee handbooks and such) are better suited to home processing with minimal equipment. All you need is a three-hole punch.
It's not about achieving a "perfect" bookbinding (a real term), or for people who do bookbinding as a hobby. Instead it's a fast/easy/cheap technique for people who just want a hardcopy backup, without needing a special bookbinding vice in the house.
Three ring binders were my obvious first choice, but they're surprisingly costly, somewhat awkward to use, prone to tearing pages, and usually take more space on the shelf.
Hope that explains it better. Cheers
I'm still WAY better off with my solar panels and ALL the books on an external hard drive. I can't think of ANY situation where a print out of Wikipedia would be better than a digital version.
Wikipedia is an extreme example, since it's highly impractical to print the whole thing. OTOH printing your "Top 20" survival books is quick and affords a nice measure of system-level dissimilar redundancy (not to mention valuable barter goods :D).
> I can't think of ANY situation where a print out of Wikipedia would be better than a digital version.
An EMP destroys your electronics and makes accessing the digital version impossible?
In this scenario we have the print version AND the digital version.
You could also say that in the middle of a house fire, with the paper burning, the digital version would be better, but it's pointless to invent scenarios around the assumptions.
But the claim was that there is no scenario at all where print would be better than digital.
> I'm still WAY better off with my solar panels and ALL the books on an external hard drive. I can't think of ANY situation where a print out of Wikipedia would be better than a digital version.
I just gave a scenario where a print out of Wikipedia would be better than a digital version.
Of the bookbinding tips I gave, "obliterate your digital copy" was (and I didn't think I had to explain this) not one of the suggested steps. ;-)
Hence backup, not format shift.
> I'm still WAY better off with my solar panels and ALL the books on an external hard drive.
No you're not. One component in your setup gets fried and you lose access to everything.
Paper is its own reader device: it's far more resilient, because it has far fewer dependencies for use. Just think about digital archiving vs paper archiving for a bit, and it becomes clear.
> I can't think of ANY situation where a print out of Wikipedia would be better than a digital version.
Wikipedia would be of little practical value in a post collapse situation. And frankly, it's pretty terrible now for anything besides satisfying idle curiosity.
Whenever I read these kinds of collapse posts I wonder: where do the bodies go in this scenario? Because if that kind of collapse happens the management of seven billion corpses becomes pressing.
For french speaking people, I recommend "Ravage" by Barjavel. It's not realistic by any means, but it's a nice take on this very issue :)
The serious take: live in the countryside. This is an urban problem.
Depending on the source of their un-aliveness you have a few options:
1. they rot on the face of the ground and the carrion eaters will take care of it
2. they are already cinder/charcoal and the plants will take care of it
Given a land area of 510 million km², in the worst case, that's a sudden event producing 14 bodies per km², about 25 meters from one body to the next. If I were one of a small number of survivors in that situation, I'd dig a mass grave, dump 14 bodies into it, and enjoy my nice, sanitary square kilometer all by myself. Probably boil the water from the well. Until I die of starvation because I don't know how to hunt, or from an infected wound, or something.
In more plausible cases, we're talking about a population collapse where the deaths are either concentrated in cities, spread out over several years, or both. If they're concentrated in cities, maybe avoid the cities for three to six months until they finish rotting. If they're spread out over several years, those who die later will be able to bury those who die earlier; it only takes a few man-hours of labor to dig a grave.
Burn the bodies, let scavengers have at them, or put them in a river and let those downstream deal with them. Probably easier to burn or let scavengers have at them than bury them. Fewer calories burned on your part compared to digging.
Regarding hunting, if you're near a river and it isn't piling up with bodies from upstream, you can take up fishing. It's easier to learn from scratch on your own and requires less effort.
> 510 million km², in the worst case, that's a sudden event producing 14 bodies per km², about 25 meters from one body to the next.
That's the area of the planet; you only get that distribution if the event also redistributes the bodies evenly over the entire globe, including oceans. (Though I'm not sure how you go from 14/km^2 to 25 meter separation?)
> If they're spread out over several years, those who die later will be able to bury those who die earlier; it only takes a few man-hours of labor to dig a grave.
During the covid pandemic, which was around a single percentage point of the world population over a few years, there were reports in the UK and the USA of mass graves, normal death procedures being overwhelmed.
Global supply chain collapse is kinda the "mass starvation because farms can't get fertiliser to grow enough crops, nor ship them to cities" scenario. If you can't hunt or fish, you're probably one of the corpses (very few people will have the land to grow their own crops, especially without fertiliser).
You are right; the correct number was 148.94 million km², which produces 47 corpses per km² of land.
> Though I'm not sure how you go from 14/km^2 to 25 meter separation?
√14 ≈ 4 and I somehow managed to think that 1000m ÷ 4 = 25m. In fact that calculation should have given 250m, and √47 ≈ 7, and 1000m ÷ 7 ≈ 140m. So we're talking about on the order of a city block between corpses.
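As a quick check of the corrected numbers, here's a Python sketch, assuming corpses spaced on a uniform square grid:

    import math
    # density in corpses per km^2 -> nearest-neighbour spacing in metres
    def spacing_m(per_km2):
        return 1000 / math.sqrt(per_km2)
    print(round(spacing_m(14)))   # ~267 m, using the whole-surface figure
    print(round(spacing_m(47)))   # ~146 m, using the land-area-only figure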
> there were reports in the UK and the USA of mass graves, normal death procedures being overwhelmed.
Yeah, but it wasn't a question of corpses rotting in the streets despite all the survivors digging graves full-time; it was just a question of the usual number of gravediggers being unable to cope. If people had just dug graves themselves for their family members in their yards, the way they do for pets, it wouldn't have been a problem; but that was prohibited.
Anyway, I think people who worry about health risks from corpse pileups due to society collapsing are really worrying about the wrong thing. The corpses won't be piled up; at most, they'll be so far apart that you could walk for days without seeing one unless you're someplace especially flat or with an especially high density of corpses, and in realistic scenarios, they'll just be buried like normal, mostly over years or decades, not left out to rot en masse.
I’m most likely going to be one of the corpses, so not my problem.
Some people are afraid the world will end, others are afraid it won't. The latter type actively enjoy thinking about this sort of thing.
I think we'll see more of it as the boomers continue to age. Some folks mistake their own looming end for the end of all things.
Doom is useful. The reason you stockpile food or grow a crop in spring is to avoid the doom that would come winter, if you didn't. Fearing a possible bad future is fairly fundamental for survival.
I assume there's some significant hard-wiring for this type of emotion, with some people having a different baseline sensitivity to it. I also suspect the environment might push regional populations toward different sensitivities, depending on how harsh or unstable the environment is. I say "unstable" because I see doom as anxiety about a possibility. For example, in a region with a consistently harsh winter, doom has less use, because piling up food is just routine: everyone does it, it's obvious. But in an unstable environment, with a winter that's only sometimes harsh, you need to fear the possibility of a harsh winter. You're driven by the anxiety of what might be. You stockpile food even though you don't need it now: irrational in the instantaneous context, but rational in the long-term context. It's a neat future-looking emotion that probably evolved closely with intelligence.
I have a small farm, solar, a pond, and a ton of farming/gardening books. I for one would LOVE a PipBoy-like cyberdeck with a snapshot of the internet from 2025 when I'm hunkered down with my dogs, sheep, and goats.
I am not into this end-of-the-world collapse thing at all, but I like reading about the strong focus on simplicity and support for limited hardware. There is a lot of talk about keeping things simple in software, but very few act on it. I don't need the threat of a global disaster to be interested in running operating systems like that.
Been doing some retro hobby (16-bit, real-mode, 80286) DOS development lately. It is refreshing to look at a system and be able to at least almost understand all the things going on. It might not be the simplest possible system, not the most elegant CPU design, but compared to the bloated monsters we use today it is very nice to work with. DOS is already stretching the limits of what I can keep in my head and reason about without getting completely lost in over-engineered (and leaky) abstraction layers.
Now I want a scavengers' guide to identifying machines that might have a compatible microprocessor within them, because I haven't seen a Commodore or Apple II in a long time. Arcades with older cabinets obviously have them, but if most post-apocalyptic media are prophetic, they'll probably be occupied by a gang of young ne'er-do-wells. I suppose, thanks to the College Board, TI-83s (Z80) are still quite common in the US. Are there toys, medical equipment, or vending machines that still use these chips?
I wonder if ESP32s and Arduinos might be more commonly found, though I could see the argument that places with those newfangled chips may be more likely to become radioactive craters in some scenarios.
I tried to do something like that in my free time. ARMs are everywhere, PICs are very common as well: https://github.com/ninakali/chip_scavenger/tree/main/src
Hey, this is fantastic, thanks!
The Z80 is still an insanely popular microcontroller that can be found in so, so many devices. Head down to your local Walmart's toy aisle, open some of the electronics there (post-collapse, of course), and you'll certainly find a few.
That's not as easy as just dropping a full computer on your desk, but having a low power processor that's easy to find replacements for would be useful. That is, of course, if you spent the time pre-collapse to learn how to make a useful system out of those components, which I suspect is the real goal of Collapse OS.
Can you name two such devices? Because the only thing I've ever found a Z80 in is an S1 MP3 player. I've found ARMs, 8051s, and weird Epson microcontrollers they seem to have never published the instruction set for, but never a Z80.
According to [0], TI were launching new graphic calculators with Z80s as late as 2016 - the TI-84 Plus CE-T Python edition (which is still available on Amazon UK[1][2]!)
Not a hugely compelling argument for "the z80 is still a popular microcontroller", mind.
[0] https://en.wikipedia.org/wiki/Comparison_of_Texas_Instrument...
[1] https://www.amazon.co.uk/Texas-Instruments-TI-84-CE-T-Python...
[2] Mild cheating going on though because it runs the Python on an ARM copro.
Well, Z80-compatible but more efficient processors, sure. That pretty well counts because you can still run your own software on them, such as KnightOS, but KnightOS can't compile itself, and I don't think CollapseOS will run on a TI-84.
Collapse OS runs on the TI-84+ but doesn't implement mass storage on its flash yet, so it's not self-hosting on that target.
Thank you for the correction!
They only stopped making them last year. Someone was buying them, enough to keep the production lines going, and not to run CP/M.
TI uses them in one of their calculator lines. I've seen gobs of them as embedded controllers in various industrial systems (I have family in manufacturing). I understand a lot of coin-op games (think pinball or pachinko) use them. I've seen them in appliance controller boards (don't recall the brand).
Thanks! That sounds pretty plausible, except that the chips in TI's calculators were low-power Z80 clones, of which there are plenty still in production. Still, "various industrial control systems" is a far cry from "Head down to your local Walmart's toy aisle (...) and you'll certainly find a few."
> except that the chips in TI's calculators were low-power Z80 clones
So? That's like saying "there are no 8051s because Intel doesn't make them", even though there are millions if not billions of clones made every year. (And since you'll undoubtedly say "well, I haven't seen one": if your car tells you when your tire pressure is low, there are four 8051s in it.)
If we're talking about the ability to repurpose salvaged chips for new purposes, it barely matters what the instruction set is. What matters enormously about the 8051 in particular is that (1) by default it runs code from mask ROM you can't change, unlike the 8751, and (2) it has a pin called "external access" which lets you force it to run code from external SRAM or EPROM. Neither (1) nor (2) is generally true of the clones. What matters most then is whether you can figure out how to get it to run your code and toggle its pins. Can you load code via JTAG? UPDI? Changing an external ROM chip? Maybe it boots from an SPI Flash? That's what matters most.
(It also matters whether you need things like an external crystal or three power rails.)
I mean, it's convenient when you can use an existing compiler, or at least can find documentation for the instruction set; and 8-bit instruction sets and address buses are constraining enough that they can really make it hard to do things. But these are not nearly as important as being able to get your code running on the chip at all.
So, no, instruction-set-compatible clones (like the ones I mentioned I found in https://news.ycombinator.com/item?id=43488079, the day before your comment) are not interchangeable with Intel 8051s in the context of improvising computational capabilities out of garbage. Pinouts matter. Programming waveforms matter.
With respect to the Z80-clone TI calculators, in https://news.ycombinator.com/item?id=43488344 Virgil explained that they can in fact run CollapseOS, but can't yet self-host because CollapseOS can't yet access their Flash. If you want to use them to control motors or solenoids or something, you still need some pinout information, which I'm not sure we have.
Talk about missing the point, although you accidentally stumbled over it while ranting away...it doesn't matter if it's a clone or an original if you can still hack it.
What point were you trying to make? I may need it explained in a simpler way.
This link might be coming up in the context of https://news.ycombinator.com/item?id=43481590 which went a bit under the HN radar.
EDIT: oh, and in case the website goes over quota, try https://new.collapseos.org/ . I haven't thrown the DNS switch yet, but I expect my meager Fastmail hosting will bust today.
It's a nice article from Wired in the linked thread.
And from your website:
> What made me turn to the "yup, we're fucked" camp was "Comment tout peut s'effondrer" by Pablo Servigne, Éditions du Seuil, 2015.
That'll make for some light reading next time I head up north. Thanks for the recommendation.
I've been meaning to give this a try for a while - I have several of the "supported" systems and I've designed a couple of my own simple Z80 computers over the past few years. Being able to program AVRs from a scavenged e-waste Z80 system seems strangely compelling to me. There's a LOT of AVRs in e-waste.
But if you absolutely needed to depend on it, I think other technology might be better for the kinds of things we normally use microcontrollers for. If I need to control electronics where something runs for a minute, then waits for some condition so it can do something else until yet another condition, it's hard to imagine that a would-be apocalypse-ee would want to use a microcontroller rather than a more easily improvised mechanical cam timer. [0]
0 - https://en.wikipedia.org/wiki/Cam_timer
I'm interested to hear what kinds of things you find AVRs in! Are they reflashable?
Could be just about anything, from thermostats to CPAP machines to SFP fiber transceivers. Anything that needs to be just a little bit smart (but usually not internet connected) can have one. For hobbyists, the ATMega328P was the go-to for about 15 years, before ESP32 and Pi Pico took over more recently. I'm sure Arduino helped a lot with the AVR's proliferation.
There are no one-time programmable versions of AVR as far as I know, so they can all have their internal Flash reprogrammed.
Awesome, thanks! I know them mostly from hobbyist contexts, and I noted that Atmel kind of stopped updating them after the ATMega328PU (and then failed as a company), so I wondered if they were kind of a failed product line, because there weren't enough hobbyists to sustain the product line. It's nice to hear I was wrong!
Appropriately enough, the website is no longer accessible due to rate limiting, and the Internet Archive is down due to a power outage.
https://x.com/internetarchive/status/1905030204357214335
> Bandwidth Restricted
>The page you have tried to access is not available because the owner of the file you are trying to access has exceeded our short term bandwidth limits. Please try again shortly.
It seems it collapsed!
Well the load balancer is still up, but its just doing a redirect loop now.
I can't imagine that a single 302 request getting redirected ten times in a loop, up to the redirect limit, per visitor is good for bandwidth.
Sorry about the mess, I didn't anticipate the load. The website has been moved, but DNS propagation must occur. https://new.collapseos.org/ might work if your DNS still points to the old server.
Check out BunnyCDN. I've got a few no-frills sites running on it. They're cheap. They also have a paid, permanent cache. It's absorbing both burst and AI crawler traffic for me right now.
this line got me:
> under what conditions do you believe the civilization you're living under will collapse? [...] Oh, you just haven't made that list yet? [...] Or you could take the question from the other side and list the conditions that are necessary for your civilization to continue to exist. That's an interesting list too.
I've always dismissed collapse talk as "batshit crazy", as the author says. But he raises good points here. Maybe I'm not batshit crazy enough.
It still seems pretty crazy to me.
In the Wired article the author says he thinks climate change will disrupt trade routes, but obviously society would route around the damage as long as it remained profitable to do so. The only scenario in which this hypothetical makes sense would be mass casualty events like pandemics or wars that prevent travel and trade.
So we're talking about a hypothetical situation in which global trade has halted almost completely for some reason, and has been stopped for a very, very long time. This means that the majority of the world's population are either dead or dying, which means that YOU will be either dead or dying as well.
Even if we accept the premise (a tall ask) AND we assume you will happen to be one of the survivors because you're "special", wouldn't it make more sense to prep by building a cargo ship so trade can resume sooner than it does to build apocalypse proof micro-controller code?
Trade routes as we know them today are made possible by Pax Americana, which is fading as we speak.
Don't be so Amerigo-centrist. Europe and the Mediterranean have been trading for literal millennia. Guess why? Civilization was born there because the tribes could share knowledge and goods like crazy.
Meanwhile, America didn't even exist for the Romans.
I'm not well informed enough to have an opinion at that scale (few are). However, the potential death of Pax Americana implies that trade routes will be different in the future, not that people will inexplicably stop trading with one another completely. In fact, War is often a great stimulator of trade.
There was a lot of trade in the centuries up to WWII, though.
Yes, but not of the sophistication required to produce a typical ARM CPU.
You underestimate what the Roman, Spanish, and British empires could accomplish for their day.
They sucked at product development.
Well, we created America. At first the EULA looked right, but it didn't apply to all the consumers, such as non-WASP people, and especially not Black ones.
Later there were community patches, but it took a violent, litigious struggle until the 60's before consumer rights were almost fully respected.
But now they've made a hostile bid against the company and they are enshittifying America like in the old times.
America long predates the Roman Empire. Hell, Americans had already invented writing before the Roman Empire, maybe before the Roman Republic: https://en.wikipedia.org/wiki/Zapotec_script
I don't think that's correct. They didn't have the knowledge to produce a typical ARM CPU, or, for that matter, any CPU; they didn't know what computers were, or why they were important, nor did they have the quantum theory or materials science necessary to fabricate useful silicon chips. Probably a collapse would lose the knowledge locked up inside TSMC, Samsung, and Intel. But we'd still know about zone refining, ion implantation, self-aligning gates, hafnia high-κ dielectrics, RISC, superscalar processors, cache hierarchies, etc.
If we forget about typical ARM CPUs for the moment, and just look at ARM CPUs in general, the ARM 2 was supposedly 27000 transistors according to https://en.wikipedia.org/wiki/Transistor_count. If you had to hand-solder 27000 SOT23 transistors onto PCBs at 10 seconds per transistor, it would take you a couple of weeks of full-time work to build one by hand, and probably another week or two to find and fix the assembly errors. It would be maybe a square meter or two of PCBs. At today's labor prices such a CPU would cost on the order of US$5000. At today's 1.3¢ per MOSFET (LBSS84LT1G and 2N7002 from JLCPCB's basic parts list a few years ago), we're talking about US$400 of transistors.
(Incidentally, Chuck Moore's MuP21 chip was 9000 transistors, so we know how to make an acceptably convenient-to-program chip in a lot less space than the ARM. It just does less computation per cycle. A big chunk of the ARM 2 was the multiplier, which Moore left out.)
It probably wouldn't run at more than 5 million instructions per second (maybe 2 VAX MIPS, slower than a 386), and because it's built out of discrete power transistors, it would use a lot more power than the original ARM. But it would run ARM code, and the supply chain only needs to provide two types of MOSFET, PCBs, and solder.
US$5400 for a 2-VAX-MIPS CPU is not a competitive price for computing power in today's world, and if you want to drive the cost down, you need to automate, specialize, and probably diversify the supply chain. If you were building it out of 74HC00-class chips, for example, you'd probably need a dozen or so SKUs, but each chip would be equivalent to about 20 transistors, so you'd only need about 1400 chips, currently costing about 10¢ each, cutting your parts price to about US$140 and your assembly time to probably a day or two of work, so maybe US$500 including debugging. And your clock rates would be higher and power usage lower, because a gate input built out of 2N7002 and similar power MOSFETs will have a gate capacitance around 60pF, while a 74HC08 is more like 6pF. We're down to US$640, which is still far from economically competitive but sure looks a lot better.
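Here's a rough sketch (Python) of the parts-and-labor arithmetic above; the transistor count, soldering rate, and unit prices are the assumptions already stated, and debugging time and the price of labor are left out:

    # Hand-assembled ARM2-class CPU from discrete MOSFETs, per the assumptions above.
    transistors = 27_000
    soldering_hours = transistors * 10 / 3600        # 10 s each -> 75 hours
    mosfet_parts_cost = transistors * 0.013          # 1.3 cents each -> ~US$351

    # Same logic built from 74HC00-class chips: ~20 transistors' worth per
    # chip at ~10 cents each.
    chips = transistors // 20                        # 1,350 chips; "about 1400" above
    chip_parts_cost = chips * 0.10                   # ~US$135

    print(f"{soldering_hours:.0f} h soldering, ~US${mosfet_parts_cost:.0f} in MOSFETs")
    print(f"{chips} chips, ~US${chip_parts_cost:.0f} in logic chips")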
The 74HC08 and family are CMOS clones of the SN7400 series launched by Texas Instruments in 01966, at a time when most of the world electronics supply chain (both providing their materials and buying the chips to put into products) was inside the US. It couldn't have happened in Cameroon or Paraguay for a variety of reasons, one of which was that they weren't sufficiently prosperous due to a lack of international trade. But that's somewhat incidental—what matters for the feasibility is that the supply chain had the money it needed, not where that money came from. Unlike the SR-71 project, they didn't have to import titanium from Russia; unlike the US ten years later, they didn't have to import energy from Saudi Arabia.
Using surplus machinery and wafers from the existing semiconductor supply chain, Sam Zeloof has reached, in his garage, what he says is the equivalent of Intel's 10μm process from 01971: http://sam.zeloof.xyz/category/semiconductor/
On this basis, it seems to me that making something like the 74HC08 from raw materials is something that a dozen or so people could manage, as long as they had existing equipment. It wouldn't even require a whole city, much less a worldwide supply chain.
So why don't we see this? Why is it not happening if it's possible? Well, we're still talking about building something with 80386-like performance for US$700 or so. This isn't currently a profitable product, because LCSC will sell you a WCH RISC-V microcontroller that's several times that fast for 14¢ in quantity 500 (specifically https://www.lcsc.com/product-detail/Microcontrollers-MCU-MPU...), and it includes RAM, Flash, and several peripherals.
If you want to build something like the actual ARM2 chip from 01986, you'll need to increase transistor density by another factor of 25 over what Zeloof has done and get to a 2μm process, slightly better than the process used for the 8086 and 68000: https://en.wikipedia.org/wiki/List_of_semiconductor_scale_ex...
Now, as it happens, typical ARM CPUs today are 48MHz, and Dennard scaling gets you to 25–50MHz at around 800nm, like the Intel 80486 from 01989. So to make a typical ARM CPU, you don't have to catch up to TSMC's 6nm process. You can get by with an 800nm process. So you might need the work of hundreds or even thousands of people to be able to make something like a typical ARM CPU, and it would probably take them a year or two of full-time work. This works out to an NRE cost on the order of US$50 million. Recouping that NRE cost at 14¢ per chip, assuming a 7¢ cost of goods sold, would require you to sell 700 million chips. And, using an antiquated process like that, you aren't going to be able to match WCH's power consumption numbers, so you probably aren't going to be able to dominate the market to such an extent, especially if you're paying more for your raw materials and machinery.
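And the break-even arithmetic for that last figure, with the NRE, selling price, and cost of goods as assumed above:

    nre_usd = 50_000_000        # non-recurring engineering cost assumed above
    price, cogs = 0.14, 0.07    # per-chip selling price and cost of goods sold
    chips_to_recoup = nre_usd / (price - cogs)
    print(f"{chips_to_recoup / 1e6:.0f} million chips")   # ~714 million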
So it's possible, but it's unprofitable, because the worldwide supply chain can make a better product for a lower cost than this hypothetical Silicon River Rouge plant.
Make no mistake, though: if the worldwide supply chain were to vanish, typical ARM CPUs would be back in less than a decade. We're currently watching the PRC play through this in real time with SMIC. The USA kneecapped 天河-2, the top supercomputer on the TOP500 list, in 02015, and has been fervently attempting to cut off the PRC's semiconductor industry from the world supply chain ever since, on the theory that the US government should have jurisdiction over which companies inside the PRC are allowed to sell to the PRC's military forces. They haven't quite caught up, but with the HiSilicon CPUs used in Huawei's Mate60 cellphones, they've reached 7nm: https://www.youtube.com/watch?v=08myo1UdTZ8
I know that my own comment won't add real value to this conversation, but I wanted to take the time to say it anyhow: This kind of comment is the reason I come to HN. Thank you for taking the time to share your knowledge with us.
Aw, thanks! I hope that what I wrote was almost entirely correct, but HN unfortunately only allows two weeks for people to post corrections.
> mass causality events like pandemics or wars
Luckily neither of those things has happened in the last few years.
It is mostly handwringing about unrealistic scenarios, as these types generally assume that all militaries, emergency response groups, every book, factory, and source of expertise completely disappear and it's up to you, the lone hacker on a farm or something, to become the source of expertise through a squirrel-cache of carefully curated information. That's very unlikely to happen simultaneously for every single country, and if it does, it's probably an extinction event.
Targeting smartphones seems like a highly realistic way to make something like this work. The average westerner has like three of them lying around in desk drawers and they already have a number of useful peripherals attached. It is much less obvious, especially to the layman, how to turn the microcontroller in the coffee machine into something like a useful gate control device.
Wow. I love this.
Yes, it's kind of a LARP situation, but imagine a future scenario where some hacker (who is also physically resilient and well-protected in the face of apocalypse) has to figure out how to boot or get some system operating that might control solar panels. Not knowing the architecture, can you boot it up? Can you analyze existing binaries, ports, and other features and get it crudely operating? This sounds like a helluva video game.
you might enjoy Caves of Qud
Well they're certainly walking the walk by serving this website over plaintext HTTP/1.1
My way of securely accessing HTTP-only sites these days is to check if the site's been archived. Not sure if it's changed since, but here's a snapshot of the page on Jan 11th 2025[0].
[0] https://archive.md/Q207v
That's actually a pretty good idea, thanks.
FWIW I'm a bit torn on this. I think most websites should host both HTTP/2/3 with TLS, but also plaintext HTTP/1.1 as a fallback.
I think it's really cool to create minimalist OSes. Something about perfboard projects and old, slow CPUs tickles my buttons. But I'm struggling to come up with a use case for this project in the End Times. In theory, this allows me to flash a ROM connected to an old through-hole, 8/16-bit CPU like a Z80, 8086, 6809 and 6502. I guess my issue is why and how would I do that during the end of the world?
I can't think of a way to come into possession of MPUs like that without intentionally buying them ahead of time. And if I'm going to stockpile those, I might as well stockpile a more capable MCU or MPU instead and flash it with something else. 99.9% of what I'd want to do with minimalist computers in the apocalyptic wasteland would be just fine without an OS. Bare-bones MCUs work spectacularly for control systems, wireless cryptosystems, data logging, etc.
Maybe I didn't look hard enough in the README [1], but I don't see how I'd bootstrap a system like this without already having a much more capable system on standby. Which comes back to the issue of... why?
[1] https://git.sr.ht/~vdupras/collapseos
Collapse OS is fully self-hosting. Once you have such a system, you can improve it and deploy it elsewhere from within. But yes, your initial deployment will come from a POSIX machine. This is why I talk about two stages of collapse.
That's kind of a bummer, but still neat to have built-in self-hosting. I think I've seen videos of perfboard computers that allow manual data entry with pushbuttons and manual clocks. I could see that being extended to punch cards. (Not trying to be flippant; I think that could be an interesting extension to this sort of project. Have a bucket of parts, a perfboard, and some paper? Let's flash an operating system.)
It seems like any serious endeavor on this front would focus on ARM Cortex-M0, RISC-V, and Xtensa ESP cores. Those are the ones that can be recovered by the billions. You can recover the equivalent of tens of 1980s-era computers out of almost any consumer device these days; even lightbulbs often have one.
For $0.10 I can buy an MCU that can bit-bang a keyboard, mouse, sound, and VGA, with 2x the memory and 96 times the processing power of my old 6502-based computer. An ESP32 is much, much more capable, better than an old Pentium machine, and has WiFi, USB, Bluetooth, etc., and costs $0.70–$2 on a module. They can be found in home-automation lightbulbs, among other things.
Espressif has shipped over a billion ESP32 chips since the platform launched.
Sure, we should have a 6502 based solution, as it has a lot of software out there and a minimal transistor count, making it possible to hand-build. But for a long time we will be digging up esp32s and they are much more useful.
This is genius.
https://git.sr.ht/~vdupras/duskos/tree/master/item/fs/doc/de...
Where are we going to find engineers that know Forth post-collapse?
"By "scavenge-friendly electronic parts", I mean parts that can be assembled with low-tech tools. I mostly mean parts with a "through hole" mounting type (in opposition with "surface mount").
"But I do tons of hobbyist electronics with surface mount!", some could say. Yeah, sure, but how do you wire it? You order a PCB from OSH Park? That's not very scavenge-friendly. " - https://new.collapseos.org/why.html
Not that I totally disagree with this but see clay PCBs, another post-supply-chain-collapse electronics project https://media.ccc.de/v/38c3-clay-pcb#t=1689
That looks really cool, but as we see from the size of the traces, I doubt you can fit a modern surface mount chip on that.
Surely the easiest way to get a useful OS running on 8 bit CPUs would be to start with an existing one like CP/M that already has an ecosystem of applications and hardware designs including networking.
Perhaps I should drag my Osborne out of the cellar and see if the floppies still work.
Very cool! I asked about something similar in 2010 on HN: https://news.ycombinator.com/item?id=1396876
Funny to see how the comments haven't shifted (and have!) in the past 15 years.
Will it run on my Pip-Boy 3000 ?
I always knew FORTH would be the chosen one when all else fails.
I find this to be less unhinged than, say, TempleOS, but it still seems unnecessary to focus on outdated languages and ancient CPUs.
If/when civilization collapses, we will have zero problem scavenging for x86 CPUs.
Also, I forgot. The CollapseOS author and kragen might love it:
https://magic-1.org
Thanks! I don't remember if I've seen it before.
In case it's down for others: https://web.archive.org/web/20250221070009/http://magic-1.or...
"This web page is being served by a completely home-built computer: Bill Buzbee's Magic-1 HomebrewCPU. Magic-1 doesn't use an off-the-shelf CPU. Instead, its custom CPU is built out of ~200 74 series TTL chips."
"Magic-1 is running a new port of Minix 2.0.4, compiled with a retargeted LCC portable C compiler. The physical connection to the internet is done using a native interface based on Wiznet's w5300 TCP/IP stack."
While I hate to condone using TTL rather than CMOS, this is extremely cool!
The CPU is documented at https://homebrewcpu.com/. Unfortunately the TCP/IP information was only posted on Google Plus, which has now been memory-holed by Google.
There's a clone by Aidil Jazmi documented at https://www.aidilj.com/homemadecpu/.
Related. Others?
Running CollapseOS on an Esp8266 - https://news.ycombinator.com/item?id=38645124 - Dec 2023 (1 comment)
DuskOS: Successor to CollapseOS - https://news.ycombinator.com/item?id=36688676 - July 2023 (4 comments)
Collapse OS – Why? - https://news.ycombinator.com/item?id=35672677 - April 2023 (1 comment)
Collapse OS: Winter is coming - https://news.ycombinator.com/item?id=33207852 - Oct 2022 (2 comments)
Collapse OS - https://news.ycombinator.com/item?id=31340518 - May 2022 (8 comments)
Collapse OS Status: Completed - https://news.ycombinator.com/item?id=26922146 - April 2021 (2 comments)
Collapse OS – bootstrap post-collapse technology - https://news.ycombinator.com/item?id=25910108 - Jan 2021 (116 comments)
Collapse OS Web Emulators - https://news.ycombinator.com/item?id=24138496 - Aug 2020 (1 comment)
Collapse OS, an OS for When the Unthinkable Happens - https://news.ycombinator.com/item?id=23535720 - June 2020 (2 comments)
Collapse OS - https://news.ycombinator.com/item?id=23453575 - June 2020 (15 comments)
Collapse OS – Why Forth? - https://news.ycombinator.com/item?id=23450287 - June 2020 (166 comments)
Collapse OS – Why? - https://news.ycombinator.com/item?id=22901002 - April 2020 (3 comments)
'Collapse OS' Is an Open Source Operating System for the Post-Apocalypse - https://news.ycombinator.com/item?id=21815588 - Dec 2019 (3 comments)
Collapse OS - https://news.ycombinator.com/item?id=21182628 - Oct 2019 (303 comments)
This misses the point entirely. If you want access to digital tools after a collapse, then you design a (resilient, hard to break, easy to repair) transformer that can convert ANY kind of electricity into e.g. 12v DC. Pair it with a paper manual that describes a few ways to generate power from raw materials (diy batteries, ways to use copper wire if you can get some, etc).
Then keep some laptops in a waterproof box.
Old laptops, or old boards with CRTs. Sure, power is an issue, but with your transformer (which is present in most 80s computers) and large, easy-to-replace (and often even easy-to-make) components, that stuff stands a lot more chance of surviving decades of use. No chance with laptops; maybe if you pack 100 X220 ThinkPads? Those let you swap parts out. Still, they're not as robust as some of the 80s monsters I have, which got wet (roof leak), got too hot, and had parts break, but still work 40 years later, some with no fixes at all. Those CRT monitors are very robust as well, and easy-ish to repair (you can use them even with a lot of damage). I have enough computer, DIY, and farming experience to probably stay alive and kicking for a while if this happens, but I don't think I'm so interested in trying it.
The point is that the computer is the easy part. It's the fun part to think about, but it's not important for achieving the goal.
The hard part is powering it, when the infrastructure to generate clean electricity is gone. What will you plug your transformer into? So solve that, create robust sources for electric power, and the rest can be solved with a few bulk laptop purchases off ebay.
Not just power generation but also the calorific and opportunity cost of working on that (or indeed anything that does not immediately pay off in energy terms). Or else you die before your compiler is done ;)
If you have more than one survivor, you quickly need to learn to trade and cooperate to maximize your energy return, or else you all die, one by one.
This. Skill sharing among a small tribe is the most effective way to survive a catastrophe, and I am concerned modern societies are poorly conditioned to do it well.
Speaking from a North American perspective, kids are educated in how to succeed in a national/global economy, not how to build small communities and develop/share useful skills. TBH, the latter feels "obsolete" nowadays. Maybe that's a problem.
Generating power from trash is easier than you think: https://hackaday.com/2014/09/01/hydropower-from-a-washing-ma...
Right: now you have ~29V and a bunch of amps. With other trash combos, you have other voltages. And that's assuming you have a working voltmeter to even know what you have. Different trash will give you a different output, and what will you do with that?
Plug it into the 'universal transformer' I was talking about, and you're in business. Known power output (that won't fry your precious electronics) and you don't have to care much what the input is.
If you want to build electronics, such as your proposed 12-volt buck-boost converter, you'll find your job immensely easier if you can build them with microcontrollers. That goes double for test equipment. So I don't think it misses the point for that reason.
Breaking: hn commenters aghast and appalled that they aren't the target audience of a project.
Does it run on an IBM 5100?
You are John Titor and I claim my $5!