I had initially written a very snarky comment here, but this one [1] actually expresses my view quite well in a respectful way, I would answer with this. I guess the discussion can continue there as well.
because it seems like `kv.mp_binaryData` will now have a different size than it had before. That is, there will be a mismatch. Though it should not affect the `free` call.
I hope I'm missing something because I just dealt with the code for around 5 minutes.
That m_binarySize should have been removed; nothing referenced it beyond its own code. I knew it did not affect the free() call and left it. That entire KVS lib is a fine example of KISS, it's so small I can hold it in my head, and issues like that m_binarySize field are just left because they end up being nops.
Yes, and the general trend of falling traffic fatalities is because people are driving better, right? Nobody's perfect, most people are far from perfect, and if it's possible to automate things that let you do better, we should do that
Beware of automation that negates understanding. At some point, changes or maintenance requirements will need to revisit the situation. If it is wrapped in some time consuming complexity, it will just be thrown out.
I really consider the downvotes to be from people who want to wing it, not be serious developers, and who follow the parrot horde that marketing creates. This is obvious if you really understand what you are doing as a developer, which you should.
I downvoted because in my mind you are winging it. "Just give it back" works well for simple cases, I suppose.
We observe that engineering teams struggle to write correct code without tools helping them. This is just an unavoidable fact. Even with tools that are unsound we still see oodles of memory safety bugs. This is true for small projects run by individuals up to massive projects with hundreds or thousands of developers. There are few activities as humbling as taking a project and throwing the sanitizers at it.
And bugs aren't "well you called malloc at the top of the function and forgot to call free at the bottom." Real systems have lifetime management that is vastly more complex than this and it is just not the case that telling people to not suck mitigates bugs.
I'm advocating to design, and then follow the design, and when the design is found lacking, redesign to include the new understanding. This writing-of-software career is all about understanding, and automating that understanding. Due to market pressures, many companies try to make do with developers who take shortcuts, and these shortcut takers are now the majority of developers, skewing the intellectual foundations of the entire industry. Taking shortcuts does not negate the fact that taking a shortcut is short-sheeting one's understanding of what is actually occurring in that situation. These shortcuts are lazy non-understandings, and that harms the project, its architecture, and increases the cognitive load on maintenance. It's creating problems for others and bailing, hoping you're not trapped maintaining the complex mess.
And I'm telling you that designing an application with a coherent memory management plan still leads to teams producing errors and bugs that are effectively prevented with sound tools. Soundness is not a shortcut.
You can consider whatever you want. That doesn't make it accurate. The reason I downvoted was for unnecessary inflammatory language. Your point would have been better without it (more likely to be heard by the people you claim to be talking to, at a minimum).
If you're actually trying to talk to people, if you're not just here to say "I'm smart and you're stupid" to gratify your ego, then why talk in a way that makes other people less likely to listen?
Why do you use pointers and a high level language like C when you could just write assembly and load and unload all your instructions and data into registers directly. Why do you need functions? You could just use nothing but JMP instructions. There's a whole lot of stuff that C handles for you that is completely unnecessary if you really understood assembly and paid attention to what you're doing.
I'm from the era where, when I was taught Assembly, halfway through the class we'd written vi (the editor), and by the end of that one semester we had a working C compiler. When I write C, I drop into Assembly often, and tend to consider C a macro language over Assembly. It's not, but when you really understand it, it is.
Really, don't do this, it's a portability and safety nightmare (aside from C not being memory safe already).
C programmers are better off with either of these two techniques:
* Use __attribute__((cleanup)). It's available in GCC and Clang, and we hope will be added to the C spec one day. This is widely used by open source software, eg. in systemd.
* Use a pool allocator like Samba's talloc (https://talloc.samba.org/talloc/doc/html/libtalloc__tutorial...) or Apache's APR.
(I didn't include using reference counting, since although that is also widely used, I've seen it cause so many bugs, plus it interacts badly with how modern CPUs work.)
My C programs never consumed gigs of memory. So I (like many others, I assume) made a memory manager and never freed anything back to the system. You'd ask it for memory, and it kept lists of the various sizes it had allocated and returned a block of the size you needed so it could be re-used. Freeing and allocating is slow and error-prone, so just avoid it!
Well, it's basically an implementation of a memory allocator.
But how did you determine what you could re-use? That's the hard problem, one that's equivalent to calling free() at the right time.
When something would normally be freed, it calls the memory manager's version of free(), which zeroes the memory and adds it back into the appropriate available list.
Isn't that just stacking your own allocator on top of the libc allocator, the same way the libc allocator is stacked on top of the OS's page mappings? It's often a sensible idea, of course, but I wouldn't describe it as "never freeing things", just substituting libc's malloc()/free() for your own. It's not like the libc allocator is doing something radically different from keeping lists of available and allocated memory.
You're just describing every allocator in the world, except many (most?) skip the zeroing part.
libc already does that. What is it that yours is adding?
I'd say 25 years ago you could write your own naive allocator, make just a couple of assumptions for your use case, and beat libc. But no more.
One of the selling points of Java in the 90s was the compacting part. Because in the 90s fragmentation was a much bigger problem than it is today. Today the libc allocators have advanced by maybe tens of thousands of PhDs' worth of theory and practice. Oh, and we have a 64-bit virtual address space, which helps with some (but not all) of the problems with memory fragmentation.
See this post from Ian Lance Taylor about why Go didn't even bother with a compacting GC: https://groups.google.com/g/golang-nuts/c/KJiyv2mV2pU?pli=1
A venerable and completely reasonable approach for resource-constrained environments and/or programs with very small memory requirements (kilobytes).
What are your issues with the memory requirements being small? One of the programs was a MUD that consumed a couple hundred megabytes, and I never had issues with it.
I mentioned gigabytes because of how mine specifically worked. It allocated chunks in powers of 2, so there was some % of memory that wasn't being used. For instance, if you only need 20 bytes for a string, you got back a pointer to a chunk of 32 bytes. Being just a game, and a side project, I never gave it much thought, so I'm curious to hear your input.
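For readers picturing the mechanism, here is a rough sketch of that kind of power-of-two recycling pool. The names (pool_alloc, pool_free) and the size-class table are invented, and a real version would stash the chunk size in a header rather than making the caller pass it to pool_free.
    #include <stdlib.h>
    #include <string.h>

    #define NUM_CLASSES 24                       /* 16 B .. 128 MB size classes */

    typedef struct freeblock { struct freeblock *next; } freeblock;
    static freeblock *freelist[NUM_CLASSES];     /* one recycle list per size class */

    static int class_of(size_t n) {              /* round the request up to a power of two */
        int c = 0;
        size_t sz = 16;
        while (sz < n && c < NUM_CLASSES - 1) { sz <<= 1; c++; }
        return c;
    }

    static void *pool_alloc(size_t n) {          /* e.g. n == 20 -> a 32-byte chunk */
        int c = class_of(n);
        if (freelist[c]) {                       /* reuse a chunk "freed" earlier */
            freeblock *b = freelist[c];
            freelist[c] = b->next;
            return b;
        }
        return malloc((size_t)16 << c);          /* grow the pool; nothing goes back to the OS */
    }

    static void pool_free(void *p, size_t n) {   /* zero the chunk and recycle it */
        int c = class_of(n);
        memset(p, 0, (size_t)16 << c);
        ((freeblock *)p)->next = freelist[c];
        freelist[c] = (freeblock *)p;
    }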
What makes you think that this approach is only useful for resource-constrained circumstances?
Yeah, it's reasonable. Unfortunately if you do it and you run tools like Coverity, it'll produce reams of complaints about how you're leaking memory :-( There was one project which was genuinely a short-lived program that never needed to free memory, but in the end I gave in and added free() statements everywhere. Otherwise we could never have got it into RHEL.
Why not just put all the free()s at the end of main() behind an #ifdef DEBUG or something?
> we hope will be added to the C spec one day
defer seems to be making significant progress (having a passionate and motivated advocate in Meneide, and a full TS)
Last time I looked this was (golang-like) function scoped, not { } scoped, which means it's a bad idea. My feedback was the committee should simply standardize the existing attribute / behaviour, as that is widely used already.
(EDIT: I'm wrong, see reply)
> Last time I looked this was (golang-like) function scoped, not { } scoped, which means it's a bad idea.
Might have been the previous attempt from years ago, because being block scoped (unlike go) literally has its own section in https://thephd.dev/c2y-the-defer-technical-specification-its...
defer is nice, but I really want the cleanup attribute since it could in theory be applied to the return type of a function. In other words you could have malloc return a pointer with the cleanup attribute that automatically frees it at end of scope if it's non-NULL. (And if you want to persist the pointer just assign to a different variable and zero out the one malloc gave you.)
> In other words you could have malloc return a pointer with the cleanup attribute that automatically frees it at end of scope if it's non-NULL.
That is not, as far as I know, how __attribute__((cleanup)) works. It just invokes the callback when the value goes out of scope. So you can't have malloc return an implicitly cleanup'd pointer unless malloc is a macro, in which case you can do the same with a defer block.
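For what it's worth, a declaration macro gets close to that. A rough sketch with invented names (SCOPED_BUF, free_charp); note the attribute still hangs off the declared variable, never off malloc's return value.
    #include <stdlib.h>

    static void free_charp(char **p) { free(*p); }   /* receives &buf when buf leaves scope */

    #define SCOPED_BUF(name, n) \
        char *name __attribute__((cleanup(free_charp))) = malloc(n)

    void demo(void) {
        SCOPED_BUF(buf, 128);
        if (!buf)
            return;
        /* ... use buf ... */
    }   /* free_charp(&buf) fires here, freeing buf unless it was NULLed out and handed off */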
I desperately hope it is not added, that Meneide throws a tantrum like he did about his Rust conference talk, and that he leaves the C committee forever. He is a malign influence who is on record as saying he dislikes C and wants it to be replaced by Rust.
> It's available in GCC and Clang, and we hope will be added to the C spec one day. This is widely used by open source software, eg. in systemd.
It's odd that the suggestion for a feature lacking in C is to use a non-standard but widely used and well-supported path. C's main selling point (IMO) is that it _is_ a standard, and relying on compiler vendor extensions kind of defeats the purpose of that.
Defer was almost part of the C23 standard, but didn't quite make it. They've slimmed and changed a few things, and it looks like it very much might be part of the next.
[0] https://thephd.dev/c2y-the-defer-technical-specification-its...
[1] https://thephd.dev/_vendor/future_cxx/technical%20specificat...
It's so widely used by OS software that you're likely using already, that it's unlikely to be removed and much more likely to be standardized. This is in fact how standardization ought to work - standardize the proven best practices.
I agree. But if we follow that logic then any compiler-specific feature of either GCC or Clang is fair game, even if it's not standard. MSVC doesn't support it when compiling in C mode, for example.
> But if we follow that logic then any compiler specific feature of either or clang is fair game, even if it’s not standard.
Well, yeah...
How do you think Annex K got in?
> relying on compiler vendor extensions kind of defeats the purpose of that.
Let's be honest, how many compilers are available, and how many of those would you actually use?
The answer isn't more than 4, and the 2 compilers you are most likely to use among those already support this and probably won't stop supporting it without a good alternative.
I like standardisation, but you have to be realistic when something helps you at little real cost beyond fighting your ideals about getting it into the standard first.
For me, the point of writing something in C is portability. There were C compilers 30 years ago, there are C compilers now, and there will almost certainly be C compilers 30 years from now. If I want to write a good, portable library that's going to be useful for a long time, I'll do it in C. This is, at least, the standard in many gamedev circles (see: libsdl, libfreetype, the stb_* libraries). Under that expectation, I write to a standard, not a compiler.
The bad news is that C23 broke a lot of existing code[1]. We had to do a lot of work in Fedora to fix the resulting mess. [1] https://gcc.gnu.org/gcc-15/porting_to.html#c23
For the cases in linked doc, does adding -std=gnu17 to packages not suffice?
I would consider the union initializer change (which requires adding -fzero-init-padding-bits=unions for the old behavior) much more hidden and dangerous, and it is not directly related to the ISO C23 standard.
It's true that it does, yes. However, that would still require changes to the build system. In any case, for the vast majority of the packages we decided to fix the code (if you think this is a fix!).
>if you think this is a fix
I would count it as doing maintenance work for the upstream, kudos for doing this!
For accessing any post-1970s operating system feature (e.g. async IO or virtual memory) you already cannot use standard C anymore (and POSIX is not the C stdlib).
The libraries you listed are all full of platform-specific code, and also have plenty of compiler-specific code behind ifdefs (for instance the stb headers have MSVC specific declspec declarations in them).
E.g. there is hardly any real-world C code out there that is 'pure standard C', if the code compiles on different compilers and for different target platforms then that's because the code specifically supports those compilers and target platforms.
My argument is that using these non-standard extensions to do important things like memory management in a C library is malpractice—it effectively locks down the library to specific C compilers. I'm sure that's fine if you're writing to clang specifically, but at that point, you can just write C++. libfreetype & stb_* are used and continue to be used because they can be relied on to be portable, and using compiler-specific extensions (without ifdefs) defeats that. If I relied on a clang-specific `defer`, I'm preventing my library from possibly being compiled via a future C compiler, let alone the compilers that exist now. To me, that's the point of writing C instead of C++ for a library (unless you're just a fan of the simplicity, which is more of an ideological, opinion-based reason).
If I touch C it has to have control over allocations, memory layout, and wrapping low level code into functions I can call from other languages.
I'd target the latest C standard and wouldn't even care to know how many old, niche compilers I'm leaving out. These are vastly different uses for C, and obviously your goals drastically change which standard or compiler you target.
There is quite a lot of embedded code that relies on obscure C compilers created and maintained by the CPU manufacturer.
But then again you are probably not doing a whole lot of heap management in embedded code.
That sounds like the kind of code that you want to be done with and never touch again. You can only dream of it not being buggy or catching up with the newest standard.
No it is more like the 10000 lines of code running in your washing machine, you will probably be updating it in the next year revision of the product.
It is quite common for this code to have all variables be global and just not have any heap allocations at all. Sometimes you don't even have variables in the stack either (besides the globals).
I much prefer the purely-mechanical washing machines for this reason.. Way less to go wrong..
If this is the argument then the actual standardisation is useless. I primarily use windows so I’m affected by one of the major compilers that doesn’t support this feature. This is no different to saying “chrome supports feature X, and realistically has y% market share so don’t let the fact that other browsers exist get in the way”.
Call it a posix extension, fair enough. But if your reason for writing C is that it’s portable, don’t go relying on non portable vendor specific extensions.
It's not, and even if there's something in the standard, your compiler of choice might not support it yet.
It's the same thing with the web and browser vendors, there's a constant mismatch, browsers propose and implement things and they may get standardized, and the standard dictates new requirements which might get implemented by all vendors.
The point of standardisation is defining behaviour for the things that are implemented as exploratory improvements and should be implemented on the more conservative compilers.
It's your choice whether to target the standard or a few selected compilers, there's a cost for both options between being late to improvements vs the possibility of needing to revisit your code around each of the "extensions" you decided to depend on.
If in certain projects portability is somehow of utmost importance, then any discussion around looking through the standard's black box to reach out for new stuff is kind of useless.
Standard C is only the least common denominator that compiler vendors agreed on, and the C standard committee works 'reactively' by mostly standardizing features that have been in common use as non-standard extensions - sometimes for decades before standardization happens (this is probably the main difference to the C++ committee).
The *actual* power and flexibility of C lies in the non-standard, vendor-specific language extensions.
C's main selling point is not standardisation. It was widely used before standardisation and only standardised because it was useful.
There's a complete implementation available at https://github.com/Snaipe/libcsptr
> __attribute__((cleanup))
Interesting. I'm not very proficient in C, this looks like some sort of finalizers for local variables?
Correct. You can use it in a simple way to free memory, but we've also used it to create scoped locks[1].
This being C, it's not without its problems. You cannot use it for values that you want to return from the function (as you don't want those to be freed), so any such variables cannot be automatically cleaned up on error paths either. Also there's no automated checking (it's not Rust!)
Note it's {...} scoped, not function scoped, which makes it more useful than Golang's defer.
[1] https://gitlab.com/nbdkit/nbdkit/-/blob/8b36e5a2ea331eed2a73...
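To make that concrete, here is a minimal GCC/Clang sketch of both uses mentioned above; the helper and macro names (free_charp, unlock_mutex, auto_free, scoped_lock) are invented here, not the ones nbdkit or systemd actually use.
    #include <pthread.h>
    #include <stdlib.h>

    static void free_charp(char **p) { free(*p); }              /* called with &buf */
    static void unlock_mutex(pthread_mutex_t **m) { pthread_mutex_unlock(*m); }

    #define auto_free   __attribute__((cleanup(free_charp)))
    #define scoped_lock __attribute__((cleanup(unlock_mutex)))

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    int work(void) {
        auto_free char *buf = malloc(64);
        if (buf == NULL)
            return -1;                                          /* nothing to leak */

        {
            scoped_lock pthread_mutex_t *held = &lock;
            pthread_mutex_lock(held);
            /* ... critical section ... */
        }                                                       /* unlocked at the closing brace */

        return 0;                                               /* buf freed automatically here */
    }
The cleanup function always receives a pointer to the annotated variable, which is why the handlers above take char ** and pthread_mutex_t **.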
> You cannot use it for values that you want to return from the function
I would say this is only half true. With some macro magic you can actually also return the values :)
https://github.com/systemd/systemd/blob/0201114bb7f347015ed4...
To be fair though, you probably meant without any such shenanigans.
While Go's rules effectively prevent usage of defer in loops, it is occasionally useful to write one there:
Even with scope-based defer, you can accomplish conditional defers easily enough. In a sane language, where conditions are expressions, you could just do:
defer if complex_nested_condition { cleanup() } else { noop() }
In Go, you could do:
    defer func(run bool) {
        if !run { return }
    }(condition)
Which admittedly wastes stack space with a noop function in the false case, but whatever.
I feel like the number of times I've needed conditional defers is almost zero, while the number of times I've had to make a new function to ensure scoping is correct is huge.
Of especial note, 'mu.Lock(), defer mu.Unlock()' not being scope-based is the largest source of deadlocks in code. People don't use 'defer' because the scoping rules are wrong, code panics before the manual unlock call, and then the program is deadlocked forever.
Around 1994, when I was a nerdy, homeschooled 8th grader teaching myself coding I came up with something I was inordinately proud of. I had gotten a book on PowerPC assembly so using Metrowerks CodeWarrior on my brand-new PowerMac 7100 I wrote a C function with inlined assembly that I called debugf. As I recall, it had the same signature as printf, called sprintf and then passed that resulting string to DebugStr. But the part I was proud of was that it erased itself from the stack so when the debugger popped up it was pointing to the line where you called debugf. I'm still proud of it :-).
I maintain a combined error and resource state per thread.
It is first argument to all functions.
If no errors, function proceeds - if error, function instead simply immediately returns.
When allocating resource, resource is recorded in a btree in the state.
When in a function an error occurs, error is recorded in state; after this point no code executes, because all code runs only if no errors.
At end of function is boilerplate error, which is added to error state if an error has occurred. So for example if we try to open file and out of disk, we first get error "fopen failed no disk", then second error "opening file failed", and then all parent functions in the current call stack will submit their errors, and you get a call stack.
Program then proceeds to exit(), and immediately before exit frees all resources (and in correct order) as recorded in btree, and prints error stack.
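A compressed sketch of that shape, with heavy assumptions: the commenter records resources in a btree, but a singly linked list stands in for it here, and all the names (state_t, state_alloc, state_exit) are invented.
    #include <stdlib.h>

    typedef struct resource {
        void *ptr;
        struct resource *next;
    } resource;

    typedef struct {
        int failed;              /* once set, every later call becomes a no-op  */
        const char *error;       /* first error message recorded                */
        resource *resources;     /* allocation registry, freed in reverse order */
    } state_t;

    static void *state_alloc(state_t *st, size_t n) {
        if (st->failed)
            return NULL;                         /* an error is already pending */
        void *p = malloc(n);
        resource *r = p ? malloc(sizeof *r) : NULL;
        if (!p || !r) {
            free(p);
            st->failed = 1;
            st->error = "out of memory";
            return NULL;
        }
        r->ptr = p;
        r->next = st->resources;                 /* newest first, so frees run in reverse order */
        st->resources = r;
        return p;
    }

    static char *load_config(state_t *st, const char *path) {
        if (st->failed)                          /* prior error: immediately return */
            return NULL;
        char *buf = state_alloc(st, 4096);
        (void)path;                              /* reading the file is elided in this sketch */
        /* on failure, record "loading config failed" so callers build up an error stack */
        return buf;
    }

    static void state_exit(state_t *st) {        /* just before exit(): free everything */
        for (resource *r = st->resources; r != NULL; ) {
            resource *next = r->next;
            free(r->ptr);
            free(r);
            r = next;
        }
        /* ...then print the accumulated error stack */
    }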
IMHO trying to emulate smart pointers in C is fixing a problem that shouldn't exist in the first place, and is also a problem in C++ code that uses smart pointers for memory management of individual objects.
Objects often come in batches of the same type and similar maximum lifetime, so let's make use of that.
Instead of tracking the individual lifetimes of thousands of objects it is often possible to group thousands of objects into just a handful of lifetime buckets.
Then use one arena allocator per lifetime bucket, and at the end of the 'bucket lifetime' discard the entire arena with all items in it (which of course assumes that there are no destructors to be called).
And suddenly you reduced a tricky problem (manually keeping track of thousands of lifetimes) to a trivial problem (manually keeping track of only a handful lifetimes).
And for the doubters: Zig demonstrates quite nicely that this approach works well also for big code bases, at least when the stdlib is built around that idea.
This is the way
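For anyone who hasn't seen one, a bare-bones arena along these lines; the names (arena_t, arena_alloc, arena_reset) are generic rather than Zig's or any particular library's API.
    #include <stdlib.h>

    typedef struct {
        unsigned char *base;
        size_t cap;
        size_t used;
    } arena_t;

    static arena_t arena_new(size_t cap) {
        arena_t a = { malloc(cap), cap, 0 };
        return a;
    }

    static void *arena_alloc(arena_t *a, size_t n) {
        size_t aligned = (a->used + 15u) & ~(size_t)15u;  /* 16-byte alignment */
        if (!a->base || aligned + n > a->cap)
            return NULL;                                  /* bucket exhausted */
        a->used = aligned + n;
        return a->base + aligned;
    }

    /* Discard every object in the bucket at once; no per-object free calls. */
    static void arena_reset(arena_t *a)   { a->used = 0; }
    static void arena_destroy(arena_t *a) { free(a->base); a->base = NULL; }
One arena per lifetime bucket, and the only per-object decision left is which bucket an object belongs to.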
It's a clever use of assembler, but in production code, it's much better to use a bounded model checker, like CBMC, to verify memory safety across all possible execution paths.
Oh well, maybe we'll soon have `defer`? [0]
[0] https://thephd.dev/c2y-the-defer-technical-specification-its...
That's sad. Having migrated from C++ to golang a few years ago, I find defer vastly inferior to C++ destructors. Rust did it right with its Drop trait; I think it's a much better approach.
The proposed C defer is scope-based unlike Go. So in the spirit of OP article, you can basically hand roll not only C++ destructor as defer {obj.dtor()} but also Rust Drop as defer {obj.notmoved() ? Drop()}
What do you mean defer isn't scope based in Go?
(not super experienced Go developer)
In Go, defers are function scoped not block scoped.
I mean, you just write all your scopes as `(func() { })()` in go, and it works out fine.
Adding `func() {}()` scopes won't break existing code usually, though if you use 'break' or 'continue' you might have to make some changes to make it compile, like so:
https://go.dev/play/p/_Gq4QYtyMmp
see, no other issues, works exactly like you'd expect
Right, that makes more sense.
mentioned in the article at [1]
[1] https://thephd.dev/c2y-the-defer-technical-specification-its...
Although that is true, the author has expounded at length on the unsuitability of RAII to the C programming language, and as a big fan of RAII I found the explanations convincing.
Even Zig which is extremely against "Hidden control flow" (so no operator overloading, etc.) added in the "defer" feature.
What's "drop trait" for C? There are no any traits in C.
Smart pointers in C often feel like trying to force a square peg into a round hole. They’re powerful, but without native language support like C++, they can lead to more complexity than they solve.
I've heard enough "C is superior to C++" arguments from game developers who then go and use header structs for inheritance, or X macros, enums, and switch statements for virtual functions, to know that more complexity isn't an issue as long as people feel clever and validated.
I don't think they do those things for validation. They do those things for control. Even C++ game developers create their own entire standard library replacements.
Hijacking the return address can only be done if you know you actually have a return address, and a reliable way to get to that return address. Function inlining can change that, adding local variables could change that, omitting the frame pointer, etc.
It would also need to be a function that will truly be implemented as one following the ABI, which usually happens when the function is exported. Oftentimes, internal functions won't follow the platform ABI exactly.
Just changing the compiler version is probably enough to break anything like this.
Save the return address hijacking stuff for assembly code.
---
Meanwhile, I personally have written C code that does mess with the stack pointer. It's GBA homebrew, so the program won't quit or finish execution, and resetting the stack pointer has the effect of giving you a little more stack memory.
Note that this will probably cause branch prediction misses, just like thread switching does - modern CPUs have a return address predictor which is just a simple stack. I don’t think you can avoid this without compiler support.
C is the LS engine of programming languages. People love to drop it in and mod it until it blows up.
Hacky and not really fit for production for more reasons than one, but clever and nice nonetheless. Good stuff.
Just C programmer's daily struggle to mimic a fraction of C++.
1) 2018
2) I recently discovered the implementation of free_on_exit won't work if called directly from main if gcc aligns the stack. In this case, main adds padding between the saved eip and the saved ebp, (example). I think this can be fixed some tweaking, and will update this article when it is fixed.
I do not believe the article was updated, suggesting that the "tweaking" was far more complex than the author expected...
...which doesn't surprise me, because the overall tone is one of a clever but far-less-experienced-than-they-think programmer having what they think is a flash of insight and realizing thereby they can solve simply a problem that has plagued the industry and community for decades.
The naive assumption is that shared_ptr is always better than manual tracking. It's not. Tracking and cleaning up resources individually is a burden at scale.
This article should have had the conclusion that this is why you should use an arena allocator.
Not to mention that any future CPU microcode update released in order to mitigate some serious CVE might break the entire product you've been shipping, just because it relied on some stack manipulation wizardry.
Or rather, given that every relevant C compiler is also a C++ compiler, just compile as C++ and use std::unique_ptr? I love C but I just can't understand the mental gymnastics of people who prefer this kind of hack to just using C++.
Unfortunately, that's not true.
C is not 100% compatible with C++.
There's a whole heap of incompatibilities that you can hit, that will prevent a lot of non-trivial C programs from compiling under C++. Things like character literals being a char in C++ and an int in C. Or C allowing designated initialisers for arrays, but C++ not.
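Concretely, for the two examples just mentioned:
    /* Valid C, rejected by C++ compilers: designated initializers for an array */
    int lut[8] = { [3] = 42, [7] = 99 };
    /* And character constants differ in type: 'a' is an int in C but a char in
       C++, so sizeof 'a' is sizeof(int) in one and 1 in the other. */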
There's a lot of either "I like to pretend C is simple and simple is good" or "C++ has things I don't like, so I will refuse to also use the things I do like out of spite". You see it all over the place here whenever C or C++ comes up.
this is way overkill
the way i do this in C looks like
    initialize all resource pointers to NULL;
    attempt all allocations;
    if all pointers are non-NULL, do the thing (typically calling another routine)
    free all non-NULL pointers
realloc(ptr, 0) nicely handles allocations and possible-NULL deallocations
might as well free the NULL pointers as well - this is totally valid C and can simplify the code
if you must have a `free_on_exit()` (for example, if you allocate a variable number of pointers in a loop) then build your own defer stack registering pointers using memory that you allocate
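A short sketch of that pattern (the three buffers and do_the_thing are placeholders); because free(NULL) is a no-op, a single unconditional cleanup block covers every partial-failure path.
    #include <stdlib.h>

    int do_the_thing(char *a, char *b, char *c);   /* hypothetical worker routine */

    int run(void) {
        int rc = -1;
        char *a = NULL, *b = NULL, *c = NULL;

        a = malloc(128);
        b = malloc(256);
        c = malloc(512);

        if (a && b && c)
            rc = do_the_thing(a, b, c);    /* only runs if every allocation worked */

        free(a);                           /* freeing NULL pointers is valid C */
        free(b);
        free(c);
        return rc;
    }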
See also the "arena allocator", which has been discussed here before: https://nullprogram.com/blog/2023/09/27/
I haven't used it personally yet, but it addresses the same issue with a different approach, also related to stack-like lifetimes.
I've used simple reference counting before, also somewhat relevant in this context, and which skeeto also has a nice post about: https://nullprogram.com/blog/2015/02/17/
I wonder how that affects compiler optimization
Let's just say there's a reason the author is compiling everything with -O0.
and stack protection cookies
Wow, an actual, purposeful, and quite general return-pointer-smashing gadget, built right into the program itself. Just what any program written in C needs.
Pointers and memory management is not hard. Stop this childish lie.
You allocate memory, and you remember to give it back. Failing to do that is not "difficult", it's lazy, it's not maintaining what you are doing.
This entire "memory safety" nonsense is market propaganda to cripple younger, inexperienced programmers, selling them complexity nonsense when all you need to do is remember you allocated, and deallocate when you are done.
Seriously, don't play the role of an idiot. That's what this entire line of reasoning is: idiot talk. Just remember to deallocate, just design systems that manage their memory correctly, do not "wing it", design and follow that design.
It is seriously easy when you do not wing it.
We have 50 years of experience of code telling us that, no, programmers are not consistently capable of avoiding memory safety bugs just by being good about it. Saying that it's just a failing of lesser programmers is the height of extreme arrogance, since I guarantee you that you've written memory safety vulnerabilities if you've written any significant amount of C code.
The problem is not that the rules are hard to follow--they're actually quite easy! The problem is that, to follow the rules, you need to set up certain invariants, and in any appreciably large codebase, remembering all of the invariants is challenging. As an example, here's a recent memory safety vulnerability I accidentally created:
    int map_value(map_t *map, void *key) {
        // This returns a pointer to the internals of map, so it's invalidated
        // any time map is changed. Therefore, don't change the map while this
        // value is live.
        int *value = add_to_map(map, key, default_value());
        // ... 300 lines of code later...
        // oops, need to recurse, this invalidated value...
        int inner = map_value(map, f(key));
        // ... 300 lines of code later...
        // Hi, this is now a use-after-free!
        *value = 5;
        return *value;
    }
It's not that I'm too stupid to figure out how to avoid use-after-frees, it's that in the course of refactoring, I broke an invariant I forgot I needed. And the advantage of a language like Rust is that it bops me on the head when I do this.
We have more than 50 years of experience in this, I've been coding for 50 years myself, and there was a huge industry when I started.
I still follow KISS, and if you're writing 600+ line functions, it is little surprise you forget things in the tightly written logic of code.
Is your function body really 600+ LOC?? If so, then I think I might have found your problem...
I once worked on a commercial product, a mix of C++ calling our own C libraries, that had 4000+ LOC in a single case statement. This was one of my first jobs out of school, so I was shocked.
As someone who has been designing, writing, and operating high-performance systems for decades, I can guarantee you that it does not boil down to "laziness".
Everyone starts with the best of intentions. malloc() and free() pairs. Then the inevitable complexity comes in - the function gets split into several, then across modules, and maybe even across systems/services (for other shareable resources).
The mental overhead of ensuring the releases grows. It _is_ hard beyond any trivial implementation, and that's most definitely not a lie.
Surprisingly "just design systems that manage their memory correctly", as you said, is a very legitimate solution. It just so happens that those systems need good language support, to offload a good chunk of the complexity from the programmer's brain to the machine.
No, it is laziness, at the system architecture level. I've been doing this for decades too, in major corporations, writing the big-name services that millions to billions of people use. The system architects are lazy: they do not want to do the accounting - and that is all it is, accounting for the resources one has and their current states, and integrating that accounting into the system - but few to none do, because it creates hard accountability, which they do not want. A soup of complexity is better for them; it grows their staff.
I've been playing this game long enough to see the fatal flaws built in, which grows complexity, staff, and the magnitude of the failures.
I guess you are "the one". This means you won't fail this stuff and this discussion is not for you, it is for the rest of us who would.
https://rachelbythebay.com/w/2018/04/28/meta/
I agree with the grandparent mostly because the article doesn't have any real world applications.
Forgetting to free memory that is allocated and then used inside of a function is the rarest kind of memory management bug that I have run into in large code bases. It's frequently obvious if you read the function and are following good practices by making the code clean and easy to read / follow.
The ones that bite are typically a pointer embedded in some object in the middle of a complicated data structure that persists after a function returns. Reference counting may or may not be involved. It may be a cache of some sort to reduce CPU overhead from recomputing some expensive operation. It's rarely a chunk of memory that has just been allocated. To actually recover the lost memory in those cases is going to need something more complicated like garbage collection.
But garbage collection is really hard to retrofit into C when libraries are involved as who knows what kind of pointer manipulation madness exists inside other people's code.
What would be really interesting is if someone made a C compiler that replaced pointers with fat pointers that could be used to track references and ensure they are valid before dereferencing. Sure, it would be an ABI bump equivalent to implementing a new architecture along with plenty of fixups in legacy C code, but we've done that before. The security pendulum has swung over to the point that rebuilding the world would be considered worthwhile as compared to where we stood 10-15 years ago. It'd certainly be a lot of work to get that working compared to a simple hack per the Fine Article, but it would have real value.
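A fat pointer in that scheme might look something like the sketch below - purely an illustration of the idea, not any real compiler's ABI. Here the pointer carries its bounds and every access is checked, and freeing through it poisons that copy; catching stale copies held elsewhere is exactly where the extra reference tracking described above would come in.

```c
/* "Fat pointer" sketch - illustrative only, not a real ABI.  The raw
 * pointer travels with its size; dereferences are bounds-checked, and
 * freeing through the fat pointer poisons it so later uses trap. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fatptr {
    void  *addr;   /* the raw pointer */
    size_t size;   /* bytes valid at addr (0 once freed) */
};

static struct fatptr fat_malloc(size_t size)
{
    struct fatptr p = { malloc(size), 0 };
    if (p.addr)
        p.size = size;
    return p;
}

static void *fat_at(struct fatptr p, size_t offset, size_t nbytes)
{
    if (!p.addr || nbytes > p.size || offset > p.size - nbytes) {
        fprintf(stderr, "invalid pointer access\n");   /* NULL, freed, or OOB */
        abort();
    }
    return (char *)p.addr + offset;
}

static void fat_free(struct fatptr *p)
{
    free(p->addr);
    p->addr = NULL;   /* poison this copy: later fat_at() calls abort */
    p->size = 0;
}

int main(void)
{
    struct fatptr s = fat_malloc(16);
    memcpy(fat_at(s, 0, 6), "hello", 6);        /* in bounds: fine */
    printf("%s\n", (char *)fat_at(s, 0, 6));
    fat_free(&s);
    /* fat_at(s, 0, 1) would now abort instead of touching freed memory */
    return 0;
}
```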
Yep, okay, I mostly agree with you on this.
I'm advocating not to wing it, design up front and then follow that design. When the design is found lacking, redesign with the entire system in mind. Basically, all I'm saying is do not take short cuts, they are not short cuts. Your project may finish faster, but you are harming yourself as a developer.
I had initially written a very snarky comment here, but this one [1] actually expresses my view quite well in a respectful way, I would answer with this. I guess the discussion can continue there as well.
[1] https://news.ycombinator.com/item?id=43387334#43388305
You can have situations like
https://github.com/bsenftner/kvs/blob/master/kvs/kvs.cpp#L72...
where it's then not clear why there is no call to
`kv.m_binarySize = byte_size`
after calling
`kv.mp_binaryData = (uint8_t*)malloc( sizeof(uint8_t) * byte_size );`
because it seems like the allocation behind `kv.mp_binaryData` will now have a different size than the one `kv.m_binarySize` still records. That is, there will be a mismatch. Though it should not affect the `free` call.
I hope I'm missing something because I just dealt with the code for around 5 minutes.
That m_binarySize should have been removed; nothing referenced it beyond its own code. I knew it did not affect the free() call and left it. That entire KVS lib is a fine example of KISS, it's so small I can hold it in my head, and issues like that m_binarySize field are just left because they end up being nops.
Yes, and the general trend of falling traffic fatalities is because people are driving better, right? Nobody's perfect, most people are far from perfect, and if it's possible to automate things that let you do better, we should do that.
Beware of automation that negates understanding. At some point, changes or maintenance requirements will need to revisit the situation. If it is wrapped in some time consuming complexity, it will just be thrown out.
I really consider the down votes to be people that want to wing it, not be serious developers, and follow the parrot horde that marketing creates. This is obvious if you really understand what you are doing as a developer, which you should.
I downvoted because in my mind you are winging it. "Just give it back" works well for simple cases, I suppose.
We observe that engineering teams struggle to write correct code without tools helping them. This is just an unavoidable fact. Even with tools that are unsound we still see oodles of memory safety bugs. This is true for small projects run by individuals up to massive projects with hundreds or thousands of developers. There are few activities as humbling as taking a project and throwing the sanitizers at it.
And bugs aren't "well you called malloc at the top of the function and forgot to call free at the bottom." Real systems have lifetime management that is vastly more complex than this and it is just not the case that telling people to not suck mitigates bugs.
I'm advocating to design, and then follow the design, and when the design is found lacking, redesign to include the new understanding. This software-writing career is all about understanding, and automating that understanding. Due to market pressures, many companies try to make do with developers that take shortcuts, and these shortcut takers are now the majority of developers, skewing the intellectual foundations of the entire industry. Taking a shortcut is shortchanging one's understanding of what is actually occurring in that situation. These shortcuts are lazy non-understandings, and they harm the project and its architecture and increase the cognitive load of maintenance. It's creating problems for others and bailing, hoping you're not trapped maintaining the complex mess.
And I'm telling you that designing an application with a coherent memory management plan still leads to teams producing errors and bugs that are effectively prevented with sound tools. Soundness is not a shortcut.
You can consider whatever you want. That doesn't make it accurate. The reason I downvoted was for unnecessary inflammatory language. Your point would have been better without it (more likely to be heard by the people you claim to be talking to, at a minimum).
If you're actually trying to talk to people, if you're not just here to say "I'm smart and you're stupid" to gratify your ego, then why talk in a way that makes other people less likely to listen?
You are correct, and I seriously need to work on my language usage.
Well, see, I never give in to the temptation of using inflammatory language. Never... um, never today... um, so far... I think...
We've all been there. (Well, maybe dang hasn't. Most of the rest of us have, though.)
Why do you use pointers and a high level language like C when you could just write assembly and load and unload all your instructions and data into registers directly. Why do you need functions? You could just use nothing but JMP instructions. There's a whole lot of stuff that C handles for you that is completely unnecessary if you really understood assembly and paid attention to what you're doing.
I'm from the era where, when I was taught Assembly, halfway through the class we'd written vi (the editor), and by the end of that one semester we had a working C compiler. When I write C, I drop into Assembly often, and tend to consider C a macro language over Assembly. It's not, but when you really understand it, it is.
And Rust developers drop down and manage memory directly when they need to and even inline assembly, sometimes.
Yes we know, all the world's problems can be solved with a rewrite in Rust.
André Malraux was right: "The 21st century will be religious or it will not be".
He just got the definition of religion wrong.
Performance art?