> If someone posts a patch or submits a PR to a codebase written in C, it is easier to review than any other mainstream language. There is no spooky at a distance. [..] Changes are local.
Lol, wut? What about race conditions, null pointers indirectly propagated into functions that don't expect null, aliased pointers indirectly propagated into `restrict` functions, and the other non-local causes of UB? Sadly, C's explicit control flow isn't enough to actually enable local reasoning in the way that Rust (and some functional languages) do.
I agree that Go is decent at this. But it's still not perfect, due to "downcast from interface{}", implicit nullability, and similar fragile runtime business.
I largely agree with the rest of the post! Although Rust enables better local reasoning, it definitely has more complexity and a steeper learning curve. I don't need its manual memory management most of the time, either.
I program in both C/C++ and Rust (the latter to a lesser degree currently). Rust is far superior to C in error locality, if you write idiomatic Rust. Most of the kinds of errors I make in C would have been caught at compile time in Rust.
Aside from Rust's ownership model, you can use the type system to enforce certain invariants. A typical example is that Rust uses different string types to force programmers to deal with the pitfalls. It turns out that a file name in an operating system might not be a valid string, and you can have valid Unicode text that is not a valid filename. Because Rust has distinct types for OS strings and internal Unicode strings, going from one to the other forces you to explicitly deal with the errors or choose a strategy for handling them.
Now, you could totally implement strings within Rust in a way that wouldn't force that conversion, and programmers would then yolo their way through any conversion, provided they even knew about the issue. And the resulting error would not necessarily surface where it originated. But that would be programming Rust like C.
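A minimal sketch of what that explicit conversion looks like (the file name here is invented for illustration): `OsString::into_string` fails rather than silently mangling non-Unicode names, and even the lossy fallback is a visible choice.

```rust
use std::ffi::OsString;

fn main() {
    // An OS-provided name is not guaranteed to be valid Unicode, so the
    // conversion to String is fallible and must be handled explicitly.
    let name = OsString::from("notes.txt");

    let printable = match name.into_string() {
        Ok(s) => s, // the bytes really were valid UTF-8
        // The Err case hands the original OsString back, so nothing is
        // silently lost; `to_string_lossy` is the explicit yolo option.
        Err(os) => os.to_string_lossy().into_owned(),
    };
    println!("{printable}");
}
```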
In my experience, many C libraries will happily gulp down any input of a remotely valid shape as if it were valid data, with many devs not even aware there were cases or conversions they should have handled. You can recognize exceptionally good C devs by the way they avoid those pitfalls.
And these skilled C devs are like seasoned mountaineers: they watch their every step carefully. But that doesn't mean the steep north face of the mountain is the safest, fastest, or most ergonomic way to get to the summit. And if you believe that C is that, you should be nowhere near that language.
And had GNU/FSF not made C the official language for FOSS software in their manifesto, back when C++ was already the main userspace language across Windows, OS/2, Mac OS, and BeOS, that "reason for C's endurance" would be much weaker than it already is nowadays, when C is mostly confined to UNIX/POSIX, embedded work, and some OS ABIs.
There's so many good high-level languages to choose from, but when you need to go low-level, there's essentially only C, C++, Rust. Maybe Zig once it reaches 1.0.
What we need isn't Rust without the borrow checker. It's C with a borrow checker, and without all the usual footguns.
It is a bit unclear to me why somebody who rejects C++ because "I once spent an entire year in the heaven of C++, walking around in a glorious daze of std::vector and RAII, before one day snapping out of it and realizing that I was just spawning complexity that is unrelated to the problem at hand." (which I can absolutely agree with!) is picking Rust from all options. If there is a language that can rival C++ in terms of complexity, it is Rust.
I've seen this idea from a few people and I don't get it at all.
Rust is certainly not the simplest language you'll run into, but C++ is incredibly baroque, they're not really comparable on this axis.
One difference which is already important, and I think will only grow more important over time, is that Rust's Editions give it permission to go back and fix things, so it does. In C++, by contrast, it's like venturing into a hoarder's home: you keep tripping over things that were abandoned in favour of a newer, shinier alternative.
I am a fan of Rust but it’s definitely a terse language.
However there are definitely signs that they have thought about making it as readable as possible (by omitting implicit things unless they’re overwritten, like lifetimes).
I’m reminded also of a passage in a programming book I once read about “the right level of abstraction”. The best level of abstraction is the one that cuts to the meat of your problem the quickest; spending a significant amount of time rebuilding the same abstractions over and over (which is unfortunately often the case in C/C++) is not actually simpler, even if the language specifications themselves are.
C codebases in particular, to me, are nearly inscrutable unless I spend a good amount of time unpicking the layers of abstractions that people need to write to make something functional.
I still agree that Rust is a complex language, but I think that largely just means it’s frontloading a lot of the understanding about certain abstractions.
Reading between the lines, the author is a Haskell fan. Haskell is another "complicated" language, but its complexity feels very different from C++'s. Perhaps I would describe it as "complexity that improves expressiveness". If you like Haskell for its expressiveness but dislike C++ for its complexity, I suspect Rust is a language you're going to like.
The two are incomparable in both quality and quantity. The complexity of Rust comes from the fact that it's solving complex problems. The complexity of C++ comes from a poorly thought out design and backwards-compatibility. (Not to slight the standards committee; they are smart people, and did the best with what they had.)
Another way of putting it: if you didn't care about backwards compatibility, you could greatly simplify C++ without losing anything. You can't say the same about Rust; the complexity of Rust is high-entropy, C++'s is low-entropy.
It's really not even remotely the same. C++ has literally more than 50 pages of specification on the topic of initialising values. All of them are mutually inconsistent, not subject to any overarching or unifying rule, and you have to keep them all in mind to avoid bugs or performance problems.
Swift is a great C++ and Rust alternative that doesn’t get enough attention outside of Apple platforms. It’s a performant, statically typed, compiled language that feels almost like a scripting language to write code in. It’s memory safe, cross platform, has a fantastic standard library, and has excellent concurrency capabilities. Even the non-Xcode tooling is maturing rapidly.
The big weak spot really is lack of community outside of Apple platforms.
Having developed a fair amount of expertise with both C++ and Rust, C++ is on a completely different level of complexity from Rust.
In Rust, for most users, the main source of complexity is struggling with the borrow checker, especially because you're likely to go through a phase where you're yelling at the borrow checker for complaining that your code violates lifetime rules when it clearly doesn't (only to work it out yourself and realize that, in fact, the compiler was right and you were wrong) [1]. Beyond this, the main issues I run into are Rust's auto-Deref seeming to kick in somewhat at random making me unsure of where I need to be explicit (but at least the error messages basically always tell you what the right answer is when you get it wrong) and to a much lesser degree issues around getting dyn traits working correctly.
By contrast, C++ has just so much weird stuff. There are three or four subtly different kinds of initialization going on, and three or four subtly different kinds of type inference. You get things like `friend X;` and `friend class X;` having different meanings. Move semantics via rvalue references are clearly bolted on after the fact, and it's somewhat hard to reason about the right things to do. It has things like the most vexing parse. Understanding C++ better doesn't give you more confidence that things are correct; it gives you more trepidation, as you know better how things can go awry.
[1] And the commonality of people going through this phase makes me skeptical of people who argue that you don't need the compiler bonking you on the head because the rules are easy to follow.
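A tiny sketch (not from the article) of the kind of code the borrow checker rejects, where it really is right: holding a reference into a `Vec` across a `push` would dangle if the buffer reallocates.

```rust
fn main() {
    let mut names = vec![String::from("a")];

    // This version does NOT compile: `first` borrows `names`, and
    // `push` may reallocate the buffer, leaving `first` dangling.
    //
    //     let first = &names[0];
    //     names.push(String::from("b"));
    //     println!("{first}");
    //
    // Ending the borrow before mutating (here, by cloning the element)
    // satisfies the checker and is genuinely sound:
    let first = names[0].clone();
    names.push(String::from("b"));
    println!("{first} / {} names", names.len());
}
```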
It's funny, because I have completely the opposite stance. When I code in Rust (mainly algorithms), I always struggle to translate what I want to do into what Rust allows me to do. And all this complexity has nothing to do with the problem.
>If there is a language that can rival C++ in terms of complexity
Fair, but this is relative. C++ has decades of baggage it needs to support, and IMO the real complexity of C++ isn't the language, it's the ecosystem around it.
>There is an apocryphal story about Euler in elementary school solving all the math problems that the teacher gave to the class in a jiffy, so the teacher tells him to sum up the numbers to a thousand to get him to stop pestering for more. The expectation was that Euler would go through the numbers "imperatively", like C, summing them up. Instead, what Euler did was discover the summation formula and solved it "declaratively" like Haskell, in one go, as an equation.
I've heard this story attributed to Gauss, not Euler.
The earliest reference is a biography of Gauss published a year after his death by a professor at Gauss' own university (Göttingen). The professor claims the story was "often related in old age with amusement and relish" by Gauss. However, it describes the problem simply as "the summing of an arithmetic series", without mention of specific numbers (like 1-100). Also, it was posed to the entire classroom, presumably as a way to keep them busy for a couple of hours, rather than as an attempt to humiliate a precocious individual.
>To paraphrase Norvig's Latency numbers a programmer should know, if we imagine a computer that executes 1 CPU instruction every second, it would take it days to read from RAM.
It's a detail, but this is a little bit off. RAM latency is roughly around ~100ns, CPUs average a couple instructions per cycle and a few cycles per ns.
Then in the analogy, a stall on RAM is about a 10 minute wait; not quite as bad as losing entire days.
In current machines, that's way off depending on how you choose to count "1 CPU instruction" for the metaphor.
Take Apple's latest laptops. They have 16 CPU cores: 12 of them clocking at 4.5 GHz and able to decode/dispatch up to 10 instructions per cycle, and 4 clocking at 2.6 GHz; I'm not sure about their decode/dispatch width, but let's assume 10 as well. Those decoder widths don't translate to that many instructions per cycle in practice, but let's roll with it, because the order of magnitude is close enough.
If the instructions are just right, that's 12 × 4.5 × 10 + 4 × 2.6 × 10 = 644 instructions per nanosecond, or roughly a million times faster than the 6502 in the Apple II! Computers really have got faster, and we haven't even counted all the cores yet.
Scaling those to one per second, a RAM fetch taking 100 ns would scale to 64,400 seconds, which is about 17.9 hours, approaching a full day.
Fine, but we forgot about the 40 GPU cores and the 16 ANE cores! More instructions per ns!
Now we're definitely into "days".
For the purpose of the metaphor, perhaps we should also count the multiple lanes of each vector instruction on the CPU, and the lanes of the GPU cores, as if they were separate instructions.
One way to measure that, which seems fair and useful to me, is to look at TOPS instead: tera-operations per second. How many operations can the processor complex do per second? I wasn't able to find good figures for the Apple M4 Max as a whole, only for the ANE component, for which 38 TOPS is claimed. For various reasons it's reasonable to estimate the GPU is in the same order of magnitude of TOPS on those chips.
If you count 38 TOPS as equivalent to "CPU instructions" in the metaphor, then scale those to 1 per second, a RAM fetch taking 100 ns scales to a whopping 44 days on a current laptop!
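For what it's worth, the arithmetic above can be checked mechanically (same assumed core counts, decode widths, and 38 TOPS figure):

```rust
fn main() {
    // Performance cores: 12 cores x 4.5 GHz x 10-wide decode (assumed).
    let perf = 12.0 * 4.5e9 * 10.0;
    // Efficiency cores: 4 cores x 2.6 GHz x 10-wide decode (assumed).
    let eff = 4.0 * 2.6e9 * 10.0;
    let per_ns = (perf + eff) / 1e9;
    println!("{per_ns:.0} instructions per nanosecond"); // 644

    // Scale to 1 instruction per second: a 100 ns RAM fetch becomes
    let scaled_secs = per_ns * 100.0;
    println!("{:.1} hours", scaled_secs / 3600.0); // ~17.9

    // Same scaling for the 38 TOPS claimed for the ANE:
    let tops_scaled = 38e12 * 100e-9; // metaphor "seconds"
    println!("{:.1} days", tops_scaled / 86_400.0); // ~44
}
```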
Very cool, I wish the author good luck! I've been writing a compiler in Rust for a few months now and I absolutely love it. The ways it solves most of the problems it addresses feel like the "right" way to do things.
There are some things that feel a little weird, like the fact that often when you want a more complex data structure you end up putting everything in a flat array/map and using indices as pointers. But I think I've gotten used to them, and I've come up with a few tricks to make it better (like creating a separate integer type for each "pointer" type I use, so that I can't accidentally index an object array with the wrong kind of index).
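The "separate integer type per pointer kind" trick might look something like this hypothetical arena sketch (`NodeId`, `Arena`, and friends are invented names):

```rust
// Nodes live in a flat Vec and refer to each other by index. A newtype
// per "pointer" kind stops us from indexing the node array with, say,
// a token index by accident: only a NodeId gets past the type checker.
#[derive(Clone, Copy, Debug, PartialEq)]
struct NodeId(u32);

struct Node {
    value: i64,
    parent: Option<NodeId>,
}

struct Arena {
    nodes: Vec<Node>,
}

impl Arena {
    fn push(&mut self, value: i64, parent: Option<NodeId>) -> NodeId {
        let id = NodeId(self.nodes.len() as u32);
        self.nodes.push(Node { value, parent });
        id
    }

    fn get(&self, id: NodeId) -> &Node {
        // A plain usize won't compile here; the wrapper must be unpacked.
        &self.nodes[id.0 as usize]
    }
}

fn main() {
    let mut arena = Arena { nodes: Vec::new() };
    let root = arena.push(1, None);
    let child = arena.push(2, Some(root));
    assert_eq!(arena.get(child).parent, Some(root));
    println!("child of {root:?} holds {}", arena.get(child).value);
}
```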
Rust is one of those languages that change how you think, like Haskell or Lisp or Forth. It won't be easy, but it's worth it.
The best way to use Rust is to circumvent use of the borrow-checker and lifetimes and use indices everywhere! Suddenly, it becomes more pleasant and easy to refactor :).
This hits close to home. TypeScript is also my language of choice for 90% of the software I write. I agree with the author that TypeScript is very close to the perfect level of abstraction, and I haven't seen another language with a type system that's nearly as enjoyable to use. Of course, TS (and by extension JS) obviously has its issues/complications. Bun solves a lot of the runtime-related annoyances, though.
For the other 10% software that is performance-sensitive or where I need to ship some binary, I haven't found a language that I'm "happy" with. Just like the author talks about, I basically bounce between Go and Rust depending on what it is. Go is too simple almost to a fault (give me type unions please). Rust is too expressive; I find myself debugging my knowledge of Rust rather than the program (also I think metaprogramming/macros are a mistake).
I think there's space in the programming language world for a slightly higher level Go-like language with more expressiveness.
I'm surprised OCaml doesn't have more market share here. Native, fast, robust type system, GC, less special syntax than Rust, less obtuse than Haskell.
I recently came to a production Typescript codebase and it took minutes to compile. Strangely, it could not behave correctly without a linter rule `no-floating-promises` but the linter also took minutes to lint the codebase. It was an astounding exercise in patience. Faster linters like oxlint exist but they don't have a notion of cross-file types so `no-floating-promises` is impossible on them.
The worst part is that `no-floating-promises` is strange. Without it, Knex (some ORM toolkit in this codebase) can crash (segfault equivalent) the entire runtime on a codebase that compiles. With it, Knex's query builders will fail the lint.
It was confusing. The type system was sophisticated enough that I could generate a CamelCaseToSnakeCase<T> type but somehow too weak to ensure object borrow semantics. Programmers on the codebase would frequently forget to use `await` on something causing a later hidden crash until I added the `no-floating-promises` lint, at which point they had to suppress it on all their query builders.
One could argue that they should just have been writing SQL queries and I did, but it didn't take. So the entire experience was fairly nightmarish.
I very much enjoy reading and writing TS code. What I don't enjoy is the npm ecosystem (and accompanying mindset), and what I can't stand is trying to configure the damn thing. I've been doing this since TSC was first released, and just the other day I wasted hours trying to make a simple ts-node command line program work with file-extension-free imports and no weird disagreements between the ts-node runner and the language server used by the editor.
And then gave up in disgust.
Look, I'm no genius, not by a long shot. But I am both competent and experienced. If I can't make these things work just by messing with it and googling around, it's too damned hard.
> To paraphrase Norvig's Latency numbers a programmer should know, if we imagine a computer that executes 1 CPU instruction every second, it would take it _days_ to read from RAM.
I think the author probably misread the numbers. If the CPU executed 1 instruction every second (scaling from roughly one instruction per nanosecond), it would take just one or two minutes to read from uncached RAM; no need to be overly dramatic.
Overall, this reads to me like a very young programmer trying to convince themselves to learn Rust because he heard it's cool, not an objective evaluation. And I'm totally on board with that, whatever convinces you, just learn new things!
> “Rust, from what I've heard, has a similar abstraction level as TypeScript, perhaps even closer to Haskell but that's good, I could do with a bit more help from the compiler. But it requires me to manage memory and lifetimes, which I think is something the compiler should do for me.”
The Rust compiler does manage memory and lifetimes. It just manages them statically at compile-time. If your code can’t be guaranteed to be memory-safe under Rust’s rules, it won’t compile and you need to change it.
Rust is an amazing language once you get over the initial mental hurdle. An important thing to go in with: 99% of programs should not require you to manage lifetimes (the `'a` notation). If you find yourself doing this and aren't writing an inner-loop, high-performance library, back up and find another way. Usually this entails using a Mutex or Arc (or other alternatives, depending on the scenario) to provide interior mutability or multiple references. This statement might not make sense now, but write it down for when it will.
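A sketch of what "reach for Arc/Mutex instead of lifetimes" means in practice: shared mutable state across threads with no `'a` annotations anywhere.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc gives shared ownership across threads; Mutex gives interior
    // mutability. No lifetime annotations required in user code.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    println!("total: {}", counter.lock().unwrap()); // total: 4
}
```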
I use Rust now for everything from CLIs to APIs, and feel more productive in it end to end than even in Python.
I've written some Gleam as an exploration, and I liked it, but the "not widely used" thing is a concern. I need there to be some well maintained libraries for enterprise stuff.
I know this is not the fault of the language, and that is unfortunate.
> While I can jump through hoops to compile JavaScript into a binary, such wouldn't feel "solid". And the very point of writing a native program in the first place is to make it feel solid
You can use Bun to compile to native binaries without jumping through hoops. It's not mature, but it works well enough that we use it at work.
Odin has been really growing on me lately as a language that checks all of those boxes. String types, first class allocators, built in tests, a batteries included philosophy, and ease of use are some of the things that really drew me towards it.
I really wanted to like Rust, and I wrote a few different small toy projects in it. At some point, knowledge of the language becomes a blocker rather than knowledge of the problem space, but this is a skill issue that I'm sure would lessen the more I used it.
What really set me off was how every project turned into a grocery list of crates that you need to pull in to do anything. It started to feel embarrassing to say that I was doing systems programming when any topic I googled in Rust would lead to a Stack Overflow answer saying to install a crate and use that. There seemed to be an anti-DIY approach in the community that finally drove me away.
We copied the awful name from Go … and the docs are wrong.
Five different boolean types?
Zero values. (Every value has some default value, like in Go.)
Odin also includes the Billion Dollar Mistake.
> There seemed to be an anti-DIY approach in the community that finally drew me away.
It's a "let a thousand flowers bloom" approach, at least until the community knows which design stands a good chance of not being a regretted addition to the standard library.
I really want to love rust, and I understand the niches it fills. My temporary allegiance with it comes down to performance, but I'm drawn by the crate ecosystem and support provided by cargo.
What's so damning to me is how debilitatingly unopinionated it is in situations like error handling. I've used it enough to at least appreciate its advantages, but strongly hinting that you include a crate (though it's not required) to help with error processing seems to mirror the inconvenience of having to include an exception type in another language. I don't think it would be the end of the world if it came with some creature comforts here and there.
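For the record, the crate-free route is entirely workable; this is roughly the hand-written equivalent of what crates like `thiserror` derive for you (the `ConfigError` example is invented):

```rust
use std::fmt;
use std::num::ParseIntError;

// One error enum per module, plus From impls so `?` converts
// automatically. This is the boilerplate the crates generate.
#[derive(Debug)]
enum ConfigError {
    Parse(ParseIntError),
    Missing(&'static str),
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Parse(e) => write!(f, "bad number: {e}"),
            ConfigError::Missing(k) => write!(f, "missing key: {k}"),
        }
    }
}

impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self {
        ConfigError::Parse(e)
    }
}

fn port(raw: Option<&str>) -> Result<u16, ConfigError> {
    let raw = raw.ok_or(ConfigError::Missing("port"))?;
    Ok(raw.parse()?) // ParseIntError converts via the From impl
}

fn main() {
    assert_eq!(port(Some("8080")).unwrap(), 8080);
    println!("{}", port(None).unwrap_err());
    println!("{}", port(Some("not-a-port")).unwrap_err());
}
```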
Oh, another one of those articles where people try to logically explain why they absolutely need to learn Rust and no other language will do. This time, even with religious connotations (https://en.wikipedia.org/wiki/God-shaped_hole). I mean, if you want to learn Rust, good for you, go ahead, no need to write a whole blog post rationalizing your decision!
I’m not sure what his abstraction column really means, nuts and bolts-wise. But, Fortran is native, you get to allocate your own memory, and it has object oriented features (maybe that’s abstraction).
They complain that Go is too low-level for their needs. Zig, with its explicit allocators, is definitely even lower-level.
Rust seems low-level too, but it isn't the same. It allows building powerful high-level interfaces that hide the complexity from you. E.g., RAII eliminates the need for explicit `defer` calls that can be forgotten.
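A minimal illustration of that point (the `Guard` type is made up): cleanup in `Drop` runs on every exit path, so there is no `defer` line to forget.

```rust
// RAII sketch: the guard releases its resource when it goes out of
// scope, including on early return.
struct Guard(&'static str);

impl Drop for Guard {
    fn drop(&mut self) {
        println!("released {}", self.0);
    }
}

fn do_work(fail: bool) -> Result<(), String> {
    let _lock = Guard("connection");
    if fail {
        return Err("early exit".into()); // Guard is still dropped here
    }
    println!("work done");
    Ok(())
} // ...and dropped here on the happy path

fn main() {
    do_work(false).unwrap();
    assert!(do_work(true).is_err());
}
```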
I'm interested in the long piecewise elimination section. Presumably that's where they explain why not use Ocaml/Nim/yaddah yaddah.
If I were to write such a list, the answer would probably come down to "because I wanted to pick ONE and be able to stick with it, and Rust seems solid and not going anywhere." As much as Clojure and Ocaml are, from what I've heard, right up my alley, learning all these different languages has definitely taken time away from getting crap done, like I used to be able to do perfectly well with Java 2 or PHP 5, even though those are horrible languages.
I like the "lower level of abstraction" of Go. It was a transition coming from writing Spring Boot Java code to having to actually implement the "magic", but I like that I can clearly see the control flow of things in Go.
Out of all the languages I've used, Go programs are the ones that have the highest percentage chance of working "first try". I think that has a lot to do with the plain and strongly typed style.
My two 'language poles' are Typescript as the 'north pole', and C as the 'south pole', with Python, C++, Zig (and to a lesser extent, Rust and Odin) placed somewhere along the latitudes.
I think that of all those options, Typescript and Zig feel closest related. Zig has that same 'lightness' when writing code as Typescript and the syntax is actually close enough that a Typescript syntax highlighter mostly works fine for Zig too ;)
Rust allows low level programming and static compilation, while still providing abstraction and safety. A good ecosystem and stable build tools help massively as well.
It is one of the few languages which managed to address a real life need in novel ways, rather than incrementing on existing solutions and introducing new trade offs.
Wow, a lot of stuff in here surprises me. C definitely can/does have spooky at a distance. Just share a pointer to a resource with something else and enjoy the spooky modifications. Changes are local as long as you program that way, but sometimes it can be a bit not-obvious that this is happening.
Regarding redefining functions, what could the author mean? Using global function pointers that get reassigned? Otherwise, redefining a function wouldn't affect other modules that are compiled into separate object files. Confusing.
C is simple in that it does not have a lot of features to learn, but because of e.g. undefined behavior, I find its very hard to call it a simple language. When a simple bug can cause your entire function to be UB'd out of existence, C doesn't feel very simple.
In Haskell, side effects actually _happen_ when the pile of function applications evaluates to IO values, but you can think about it very locally; that's what makes it so great. You could get those nice properties with a simpler model (i.e., don't make the language lazy, but still have explicit effects), but, yeah.
The main thing that makes Haskell not simple, IMO, is that it just has such a vast set of things to learn. There's the normal language-feature stuff (types, typeclasses, functions, libraries), but then a ton of special Haskell stuff: more advanced type-system tomfoolery; various language extensions, some of which are deprecated or superseded by better options nowadays (like type families vs. functional dependencies); hierarchies of unfamiliar math terms that are essentially required to actually do anything; and then laziness/call-by-need/non-strict evaluation, which is its own set of problems (space leaks!). And yes, unfamiliar syntax is another stumbling block.
IME, Rust is actually more difficult than Haskell in a lot of ways. I imagine that once you learn all of the things you need to learn it is different. The way I've heard to make it "easier" is to just clone/copy data any time you have a need for it, but, what's the point of using Rust, then?
I wonder if the author considered OCaml or its kin. I haven't kept track of what's available, but I've heard that better tooling exists now, and the syntax is better/more familiar. OCaml is a good language and a good gateway into many other areas.
There are some other langs that might fit, like I see nim as an example, or zig, or swift. I'd still like to do more with swift, the language is interesting.
When people say C is simple, besides everything that you point out, apparently they never learned anything beyond the classical K&R C book (ANSI/ISO C edition), and are stuck in a C89 mindset without any kind of compiler extensions.
> Wow, a lot of stuff in here surprises me. C definitely can/does have spooky at a distance. Just share a pointer to a resource with something else and enjoy the spooky modifications. Changes are local as long as you program that way, but sometimes it can be a bit not-obvious that this is happening.
I think the author means that the language constructs themselves have well-defined meanings, not that the semantics don't allow surprising things to happen at runtime. Small changes don't affect the meaning of the entire program. (I'm not sure I agree that this isn't the case for e.g. Haskell as well, I'm just commenting on what I think the author means.)
> IME, Rust is actually more difficult than Haskell in a lot of ways. I imagine that once you learn all of the things you need to learn it is different.
Having written code in both, Rust is quite a lot easier than Haskell for a programmer familiar with "normal" languages like C, C++, Python, whatever. The purity of Haskell is quite a big deal that ends up contorting my programs into weird poses; e.g., once you run into the need to compose monads, the complexity ramps way up.
> The way I've heard to make it "easier" is to just clone/copy data any time you have a need for it, but, what's the point of using Rust, then?
Memory safety. And the fact that this is the example of Rust complexity just goes to show what a higher level Haskell's difficulty is.
I've seen this argument for years. "C is an easy language and it's easy to code review it.".
Maybe, if you ignore all the off-by-one errors, double frees, overflows, underflows, and wrong API usage, and if you don't need to maintain a multiplatform build environment or support multiple architectures.
I mean, in this sense, assembly is even easier than C. Its syntax is trivial, and if that were the only thing that mattered, people would write assembly.
But they don't write assembly, because it's not the only thing that matters. So please stop considering C only in terms of easy syntax. Because syntax is the only thing that's easy in C.
> Rust, from what I've heard, has a similar abstraction level as TypeScript, perhaps even closer to Haskell but that's good, I could do with a bit more help from the compiler. But it requires me to manage memory and lifetimes, which I think is something the compiler should do for me.
Eh.... yeah? I suppose technically? But not _really_. Rust gives you the option to do that. But most programs outside of "I'm building an operating system" don't really require thinking too hard about it.
It's not like C where you're feeding memory manually, or like C++ where you have to think about RAII just right.
I was thinking that too. There are many cases where you do want to manage memory yourself, and in that case you should likely use Rust or maybe Zig if you can choose your own tool. But if you don't want to manage your own memory Nim works nicely, though IMO it requires adherence to a style guide more than most languages.
Depends what you do, but most of the time you don't need to do anything special about memory management in Rust. That is why people try to use it for things other than just systems programming.
> And the very point of writing a native program in the first place is to make it feel solid.
What does that mean, and what is it about native programs (i.e. programs AOT-compiled to machine code) that makes them feel solid? BTW, such programs are often more, not less, sensitive to OS changes.
> realizing that I was just spawning complexity that is unrelated to the problem at hand
Wait till you use Rust for a while, then (you should try, though, if the language interests you).
For me, the benefit of languages with manual memory management is the significantly lower memory footprint (speed is no longer an issue; if you think Haskell and Go are good enough, try Java, which is faster). But this comes at a price. Manual memory management means, by necessity, a lower level of abstraction (i.e. the same abstraction can cover fewer implementations). The price is usually paid not when writing the first version, but when evolving the codebase over years. Sometimes this price is worth it, but it's there, and it's not small. That's why I only reach for low level languages when I absolutely must.
> What does that mean, and what is it about native programs (i.e. programs AOT-compiled to machine code) that makes them feel solid?
I'm a little late here, but as a Java user, most of the time people tell me:
1. They just want to ship a binary. Most are not aware of jpackage, but correct me if I'm wrong: that just makes installers, right? I'm hopeful that the "hermetic" work from Project Leyden will help here.
2. They frequently complain about Java’s memory usage, but don’t really understand how setting the heap size works and what the defaults are. I’m also hopeful that ZGC’s automatic heap sizing will solve this.
With those two features I think the view of Java will change, as long as there is good build tooling for them. It would be nice to make that the default, but that would break many builds.
> such programs are often more, not less, sensitive to OS changes.
You may be technically correct that they are more sensitive to the kernel interface changes. But the point is that native, static binaries depend only on the kernel interface, while the other programs also depend on the language runtime that's installed on that OS. Typical Python programs even depend on the libraries being installed separately (in source form!)
> But the point is that native, static binaries depend only on the kernel interface
Many binaries also depend on shared libraries.
> while the other programs also depend on the language runtime that's installed on that OS
You can (and probably should) embed the runtime and all dependencies in the program (as is easily done in Java). The runtime then makes responding to OS selection/changes easier (e.g. musl vs glibc), or avoids less stable OS APIs to begin with.
Yeah, and those are also the opposite of "solid" :) That's why I qualified with "static". I'm so glad that Go and Rust promote static linking as the default (ignoring glibc).
> You can (and probably should) embed the runtime and all dependencies in the program (as is easily done in Java).
Congrats to the Java team and users, then. That makes it similar to the Go approach to binaries and the runtime, which I approve of.
> Yeah, and those are also the opposite of "solid" :)
So if that's what the author meant by "solid", i.e. few environmental dependencies, then it's not really about "native" or not, but about how the language/runtime is designed. Languages that started out as "scripting" languages often do rely on the environment a lot, but that's not how, say, Java or .NET work.
> I'm so glad that Go and Rust promote static linking as the default (ignoring glibc).
That doesn't work so well (and so usually not done) once you have a GUI, although I guess you consider the GUI to be part of the kernel.
You mean like it happens on many OSes that aren't GNU/Linux?
A language runtime remains one, independently of how it was linked into the binary.
A language runtime is the set of operations that support the language semantics, which in C's case is everything that happens before main(), threading support (since C11), floating point emulation (if needed), execution hooks for running code before and after main(), delayed linking, and possibly more, depending on the compiler-specific extensions.
You're being pedantic and trying to argue as if I misunderstand language runtimes and am speaking against language runtimes in general. That's not true. I qualified "the language runtime that's installed on that OS" from the beginning.
I don't have any negative experience with that one, but I remember having to manually install various versions of the Windows C++ runtimes to get an app working
> What does that mean, and what is it about native programs (i.e. programs AOT-compiled to machine code) that makes them feel solid? BTW, such programs are often more, not less, sensitive to OS changes
TFA also concludes
Since I want native code ...
I think by "solid" they mean as close to metal as possible, because, as you suggest, one can go "native" with AOT. With JS/TS (languages TFA prefers), I'm not sure how far WASM's AOT will take you ... Go (the other language TFA prefers) even has PGO now on top of "AOT".
> I think by "solid" they mean as close to metal as possible
A JIT compiler compiles your code to machine code just as an AOT compiler does, so I don't think that's what's meant here (and they don't mean the level of the source code because they consider Haskell to be "native").
> For me, the benefit of languages with manual memory management is the significantly lower memory footprint (speed is no longer an issue; if you think Haskell and Go are good enough, try Java, which is faster).
... what? Speed is no longer an issue? Haskell and Go? ??? How'd we go from manual memory management languages to Haskell and Go and then somehow to Java? Gotta plug that <my favorite language> somehow I guess...
It seems to me you have a deep misunderstanding of performance. If one program is 5% faster than another but at 100x memory cost, that program is not actually more performant. It just traded all possible memory for any and all speed gain. What a horrible tradeoff.
This thinking is typical in Java land [1]. You see: 8% better performance. I see: 28x the memory usage. In other words, had the Rust program been designed with the same insane memory allowance in mind as the Java program, it'd wipe the floor with it.
> How'd we go from manual memory management languages to Haskell and Go
Because that's what's discussed in the article, which discusses Go and Haskell specifically.
> In other words, had the Rust program been designed with the same insane memory allowance in mind as the Java program, it'd wipe the floor with it.
No, it wouldn't (I've worked with C and C++ for almost 30 years - including on embedded and safety-critical hard realtime software - with Java for over 20, and I work on the HotSpot VM). That's because tracing GCs convert memory to speed, but that's not the case for other memory management techniques. To see why, look at a highly simplified view of tracing collectors (modern collectors don't quite work like that, but the idea generalises): When the heap is exhausted, live objects are traced and compacted to the "top" of the heap. I.e. the cost of each collection depends only on the working set, i.e. the size of the objects that are still live. Because the working set is more-or-less a constant for a given program under a given workload, the larger the heap the less frequent the collections (each of a constant cost), and so the cost of memory management with a tracing collector goes to zero as the heap size grows. There are details in the actual implementations that are worse and others that are better than this idealised description, but the point is that tracing garbage collection very effectively converts RAM to speed, as its cost scales with the ratio working-set/heap-size. This is not the case for manual memory management or for primitive ref-counting GCs.
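The idealised model above can be sketched in a few lines (purely illustrative numbers, not measurements from any real collector):

```rust
// Toy cost model for the idealised tracing collector described above.
// Each collection traces/copies only live data (~working_set) and
// reclaims (heap_size - working_set) bytes of allocatable space, so
// the cost per byte allocated shrinks as the heap grows.
fn gc_cost_per_byte_allocated(working_set: f64, heap_size: f64) -> f64 {
    working_set / (heap_size - working_set)
}

fn main() {
    let ws = 1.0; // working set, in GB (assumed constant for the workload)
    let small = gc_cost_per_byte_allocated(ws, 2.0); // 1.0
    let large = gc_cost_per_byte_allocated(ws, 8.0); // ~0.143
    // Quadrupling the free space cuts the per-byte cost far more than 2x:
    assert!(large < small / 2.0);
    println!("heap 2 GB: {small:.3}, heap 8 GB: {large:.3}");
}
```

In the limit, as `heap_size` grows with `working_set` fixed, the cost goes to zero, which is the "RAM converted to speed" trade the comment describes.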
Of course, even when memory management cost is zero, there are still computational costs, but Java compiles to the same machine instructions as C for that kind of work (with some important caveats in certain situations that will soon be gone). It is true that even outside those specific areas you can, with significant additional effort, get a C program (or a program in any other low-level language) to be faster than a Java program, but that's due to the availability of micro-optimisations (which we don't want to offer in Java so as not to complicate the language or make it too dependent on a particular hardware/OS architecture), and that effect isn't large.
> I once spent an entire year in the heaven of C++, walking around in a glorious daze of std::vector and RAII, before one day snapping out of it and realizing that I was just spawning complexity that is unrelated to the problem at hand.
Good luck to the author with trying Rust. I hope he writes an honest experience report.
> That leaves me with the following options — C, C++, Go, Rust.
> Technically, there are a lot more options, and I wrote a long section here about eliminating them piecewise, but after writing it I felt like it was just noise.
Uh? I am guessing OP doesn't like virtual machines maybe, because Java and C# sound like something that fits what they want. Both support AOT compilation now, though...
Also the assumption about Typescript to Wasm being not "solid" seems wrong.
I mean I find it super weird that the author's only option for "native typescript" is Rust.
A nice thing about C is that you can be pretty confident that you know all major footguns (assuming you spent some time reading about it). With languages that are young or complex there is a much greater chance you’re making a terrible mistake because you’re not aware of it.
If anyone reads this and like me fears the difficulty and complexity of rust, but still wants a language that is competitive in performance, works for system level programming as well as something more general purpose definitely give Swift a go.
Over the last year I’ve started to write every new project using it. On windows, on linux and mac.
It is honestly a wonderful language to work with. Its mature, well designed, has a lot of similarities to rust. Has incredible interop with C, C++, Objective-C and even Java as of this year which feels fairly insane. It also is ergonomic as hell and well understood by LLM’s so is easy to get into from a 0 starting point.
> If someone posts a patch or submits a PR to a codebase written in C, it is easier to review than any other mainstream language. There is no spooky at a distance. [..] Changes are local.
Lol, wut? What about race conditions, null pointers indirectly propagated into functions that don't expect null, aliased pointers indirectly propagated into `restrict` functions, and the other non-local causes of UB? Sadly, C's explicit control flow isn't enough to actually enable local reasoning in the way that Rust (and some other functional languages) do.
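As a hedged illustration of the nullability point (the `shout` function here is hypothetical): in Rust, "possibly absent" is a separate type, so absence must be handled at the boundary instead of silently flowing into code that doesn't expect it:

```rust
// `s` can never be "null" here, so no defensive check is needed inside.
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    // e.g. the result of a failed lookup:
    let maybe_name: Option<&str> = None;
    // The compiler forces the caller to handle None right where the
    // value would enter `shout` - the reasoning stays local.
    let out = match maybe_name {
        Some(name) => shout(name),
        None => String::from("<missing>"),
    };
    assert_eq!(out, "<missing>");
}
```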
I agree that Go is decent at this. But it's still not perfect, due to "downcast from interface{}", implicit nullability, and similar fragile runtime business.
I largely agree with the rest of the post! Although Rust enables better local reasoning, it definitely has more complexity and a steeper learning curve. I don't need its manual memory management most of the time, either.
Related post about a "higher-level Rust" with less memory management: https://without.boats/blog/notes-on-a-smaller-rust/
I program in both C/C++ and Rust (the latter to a lesser degree currently). Rust is far superior to C in error locality, if you write idiomatic Rust. Most of the types of errors I make in C would have been caught at compile time in Rust.
Aside from Rust's ownership model, you can use the type system to enforce certain things. A typical example is that Rust uses different string types to force programmers to deal with the pitfalls. Turns out a file name in an operating system might not be a valid Unicode string, and you could have valid Unicode text that couldn't be a filename. Rust having different types for OS strings and internal Unicode means that when you want to go from one to the other, you need to explicitly deal with the errors or choose a strategy for how to handle them.
Now you could totally implement strings within Rust in a way that wouldn't force that conversion, and programmers would then yolo their way through any conversion, provided they even knew about the issue. And the resulting error would not necessarily surface where it originated. But that would be programming Rust like C.
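A minimal sketch of the conversion the comment describes, using std's actual `OsString` API:

```rust
use std::ffi::OsString;

fn main() {
    // A file name from the OS may or may not be valid Unicode.
    let os: OsString = OsString::from("hello.txt");

    // Option 1: handle the failure case explicitly.
    match os.to_str() {
        Some(s) => println!("valid UTF-8: {s}"),
        None => println!("not valid UTF-8, pick a fallback"),
    }

    // Option 2: choose a strategy up front (lossy replacement here).
    let lossy = os.to_string_lossy();
    assert_eq!(lossy, "hello.txt");
}
```

Either way, the decision point is visible at the type boundary rather than buried wherever the string is eventually used.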
In my experience, many C libraries will just happily gulp up any input of any remotely valid shape as if it were valid data, without many devs being even aware there were cases or conversions they would have had to deal with. You recognize exceptionally good C devs by the way they avoid those pitfalls.
And these skilled C devs are like seasoned mountaineers, they watch their every step carefully. But that doesn't mean the steep north face of the mountain is the safest, fastest or most ergonomic way to get to the summit. And if you believe that C is that, you should be nowhere near that language.
Even ignoring undefined behavior and nulls, everything in C is mutable. Action at a distance is basically the norm.
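For contrast, a tiny sketch of Rust's opposite default: bindings are immutable unless marked `mut`, so mutation is visible at the declaration site:

```rust
fn main() {
    let x = 5;     // immutable by default
    // x += 1;     // compile error: cannot assign to immutable binding
    let mut y = 5; // mutability is opted into, visibly
    y += 1;
    assert_eq!((x, y), (5, 6));
}
```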
The C myth keeps being perpetuated, sadly.
And had GNU/FSF not made C the official language for FOSS software in their manifesto, back when C++ was already the main userspace language across Windows, OS/2, Mac OS, and BeOS, that "It is the reason for C's endurance" would carry much less weight than it already does nowadays, when C is mostly confined to UNIX/POSIX, embedded, and some OS ABIs.
If anything I'd like an even lower-level Rust.
There's so many good high-level languages to choose from, but when you need to go low-level, there's essentially only C, C++, Rust. Maybe Zig once it reaches 1.0.
What we need isn't Rust without the borrow checker. It's C with a borrow checker, and without all the usual footguns.
That blog post is pretty close to describing Swift. :)
I was very surprised to read this too. Local reasoning is the antithesis of C.
Yeah, and C can get unreadable fast. Obfuscated C contest is a great example: https://www.ioccc.org
But that doesn't mean it's a good idea to use such style for PRs, lol.
It is a bit unclear to me why somebody who rejects C++ because "I once spent an entire year in the heaven of C++, walking around in a glorious daze of std::vector and RAII, before one day snapping out of it and realizing that I was just spawning complexity that is unrelated to the problem at hand." (which I can absolutely agree with!) is picking Rust from all options. If there is a language that can rival C++ in terms of complexity, it is Rust.
I've seen this idea from a few people and I don't get it at all.
Rust is certainly not the simplest language you'll run into, but C++ is incredibly baroque, they're not really comparable on this axis.
One difference which is already important and I think will grow only more important over time is that Rust's Editions give it permission to go back and fix things, so it does - where in C++ it's like venturing into a hoarder's home when you trip over things which are abandoned in favour of a newer shinier alternative.
You’re right that Rust is a ball of complication.
I am a fan of Rust but it’s definitely a terse language.
However there are definitely signs that they have thought about making it as readable as possible (by omitting implicit things unless they’re overwritten, like lifetimes).
I’m reminded also about a passage in a programming book I once read about “the right level of abstraction”. The best level of abstraction is the one that cuts to the meat of your problem the quickest - spending a significant amount of time rebuilding the same abstractions over and over (which, is unfortunately often the case in C/C++) is not actually more simple, even if the language specifications themselves are simpler.
C codebases in particular, to me, are nearly inscrutable unless I spend a good amount of time unpicking the layers of abstractions that people need to write to make something functional.
I still agree that Rust is a complex language, but I think that largely just means it's frontloading a lot of the understanding about certain abstractions.
Reading between the lines, the author is a Haskell fan. Haskell is another "complicated" language, but the complexity feels much different than the C++ complexity. Perhaps I would describe it as "complexity that improves expressiveness". If you like Haskell for its expressiveness but dislike C++ for its complexity, I suspect Rust is a language you're going to like.
The two are incomparable in both quality and quantity. The complexity of Rust comes from the fact that it's solving complex problems. The complexity of C++ comes from a poorly thought out design and backwards-compatibility. (Not to slight the standards committee; they are smart people, and did the best with what they had.)
Another way of putting it is, if you didn't care about backwards-compatibility, you could greatly simplify C++ without losing anything. You can't say the same about Rust; the complexity of Rust is high-entropy, C++'s is low-entropy.
It's really not even remotely the same. C++ has literally >50 pages of specification on the topic of initialising values. All of these are inconsistent, not subject to any overarching or unifying rule, and you have to keep it all in mind to not run into bugs or problematic performance.
Rust is a dead simple language in comparison.
Swift is a great C++ and Rust alternative that doesn’t get enough attention outside of Apple platforms. It’s a performant, statically typed, compiled language that feels almost like a scripting language to write code in. It’s memory safe, cross platform, has a fantastic standard library, and has excellent concurrency capabilities. Even the non-Xcode tooling is maturing rapidly.
The big weak spot really is lack of community outside of Apple platforms.
I'm not at all fluent in Rust, but I think C++ is not just complex; every C++ project is complex in a different way.
Having developed a fair amount of expertise with both C++ and Rust, C++ is on a completely different level of complexity from Rust.
In Rust, for most users, the main source of complexity is struggling with the borrow checker, especially because you're likely to go through a phase where you're yelling at the borrow checker for complaining that your code violates lifetime rules when it clearly doesn't (only to work it out yourself and realize that, in fact, the compiler was right and you were wrong) [1]. Beyond this, the main issues I run into are Rust's auto-Deref seeming to kick in somewhat at random making me unsure of where I need to be explicit (but at least the error messages basically always tell you what the right answer is when you get it wrong) and to a much lesser degree issues around getting dyn traits working correctly.
By contrast C++ has just so much weird stuff. There's three or four subtly different kinds of initialization going on, and three or four subtly different kinds of type inference going on. You get things like `friend X;` and `friend class X;` having different meanings. Move semantics via rvalue references are clearly bolted on after the fact, and it's somewhat hard to reason about the right things to do. It has things like most-vexing parse. Understanding C++ better doesn't give you more confidence that things are correct; it gives you more trepidation as you know better how things can go awry.
[1] And the commonality of people going through this phase makes me skeptical of people who argue that you don't need the compiler bonking you on the head because the rules are easy to follow.
Rust is a lot better at producing helpful error messages than any C++ I’ve seen.
It's funny because I have completely the opposite stance. When I code in Rust (mainly algorithms), I always struggle to turn what I want to do into what Rust allows me to do. And all this complexity has nothing to do with the problem.
>If there is a language that can rival C++ in terms of complexity
Fair, but this is relative. C++ has 50 years of baggage it needs to support, and IMO the real complexity of C++ isn't the language, it's the ecosystem around it.
Why would everyone call Rust a C+++ when it is obviously an OCaml/Erlang hybrid :D
>There is an apocryphal story about Euler in elementary school solving all the math problems that the teacher gave to the class in a jiffy, so the teacher tells him to sum up the numbers to a thousand to get him to stop pestering for more. The expectation was that Euler would go through the numbers "imperatively", like C, summing them up. Instead, what Euler did was discover the summation formula and solved it "declaratively" like Haskell, in one go, as an equation.
I've heard this story be accounted to Gauss, not Euler.
Yes, it's Gauss. In fact the technique is sometimes known as "Gaussian summation". American Scientist has an article where the author chases down early references to the story: https://www.americanscientist.org/article/gausss-day-of-reck...
The earliest reference is a biography of Gauss published a year after his death by a professor at Gauss' own university (Gottingen). The professor claims that the story was "often related in old age with amusement and relish" by Gauss. However, it describes the problem simply as "the summing of an arithmetic series", without mention of specific numbers (like 1-100). Also, it was posed to the entire classroom - presumably as a way to keep them busy for a couple of hours - rather than as an attempt to humiliate a precocious individual.
Yes. In Germany the formula n(n+1)/2 is actually called the Gaussian sum formula, or even the "small Gauss". [0]
[0] https://de.wikipedia.org/wiki/Gaußsche_Summenformel
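The pairing argument behind that formula fits in three lines:

```latex
\begin{aligned}
S  &= 1 + 2 + \dots + n \\
2S &= (1+n) + \bigl(2+(n-1)\bigr) + \dots + (n+1) = n(n+1) \\
S  &= \frac{n(n+1)}{2}
\end{aligned}
```

Each of the $n$ pairs sums to $n+1$, which is why the trick works in "one go" rather than imperatively.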
>To paraphrase Norvig's Latency numbers a programmer should know, if we imagine a computer that executes 1 CPU instruction every second, it would take it days to read from RAM.
It's a detail, but this is a little bit off. RAM latency is roughly around ~100ns, CPUs average a couple instructions per cycle and a few cycles per ns.
Then in the analogy, a stall on RAM is about a 10 minute wait; not quite as bad as losing entire days.
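Checking that arithmetic under the comment's rough, order-of-magnitude assumptions:

```rust
fn main() {
    // Assumptions from the comment above (illustrative, not measured):
    let cycles_per_ns = 3.0;
    let instructions_per_cycle = 2.0;
    let ram_latency_ns = 100.0;

    // Instructions the CPU could have retired while waiting on RAM:
    let missed = ram_latency_ns * cycles_per_ns * instructions_per_cycle; // 600

    // Scaled to "1 instruction per second", the stall lasts `missed` seconds:
    assert_eq!(missed / 60.0, 10.0); // 10 minutes, as the comment says
}
```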
In current machines, that's way off depending on how you choose to count "1 CPU instruction" for the metaphor.
Take Apple's latest laptops. They have 16 CPU cores, 12 of those clocking at 4.5 GHz and able to decode/dispatch up to 10 instructions per cycle. 4 of those clocking at 2.6 GHz, I'm not sure about their decode/dispatch width but let's assume 10. Those decoder widths don't translate to that many instructions-per-cycle in practice, but let's roll with it because the order of magnitude is close enough.
If the instructions are just right, that's 824 instructions per nanosecond. Or, roughly a million times faster than the 6502 in the Apple-II! Computers really have got faster, and we haven't even counted all the cores yet.
Scaling those to one per second, a RAM fetch taking 100ns would scale to 82400 seconds, which is about 22.9 hours, just short of a day.
Fine, but we forgot about the 40 GPU cores and the 16 ANE cores! More instructions per ns!
Now we're definitely into "days".
For the purpose of the metaphor, perhaps we should also count the multiple lanes of each vector instruction on the CPU, and lanes on the GPU cores, as if they were separate processing instructions.
One way to measure that, which seems fair and useful to me, is to look at TOPS instead - tera operations per second. How many floating-point calculations can the processor complex do per second? I wasn't able to find good figures for the Apple M4 Max as a whole, only the ANE component, for which 38 TOPS is claimed. For various reasons it's reasonable to estimate the GPU is the same order of magnitude in TOPS on those chips.
If you count 38 TOPS as equivalent to "CPU instructions" in the metaphor, then scale those to 1 per second, a RAM fetch taking 100ns scales to a whopping 43.9 days on a current laptop!
Very cool, I wish the author good luck! I've been writing a compiler in Rust for a few months now and I absolutely love it. The ways it solves most of the problems it addresses feel like the "right" way to do things.
There are some things that feel a little weird, like the fact that often when you want a more complex data structure you end up putting everything in a flat array/map and using indices as pointers. But I think I've gotten used to them, and I've come up with a few tricks to make it better (like creating a separate integer type for each "pointer" type I use, so that I can't accidentally index an object array with the wrong kind of index).
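A minimal sketch of that newtype-index trick (all names here are hypothetical):

```rust
// A dedicated index type: only a NodeId can index the node arena,
// so mixing it up with a raw usize (or another arena's index type)
// is a compile-time type error.
#[derive(Clone, Copy, Debug, PartialEq)]
struct NodeId(usize);

struct Node {
    value: i32,
    parent: Option<NodeId>, // "pointer" into the same flat array
}

#[derive(Default)]
struct Arena {
    nodes: Vec<Node>,
}

impl Arena {
    fn alloc(&mut self, value: i32, parent: Option<NodeId>) -> NodeId {
        self.nodes.push(Node { value, parent });
        NodeId(self.nodes.len() - 1)
    }
    fn get(&self, id: NodeId) -> &Node {
        &self.nodes[id.0]
    }
}

fn main() {
    let mut arena = Arena::default();
    let root = arena.alloc(1, None);
    let child = arena.alloc(2, Some(root));
    assert_eq!(arena.get(child).parent, Some(root));
}
```

The indices sidestep borrow-checker fights over cyclic structures while keeping the "wrong index into wrong array" class of bugs out of the type system's blind spot.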
Rust is one of those languages that change how you think, like Haskell or Lisp or Forth. It won't be easy, but it's worth it.
The best way to use Rust is to circumvent use of the borrow-checker and lifetimes and use indices everywhere! Suddenly, it becomes more pleasant and easy to refactor :).
This hits close to home. TypeScript is also my language of choice for 90% of the software I write. I agree with the author that TypeScript is very close to the perfect level of abstraction, and I haven't seen another language with a type system that's nearly as enjoyable to use. Of course, TS (and by extension JS) obviously has its issues/complications. Bun solves a lot of the runtime-related issues/annoyances though.
For the other 10% software that is performance-sensitive or where I need to ship some binary, I haven't found a language that I'm "happy" with. Just like the author talks about, I basically bounce between Go and Rust depending on what it is. Go is too simple almost to a fault (give me type unions please). Rust is too expressive; I find myself debugging my knowledge of Rust rather than the program (also I think metaprogramming/macros are a mistake).
I think there's space in the programming language world for a slightly higher level Go-like language with more expressiveness.
I'm surprised OCaml doesn't have more market share here. Native, fast, robust type system, GC, less special syntax than Rust, less obtuse than Haskell.
TypeScript is good as a language. You can't generate static binaries out of it (except Docker images) and that itself is a deal breaker.
I recently came to a production Typescript codebase and it took minutes to compile. Strangely, it could not behave correctly without a linter rule `no-floating-promises` but the linter also took minutes to lint the codebase. It was an astounding exercise in patience. Faster linters like oxlint exist but they don't have a notion of cross-file types so `no-floating-promises` is impossible on them.
The worst part is that `no-floating-promises` is strange. Without it, Knex (some ORM toolkit in this codebase) can crash (segfault equivalent) the entire runtime on a codebase that compiles. With it, Knex's query builders will fail the lint.
It was confusing. The type system was sophisticated enough that I could generate a CamelCaseToSnakeCase<T> type but somehow too weak to ensure object borrow semantics. Programmers on the codebase would frequently forget to use `await` on something causing a later hidden crash until I added the `no-floating-promises` lint, at which point they had to suppress it on all their query builders.
One could argue that they should just have been writing SQL queries and I did, but it didn't take. So the entire experience was fairly nightmarish.
I very much enjoy reading and writing TS code. What I don't enjoy is the npm ecosystem (and accompanying mindset), and what I can't stand is trying to configure the damn thing. I've been doing this since TSC was first released, and just the other day I wasted hours trying to make a simple ts-node command line program work with file-extension-free imports and no weird disagreements between the ts-node runner and the language server used by the editor.
And then gave up in disgust.
Look, I'm no genius, not by a long shot. But I am both competent and experienced. If I can't make these things work just by messing with it and googling around, it's too damned hard.
If they're serious about their criteria they should go with OCaml (or maybe, like, Swift, or any of dozens of languages in that space).
(Of course they actually do want Haskell but they probably need to get there gradually)
Does ocaml have a mature ecosystem of libraries and dependencies? And easy way to manage them? Even rust with all its hype lacks in this area imo.
right! the table at the end just screamed "use ocaml and be happy"
It doesn't use curly-braces driven syntax, so will probably be accused of being "complex" and dismissed.
Doesn't Rust steal a lot from Ocaml?
> To paraphrase Norvig's Latency numbers a programmer should know, if we imagine a computer that executes 1 CPU instruction every second, it would take it _days_ to read from RAM.
I think the author probably misread the numbers. If CPU executed 1 instruction every second, it would take just 1—2 minutes to read from uncached RAM, no need to be overly dramatic.
Overall, this reads to me like a very young programmer trying to convince themselves to learn Rust because he heard it's cool, not an objective evaluation. And I'm totally on board with that, whatever convinces you, just learn new things!
I actively seek out tools written in static languages. They are less fragile and have a longer shelf life.
https://ashishb.net/programming/maintaining-android-app/
I agree so hard. That's why I use Hugo for my website. Speed was always only a bonus
> “Rust, from what I've heard, has a similar abstraction level as TypeScript, perhaps even closer to Haskell but that's good, I could do with a bit more help from the compiler. But it requires me to manage memory and lifetimes, which I think is something the compiler should do for me.”
The Rust compiler does manage memory and lifetimes. It just manages them statically at compile-time. If your code can’t be guaranteed to be memory-safe under Rust’s rules, it won’t compile and you need to change it.
You can use runtime ownership structures like ref cells, arcs, etc as well.
Rust is an amazing language once you get over the initial mental hurdle. An important thing to go in with: 99% of programs should not require you to manage lifetimes (‘a notation) If you find yourself doing this and aren’t writing a inner loop high performance library, back up and find another way. Usually this entails using a Mutex or Arc (or other alternatives based on the scenario) to provide interior mutability or multiple references. This statement might not make sense now but write it down for when it will.
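A small sketch of the Arc/Mutex pattern the comment recommends, with no explicit `'a` lifetime annotations anywhere:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared, mutable state across threads via runtime ownership,
    // instead of wrestling with borrowed references and lifetimes.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```

For single-threaded code, `Rc<RefCell<T>>` plays the same role more cheaply.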
I use Rust now for everything from CLIs to APIs and feel more productive in it end to end than python even.
Deno creates binary files from typescript. https://deno.com/blog/deno-compile-executable-programs
I've written a ton of C in my life and a lot of Go, and I was rofl at the "no spooky action at a distance" lines.
This was brilliant performance art. Bless your heart Dear Author, I adore you.
Everyone who has said the phrase "Spooky action at a distance" has been proven wrong :)
I write Gleam for this. A Rust-like language on the Erlang VM. It's neat, but not widely used.
I've written some Gleam as an exploration, and I liked it, but the "not widely used" thing is a concern. I need there to be some well maintained libraries for enterprise stuff.
I know this is not the fault of the language, and that is unfortunate.
I have been wanting to use Gleam more, but I haven't found the right project or time. I can prototype drastically easier using Python.
I don't think you need an elaborate process of elimination when one of your axioms is "must manage memory manually".
> Memory management was indeed the sore sticking point, why Rust hadn't appealed to me earlier.
The author doesn't want manual memory management, but still decides to go with Rust.
I don't think you need an elaborate comment when one of your axioms is "must not read the post that you're responding to".
He's saying exactly the opposite
> Rust... But it requires me to manage memory and lifetimes, which I think is something the compiler should do for me.
> While I can jump through hoops to compile JavaScript into a binary, such wouldn't feel "solid". And the very point of writing a native program in the first place is to make it feel solid
You can use Bun to compile to native binaries without jumping through hoops. It's not mature, but it works well enough that we use it at work.
It's definitely nice for certain use cases. I just wish the binaries weren't so huge (~60MB + your actual source code).
Odin has been really growing on me lately as a language that checks all of those boxes. String types, first class allocators, built in tests, a batteries included philosophy, and ease of use are some of the things that really drew me towards it.
I really wanted to like rust and I wrote a few different small toy projects in it. At some point knowledge of the language becomes a blocker rather than knowledge the problem space, but this is a skill issue that I'm sure would lessen the more I used it.
What really set me off was how every project turned into a grocery list of crates that you need to pull in in order to do anything. It started to feel embarrassing to say that I was doing systems programming when any topic I would google in rust would lead me to a stack overflow saying to install a crate and use that. There seemed to be an anti-DIY approach in the community that finally drew me away.
> String types
It's a byte string.
> rune is the set of all Unicode code points.
We copied the awful name from Go … and the docs are wrong.
Five different boolean types?
Zero values. (Every value has some default value, like in Go.)
Odin also includes the Billion Dollar Mistake.
> There seemed to be an anti-DIY approach in the community that finally drew me away.
It's a "let a thousand flowers bloom" approach, at least until the community knows which design stands a good chance of not being a regretted addition to the standard library.
What's the difference between anti-DIY and "batteries included"?
I really want to love rust, and I understand the niches it fills. My temporary allegiance with it comes down to performance, but I'm drawn by the crate ecosystem and support provided by cargo.
What's so damning to me is how debilitatingly unopinionated it is during situations like error handling. I've used it enough to at least approximate its advantages, but strongly hinting towards including a crate (though not required) to help with error processing seems to mirror the inconvenience of having to include an exception type in another language. I don't think it would be the end of the world if it came with some creature comforts here and there.
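For context, a hand-rolled error type using only std: roughly the boilerplate that the error-helper crates hinted at above exist to remove (all names here are made up):

```rust
use std::fmt;

#[derive(Debug)]
enum AppError {
    NotFound(String),
    Parse(std::num::ParseIntError),
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::NotFound(k) => write!(f, "key not found: {k}"),
            AppError::Parse(e) => write!(f, "parse error: {e}"),
        }
    }
}

impl std::error::Error for AppError {}

// Manual From impl so that `?` can convert the underlying error.
impl From<std::num::ParseIntError> for AppError {
    fn from(e: std::num::ParseIntError) -> Self {
        AppError::Parse(e)
    }
}

fn parse(s: &str) -> Result<i32, AppError> {
    Ok(s.parse::<i32>()?) // `?` uses the From impl above
}

fn main() {
    assert_eq!(parse("42").unwrap(), 42);
    assert!(parse("x").is_err());
}
```

None of this requires a crate; the crates just generate the `Display`/`Error`/`From` plumbing for you, which is the convenience-vs-opinionation trade-off the comment is pointing at.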
re:crates https://web.archive.org/web/20250420085150/https://wiki.alop...
Oh, another one of those articles where people try to logically explain why they absolutely need to learn Rust and no other language will do. This time, even with religious connotations (https://en.wikipedia.org/wiki/God-shaped_hole). I mean, if you want to learn Rust, good for you, go ahead, no need to write a whole blog post rationalizing your decision!
I’m not sure what his abstraction column really means, nuts and bolts-wise. But, Fortran is native, you get to allocate your own memory, and it has object oriented features (maybe that’s abstraction).
Sounds like a Zig-shaped hole to me ;-)
They complain that Go is too low-level for their needs. Zig, with its explicit allocators, is definitely even lower-level.
Rust seems low-level too, but it isn't the same. It allows building powerful high-level interfaces that hide the complexity from you. E.g., RAII eliminates the need for an explicit `defer` that can be forgotten.
You may want to try the criminally underrated OCaml.
It's fast, compiles to native code AND javascript, and has garbage collection (so no manual memory management).
As an added bonus, you can mix Haskell-like functional code and imperative code in a single function.
I'm interested in the long piecewise elimination section. Presumably that's where they explain why not to use OCaml/Nim/yadda yadda.
If I were to write such a list, the answer would probably come down to "because I wanted to pick ONE and be able to stick with it, and Rust seems solid and not going anywhere." As much as Clojure and OCaml are, from what I've heard, right up my alley, learning all these different languages has definitely taken time away from getting crap done, like I used to be able to do perfectly well with Java 2 or PHP 5, even though those are horrible languages.
I like the "lower level of abstraction" of Go. It was a transition coming from writing Spring Boot Java code to having to actually implement the "magic", but I like that I can clearly see the control flow of things in Go.
Out of all the languages I've used, Go programs are the ones that have the highest percentage chance of working "first try". I think that has a lot to do with the plain and strongly typed style.
My two 'language poles' are Typescript as the 'north pole', and C as the 'south pole', with Python, C++, Zig (and to a lesser extent, Rust and Odin) placed somewhere along the latitudes.
I think that of all those options, Typescript and Zig feel closest related. Zig has that same 'lightness' when writing code as Typescript and the syntax is actually close enough that a Typescript syntax highlighter mostly works fine for Zig too ;)
The table in the end sums it up nicely.
Rust allows low level programming and static compilation, while still providing abstraction and safety. A good ecosystem and stable build tools help massively as well.
It is one of the few languages that managed to address a real-life need in novel ways, rather than incrementing on existing solutions and introducing new trade-offs.
Wow, a lot of stuff in here surprises me. C definitely can/does have spooky at a distance. Just share a pointer to a resource with something else and enjoy the spooky modifications. Changes are local as long as you program that way, but sometimes it can be a bit not-obvious that this is happening.
Regarding redefining functions, what could the author mean? Using global function pointers that get reassigned? Otherwise, redefining a function wouldn't affect other modules that are compiled into separate object files. Confusing.
C is simple in that it does not have a lot of features to learn, but because of e.g. undefined behavior, I find it very hard to call it a simple language. When a simple bug can cause your entire function to be UB'd out of existence, C doesn't feel very simple.
In Haskell, side effects actually _happen_ when the pile of function applications evaluates to IO data type values, but you can think about it very locally; that's what makes it so great. You could get those nice properties with a simpler model (i.e. don't make the language lazy, but still have explicit effects), but, yeah.
The main thing that makes Haskell not simple, IMO, is that it just has such a vast set of things to learn. There's the normal language feature stuff (types, typeclasses, functions, libraries), but then you also have a ton of special Haskell stuff: more advanced type system tomfoolery; various language extensions, some of which are deprecated now or superseded by better options (like type families vs. functional dependencies); hierarchies of unfamiliar math terms that are essentially required to actually do anything; and then laziness/call-by-name/non-strict evaluation, which is its own set of problems (space leaks!). And yes, unfamiliar syntax is another stumbling block.
IME, Rust is actually more difficult than Haskell in a lot of ways. I imagine that once you learn all of the things you need to learn it is different. The way I've heard to make it "easier" is to just clone/copy data any time you have a need for it, but, what's the point of using Rust, then?
I wonder if the author considered OCaml or its kin. I haven't kept track of what's available, but I've heard that better tooling and better/more familiar syntax exist now. OCaml is a good language and a good gateway into many other areas.
There are some other languages that might fit, like Nim, or Zig, or Swift. I'd still like to do more with Swift; the language is interesting.
When people say C is simple, besides everything that you point out, apparently they never learned anything beyond the classical K&R C book (ANSI/ISO C edition), and are stuck in a C89 mindset without any kind of compiler extensions.
> Wow, a lot of stuff in here surprises me. C definitely can/does have spooky at a distance. Just share a pointer to a resource with something else and enjoy the spooky modifications. Changes are local as long as you program that way, but sometimes it can be a bit not-obvious that this is happening.
I think the author means that the language constructs themselves have well-defined meanings, not that the semantics don't allow surprising things to happen at runtime. Small changes don't affect the meaning of the entire program. (I'm not sure I agree that this isn't the case for e.g. Haskell as well, I'm just commenting on what I think the author means.)
> IME, Rust is actually more difficult than Haskell in a lot of ways. I imagine that once you learn all of the things you need to learn it is different.
Having written code in both, Rust is quite a lot easier than Haskell for a programmer familiar with the "normal" languages like C, C++, Python, whatever. Haskell's purity is quite a big deal and ends up contorting my programs into weird poses; e.g., once you run into the need to compose monads, the complexity ramps way up.
> The way I've heard to make it "easier" is to just clone/copy data any time you have a need for it, but, what's the point of using Rust, then?
Memory safety. And the fact that this is the example of Rust complexity just goes to show what a higher level Haskell's difficulty is.
I've seen this argument for years. "C is an easy language and it's easy to code review it.".
Maybe — if you skip all the off-by-one errors, double frees, overflows, underflows, and wrong API usage; if you don't need to maintain a multiplatform build environment; and if you don't support multiple architectures.
I mean, in this sense, assembly is even easier than C. Its syntax is trivial, and if that were the only thing that mattered, people would write assembly.
But they don't write assembly, because it's not the only thing that matters. So please stop considering C only in terms of easy syntax. Because syntax is the only thing that's easy in C.
> Rust, from what I've heard, has a similar abstraction level as TypeScript, perhaps even closer to Haskell but that's good, I could do with a bit more help from the compiler. But it requires me to manage memory and lifetimes, which I think is something the compiler should do for me.
Eh.... yeah? I suppose technically? But not _really_. Rust gives you the option to do that. But most programs outside of "I'm building an operating system" don't really require thinking too hard about it.
It's not like C where you're freeing memory manually, or like C++ where you have to think about RAII just right.
Someone really needs to show Nim to the author :). It checks all of their boxes and then some
Yep, it’s ideal for this sort of application without the headache of Rust. Plus it’s helpful that it can compile to C, C++, or JavaScript — so take your pick.
I was thinking that too. There are many cases where you do want to manage memory yourself, and in that case you should likely use Rust or maybe Zig if you can choose your own tool. But if you don't want to manage your own memory Nim works nicely, though IMO it requires adherence to a style guide more than most languages.
Depends on what you do, but most of the time you do not need to do anything special about memory management in Rust. That is why people try to use it for things other than just systems programming.
> And the very point of writing a native program in the first place is to make it feel solid.
What does that mean, and what is it about native programs (i.e. programs AOT-compiled to machine code) that makes them feel solid? BTW, such programs are often more, not less, sensitive to OS changes.
> realizing that I was just spawning complexity that is unrelated to the problem at hand
Wait till you use Rust for a while, then (you should try, though, if the language interests you).
For me, the benefit of languages with manual memory management is the significantly lower memory footprint (speed is no longer an issue; if you think Haskell and Go are good enough, try Java, which is faster). But this comes at a price. Manual memory management means, by necessity, a lower level of abstraction (i.e. the same abstraction can cover fewer implementations). The price is usually paid not when writing the first version, but when evolving the codebase over years. Sometimes this price is worth it, but it's there, and it's not small. That's why I only reach for low level languages when I absolutely must.
> What does that mean, and what is it about native programs (i.e. programs AOT-compiled to machine code) that makes them feel solid?
I'm a little late here, but as a Java user, most of the time people tell me:
1. They just want to ship a binary. Most are not aware of jpackage — but correct me if I'm wrong, that just makes installers, right? I’m hopeful that the “hermetic” work from Leyden will help here.
2. They frequently complain about Java’s memory usage, but don’t really understand how setting the heap size works and what the defaults are. I’m also hopeful that ZGC’s automatic heap sizing will solve this.
With those two features I think the view of Java will change, as long as there is good build tooling for them. It would be nice to make that the default, but that would break many builds.
> such programs are often more, not less, sensitive to OS changes.
You may be technically correct that they are more sensitive to kernel interface changes. But the point is that native, static binaries depend only on the kernel interface, while the other programs also depend on the language runtime that's installed on that OS. Typical Python programs even depend on the libraries being installed separately (in source form!).
> But the point is that native, static binaries depend only on the kernel interface
Many binaries also depend on shared libraries.
> while the other programs also depend on the language runtime that's installed on that OS
You can (and probably should) embed the runtime and all dependencies in the program (as is easily done in Java). The runtime then makes responding to OS selection/changes easier (e.g. musl vs glibc), or avoids less stable OS APIs to begin with.
> Many binaries also depend on shared libraries.
Yeah, and those are also the opposite of "solid" :) That's why I qualified with "static". I'm so glad that Go and Rust promote static linking as the default (ignoring glibc).
> You can (and probably should) embed the runtime and all dependencies in the program (as is easily done in Java).
Congrats to the Java team and users, then. That makes it similar to the Go approach to binaries and the runtime, which I approve of.
> Yeah, and those are also the opposite of "solid" :)
So if that's what the author meant by "solid", i.e. few environmental dependencies, then it's not really about "native" or not, but about how the language/runtime is designed. Languages that started out as "scripting" languages often do rely on the environment a lot, but that's not how, say, Java or .NET work.
> I'm so glad that Go and Rust promote static linking as the default (ignoring glibc).
That doesn't work so well (and so usually not done) once you have a GUI, although I guess you consider the GUI to be part of the kernel.
Even C has a runtime, even if tiny one.
I meant a runtime that has to be installed separately. It's possible to statically link a C runtime if you use musl, for example.
You mean like it happens on many OSes that aren't GNU/Linux?
A language runtime remains one, independently of how it was linked into the binary.
A language runtime is the set of operations that support the language semantics, which in C's case is everything that happens before main(), threading support (since C11), floating-point emulation (if needed), execution hooks for running code before and after main(), delayed linking, and possibly more, depending on compiler-specific extensions.
You're being pedantic and trying to argue as if I misunderstand language runtimes and am speaking against language runtimes in general. That's not true. I qualified "the language runtime that's installed on that OS" from the beginning.
Like on Windows for the Universal C Runtime?
I can give other non-GNU/Linux examples.
I don't have any negative experience with that one, but I remember having to manually install various versions of the Windows C++ runtimes to get an app working
> What does that mean, and what is it about native programs (i.e. programs AOT-compiled to machine code) that makes them feel solid? BTW, such programs are often more, not less, sensitive to OS changes
TFA also concludes
I think by "solid" they mean as close to the metal as possible, because, as you suggest, one can go "native" with AOT. With JS/TS (languages TFA prefers), I'm not sure how far WASM's AOT will take you ... Go (the other language TFA prefers) even has PGO now on top of AOT.
> I think by "solid" they mean as close to metal as possible
A JIT compiler compiles your code to machine code just as an AOT compiler does, so I don't think that's what's meant here (and they don't mean the level of the source code because they consider Haskell to be "native").
> For me, the benefit of languages with manual memory management is the significantly lower memory footprint (speed is no longer an issue; if you think Haskell and Go are good enough, try Java, which is faster).
... what? Speed is no longer an issue? Haskell and Go? ??? How'd we go from manual memory management languages to Haskell and Go and then somehow to Java? Gotta plug that <my favorite language> somehow I guess...
It seems to me you have a deep misunderstanding of performance. If one program is 5% faster than another but at 100x memory cost, that program is not actually more performant. It just traded all possible memory for any and all speed gain. What a horrible tradeoff.
This thinking is typical in Java land [1]. You see: 8% better performance. I see: 28x the memory usage. In other words, had the Rust program been designed with the same insane memory allowance in mind as the Java program, it'd wipe the floor with it.
[1]: https://old.reddit.com/r/java/comments/n75pa0/java_beats_out...
> How'd we go from manual memory management languages to Haskell and Go
Because that's what's discussed in the article, which discusses Go and Haskell specifically.
> In other words, had the Rust program been designed with the same insane memory allowance in mind as the Java program, it'd wipe the floor with it.
No, it wouldn't. (I've worked with C and C++ for almost 30 years — including on embedded and safety-critical hard-realtime software — and with Java for over 20, and I work on the HotSpot VM.) That's because tracing GCs convert memory to speed, which is not the case for other memory management techniques.
To see why, look at a highly simplified view of tracing collectors (modern collectors don't quite work like this, but the idea generalises): when the heap is exhausted, live objects are traced and compacted to the "top" of the heap. The cost of each collection therefore depends only on the working set, i.e. the size of the objects that are still live. Because the working set is more or less a constant for a given program under a given workload, the larger the heap, the less frequent the collections (each of a constant cost), and so the cost of memory management with a tracing collector goes to zero as the heap size grows.
There are details in the actual implementations that are worse, and others that are better, than this idealised description, but the point is that tracing garbage collection very effectively converts RAM to speed, as its cost scales with the ratio working-set/heap-size. This is not the case for manual memory management or for primitive ref-counting GCs.
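The amortisation argument can be sketched with a bit of arithmetic under the same idealised model, writing L for the working-set (live) size, H for the heap size, and c for the per-byte tracing cost:

```latex
\text{collection cost} \approx c \cdot L, \qquad
\text{bytes allocated between collections} \approx H - L
\;\Rightarrow\;
\text{cost per allocated byte} \approx \frac{c \cdot L}{H - L}
\xrightarrow{\,H \to \infty\,} 0
```

So for a fixed working set, doubling the heap roughly halves the per-byte GC overhead, which is exactly the RAM-for-speed trade being described.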
Of course, even when memory management costs are zero, there are still computational costs, but Java compiles to the same machine instructions as C for that kind of work (with some important caveats in certain situations that will soon be gone). It is true that even outside those specific areas you can, with significant additional effort, get a C program (or a program in any other low-level language) to be faster than a Java program, but that's due to the availability of micro-optimisations (which we don't want to offer in Java, so as not to complicate the language or make it too dependent on a particular hardware/OS architecture), and that effect isn't large.
> I once spent an entire year in the heaven of C++, walking around in a glorious daze of std::vector and RAII, before one day snapping out of it and realizing that I was just spawning complexity that is unrelated to the problem at hand.
Good luck to the author with trying Rust. I hope he writes an honest experience report.
> That leaves me with the following options — C, C++, Go, Rust.
> Technically, there are a lot more options, and I wrote a long section here about eliminating them piecewise, but after writing it I felt like it was just noise.
Uh? I am guessing OP doesn't like virtual machines, maybe, because Java and C# sound like something that fits what they want. Both support AOT compilation now, though...
Also the assumption about Typescript to Wasm being not "solid" seems wrong.
I mean I find it super weird that the author's only option for "native typescript" is Rust.
> If someone posts a patch or submits a PR to a codebase written in C, it is easier to review than any other mainstream language.
short-circuited reading this
A nice thing about C is that you can be pretty confident that you know all major footguns (assuming you spent some time reading about it). With languages that are young or complex there is a much greater chance you’re making a terrible mistake because you’re not aware of it.
If anyone reads this and, like me, fears the difficulty and complexity of Rust, but still wants a language that is competitive in performance and works for systems-level programming as well as something more general purpose, definitely give Swift a go.
Over the last year I’ve started to write every new project using it. On windows, on linux and mac.
It is honestly a wonderful language to work with. It's mature, well designed, and has a lot of similarities to Rust. It has incredible interop with C, C++, Objective-C, and even Java as of this year, which feels fairly insane. It's also ergonomic as hell and well understood by LLMs, so it's easy to get into from a zero starting point.