"Memory Safe
No garbage collector, no manual memory management. A work in progress, though."
I wish them the best, but until they have a better story here I'm not particularly interested.
Much of the complexity in Rust vs simplicity in Go really does come down to this part of the design space.
Rust has only succeeded in making a Memory Safe Language without garbage collection via significant complexity (that was a trade-off). No one really knows a sane way to do it otherwise, unless you also want to drop the general-purpose systems programming language requirement.
I'll be Very Interested if they find a new unexplored point in the design space, but at the moment I remain skeptical.
Folks like to mention Ada. In my understanding, Ada is not memory safe by contemporary definitions. So, this requires relaxing the definition. Zig goes in this direction: "let's make it as safe as possible without being an absolutist"
If you look at the GitHub repo, there's a design proposal (under docs/design) for that.
It looks like the idea at the present time is to have four modes: value types, affine types, linear types, and rc types. Instead of borrowing, you have an inout parameter passing convention, like Swift. Struct fields cannot be inout, so you can't store borrowed references on the heap.
I'm very interested in seeing how this works in practice--especially given who is developing Rue. It seems like Rust spends a lot of work enabling the borrow checker to be quite general for C/C++-like usage. E.g. you can store a borrowed reference to a struct on the stack into the heap if you use lifetime annotations to make clear the heap object does not outlive the stack frame. On the other hand it seems like a lot of the pain points with Rust in practice are not the lifetime annotations, but borrowing different parts of the same object, or multiple borrows in functions further down the call stack, etc.
Not being able to store a mutable ref in another type reduces expressiveness. The doc already mentions it cannot allow an Iterator that doesn't consume the container.
Just to be clear, these proposals are basically scratch notes I have barely even validated, I just wanted to be able to iterate on some text.
But yes, there is going to inherently be some expressiveness loss. There is no silver bullet, that's right. The idea is, for some users, they may be okay with that loss to gain other things.
Yeah, that stuff is very much a sketch of the area I want to play in. It’s not final syntax nor semantics just yet. Gotta implement it and play around with it first (I have some naming tweaks I definitely want to implement separate from those ADRs.)
I don’t struggle with lifetimes either, but I do think there’s a lot of folks who just never want to think about it ever.
I always thought of Go as low level and Rust as high level. Go has a lot of verbosity as a "better C" with GC. Rust has low level control but many functional inspired abstractions. Just try writing iteration or error handling in either one to see.
I wonder if it's useful to think of this as go is low type-system-complexity and rust is high type-system-complexity. Where type system complexity entails a tradeoff between the complexity of the language and how powerful the language is in allowing you to define abstractions.
As an independent axis from close to the underlying machine/far away from the underlying machine (whether virtual like wasm or real like a systemv x86_64 abi), which describes how closely the language lets you interact with the environment it runs in/how much it abstracts that environment away in order to provide abstractions.
Rust lives in high type system complexity and close to the underlying machine environment. Go is low type system complexity and (relative to rust) far from the underlying machine.
> Where type system complexity entails a tradeoff between the complexity of the language and how powerful the language is in allowing you to define abstractions.
I don't think that's right. The level of abstraction is the number of implementations that are accepted for a particular interface (which includes not only the contract of the interface expressed in the type system, but also informally in the documentation). E.g. "round" is a higher abstraction than "red and round" because the set of round things is larger than the set of red and round things. It is often untyped languages that offer the highest level of abstraction, while a sophisticated type system narrows abstraction (it reduces the number of accepted implementations of an interface). That's not to say that higher abstraction is always better - although it does have practical consequences, explained in the next paragraph - but the word "abstraction" does mean something specific, certainly more specific than "describing things".
How the level of abstraction is felt is by considering how many changes to client code (the user of an interface) is required when making a change to the implementation. Languages that are "closer to the underlying machine" - especially as far as memory management goes - generally have lower abstraction than languages that are less explicit about memory management. A local change to how a subroutine manages memory typically requires more changes to the client - i.e. the language offers a lower abstraction - in a language that's "closer to the metal", whether the language has a rich type system like Rust or a simpler type system like C, than a language that is farther away.
The way I understood the bit you quoted was not as a claim that more complex type system = higher abstraction level, but as a claim that a more complex type system = more options for defining/encoding interface contracts using that language. I took their comment as suggesting an alternative to the typical higher/lower-level comparison, not as an elaboration.
As a more concrete example, the way I interpreted GP's comment is that a language that is unable to natively express/encode a tagged union/sum type/etc. in its type system would fall on the "less complex/less power to define abstractions" side of the proposed spectrum, whereas a language that is capable of such a thing would fall on the other side.
> which includes not only the contract of the interface expressed in the type system, but also informally in the documentation
I also feel like including informal documentation here kind of defeats the purpose of the axis GP proposes? If the desire is to compare languages based on what they can express, then allowing informal documentation to be included in the comparison renders all languages equally expressive since anything that can't be expressed in the language proper can simply be outsourced to prose.
Yep. This was the biggest thing that turned me off Go. I ported the same little program (some text based operational transform code) to a bunch of languages - JS (+ typescript), C, rust, Go, python, etc. Then compared the experience. How were they to use? How long did the programs end up being? How fast did they run?
I did C and typescript first. At the time, my C implementation ran about 20x faster than typescript. But the typescript code was only 2/3rds as many lines and much easier to code up. (JS & TS have gotten much faster since then thanks to improvements in V8).
Rust was the best of all worlds - the code was small, simple and easy to code up like typescript. And it ran just as fast as C. Go was the worst - it was annoying to program (due to a lack of enums). It was horribly verbose. And it still ran slower than rust and C at runtime.
I understand why Go exists. But I can't think of any reason I'd ever use it.
Rust gets harder with codebase size because of the borrow checker.
Not to mention most of the communication libraries decided to be async only, which adds another layer of complexity.
I strongly disagree with this take. The borrow checker, and rust in general, keeps reasoning extremely local. It's one of the languages where I've found that difficulty grows the least with codebase size, not the most.
The borrow checker does make some tasks more complex, without a doubt, because it makes it difficult to express something that might be natural in other languages (things including self referential data structures, for instance). But the extra complexity is generally well scoped to one small component that runs into a constraint, not to the project at large. You work around the constraint locally, and you end up with a public (to the component) API which is as well defined and as clean (and often better defined and cleaner because rust forces you to do so).
I work in a 400k+ LOC codebase in Rust for my day job. Besides compile times being suboptimal, Rust makes working in a large codebase a breeze with good tooling and strong typechecking.
I almost never even think about the borrow checker. If you have a long-lived shared reference you just Arc it. If it's a circular ownership structure like a graph you use a SlotMap. It by no means is any harder for this codebase than for small ones.
Disagree, having dealt with 40k+ LoC Rust projects, the borrow checker is not an issue.
Async is an irritation but not the end of the world ... You can write non-async code, I have done it ... Honestly I am coming around on async after years of not liking it ... I wish we didn't have function colouring but yeah ... Here we are ...
Funny, I explicitly waited to see async baked in before I even started experimenting with Rust. It's kind of critical to most things I work on. Beyond that, I've found that the async models in rust (along with tokio/axum, etc) have been pretty nice and clean in practice. Though most of my experience is with C# and JS/TS environments, the latter of which had about a decade of growing pains.
I still regularly use typescript. One problem I run into from time to time is "spooky action at a distance". For example, it's quite common to create some object and store references to it in multiple places. After all, the object won't be changed and it's often more efficient this way. But later, a design change results in me casually mutating that object, forgetting that it's being shared between multiple components. Oops! Now the other part of my code has become invalid in some way. Bugs like this are very annoying to track down.
It's more or less impossible to make this mistake in Rust because of how mutability is enforced. The mutability rules are sometimes annoying in the small, but in the large they tend to make your code much easier to reason about.
C has multiple problems like this. I've worked in plenty of codebases which had obscure race conditions due to how we were using threading. Safe Rust makes most of these bugs impossible to write in the first place. But the other thing I - and others - run into all the time in C is code that isn't clear about ownership and lifetimes. If your API gives me a reference to some object, how long is that pointer valid for? Even if I now own the object and I'm responsible for freeing it, it's common in C for the object to contain pointers to some other data. So my pointer might be invalid if I hold onto it too long. How long is too long? It's almost never properly specified in the documentation. In C, hell is other people's code.
Rust usually avoids all of these problems. If I call a function which returns an object of type T, I can safely assume the object lasts forever. It cannot be mutated by any other code (since it's mine). And I'm not going to break anything else if I mutate the object myself. These are really nice properties to have when programming at scale.
I wholeheartedly concur based on my experience with Rust (and other languages) over the last ~7 or so years.
> If I call a function which returns an object of type T, I can safely assume the object lasts forever. It cannot be mutated by any other code (since its mine). And I'm not going to break anything else if I mutate the object myself. These are really nice properties to have when programming at scale.
I rarely see this mentioned in the way that you did, and I'll try to paraphrase it in my own way: Rust restricts what you can do as a programmer. One can say it is "less powerful" than C. In exchange for giving up some power, it gives you more information: who owns an object, what other callers can do with that object, the lifetime of that object in relation to other objects. And critically, in safe Rust, these are _guarantees_, which is the essence of real abstraction.
In large and/or complicated codebases, this kind of information is critical in languages without garbage collection, but even when I program in languages with garbage collection, I find myself wanting this information. Who is seeing this object? What do they know about this object, and when? What can they do with it? How is this ownership flowing through the system?
Most languages have little/no language-level notion of these concepts. Most languages only enforce that types line up nominally (or implement some name-identified interface), or the visibility of identifiers (public/private, i.e. "information hiding" in OO parlance). I feel like Rust is one of the first languages on this path of providing real program dataflow information. I'm confident there will be future languages that will further explore providing the programmer with this kind of information, or at least making it possible to answer these kinds of questions easier.
> I rarely see this mentioned in the way that you did, and I'll try to paraphrase it in my own way: Rust restricts what you can do as a programmer. One can say it is "less powerful" than C. In exchange for giving up some power, it gives you more information
Your paraphrasing reminds me a bit of structured vs. unstructured programming (i.e., unrestricted goto). Like to what you said, structured programming is "less powerful" than unrestricted goto, but in return, it's much easier to follow and reason about a program's control flow.
At the risk of simplifying things too much, I think some other things you said make for an interesting way to sum this up - Rust does for "ownership flow"/"dataflow" what structured programming did for control flow.
I really like this analogy. In a sense, C restricts what you can do compared to programming directly in assembly. Like, there's a lot of programs you can write in assembly that you can't write in the same way in C. But those restrictions also constrain all the other code in your program. And that's a wonderful thing, because it makes it much easier to make large, complex programs.
The restrictions seem a bit silly to list out because we take them for granted so much. But its things like:
- When a function is called, execution starts at the top of the function's body.
- Outside of unions, variables can't change their type halfway through a program.
- Whenever a function is called, the parameters are always passed using the system calling convention.
- Functions return to the line right after their call site.
Rust takes this a little bit further, adding more restrictions. Things like "if you have a mutable reference to a variable, there are no immutable references to that variable."
I think it depends on the patterns in place and the actual complexity of the problems in practice. Most of my personal experience in Rust has been a few web services (really love Axum) and it hasn't been significantly worse than C# or JS/TS in my experience. That said, I'll often escape hatch with clone over dealing with (a)rc, just to keep my sanity. I can't say I'm the most eloquent with Rust as I don't have the 3 decades of experience I have with JS or nearly as much with C#.
I will say, that for most of the Rust code that I've read, the vast majority of it has been easy enough to read and understand... more than most other languages/platforms. I've seen some truly horrendous C# and Java projects that don't come close to the simplicity of similar tasks in Rust.
Rust indeed gets harder with codebase size, just like other languages. But claiming it is because of the borrow checker is laughable at best. The borrow checker is what keeps it reasonable, because it limits the scope of how one memory allocation can affect the rest of your code.
If anything, the borrow checker makes writing functions harder but combining them easier.
> it was annoying to program (due to a lack of enums)
Typescript also lacks enums. Why wasn't it considered annoying?
I mean, technically it does have an enum keyword that offers what most would consider to be enums, but that keyword behaves exactly the same as what Go offers, which you don't consider to be enums.
It’s trivial to switch based on the type field. And when you do, typescript gives you full type checking for that specific variant. It’s not as efficient at runtime as C, but it’s very clean code.
Go doesn’t have any equivalent to this. Nor does go support tagged unions - which is what I used in C. The most idiomatic approach I could think of in Go was to use interface {} and polymorphism. But that was more verbose (~50% more lines of code) and more error prone. And it’s much harder to read - instead of simply branching based on the operation type, I implemented a virtual method for all my different variants and called it. But that spread my logic all over the place.
If I did it again I’d consider just making a struct in go with the superset of all the fields across all my variants. Still ugly, but maybe it would be better than dynamic dispatch? I dunno.
I wish I still had the go code I wrote. The C, rust, swift and typescript variants are kicking around on my github somewhere. If you want a poke at the code, I can find them when I’m at my desk.
That wouldn't explain C, then, which does not have sum types either.
All three languages do have enums (as the term is normally defined), though. Go is only the odd one out by using a different keyword. As these programs were said to be written as carbon copies of each other, not to the idioms of each language, it is likely the author didn't take time to understand what features are available. No enum keyword was assumed to mean enums don't exist at all, I guess.
C has numeric enums and tagged unions, which are sum types without any compile time safety. That’s idiomatic C.
Go doesn’t have any equivalent. How do you do stuff like this in Go, at all?
I’ve been programming for 30+ years. Long enough to know direct translations between languages are rarely beautiful. But I’m not an expert in Go. Maybe there’s some tricks I’m missing?
Here’s the problem, if you want to have a stab at it. The code in question defines a text editing operation as a list of editing components: Insert, Delete and Skip. When applying an editing operation, we start at the start of the document. Skip moves the cursor forward by some specified length. Insert inserts at the current position and delete deletes some number of characters at the position.
Eg:
enum OpComponent {
Skip(int),
Insert(String),
Delete(int),
}
type Op = List<OpComponent>
Then there’s a whole bunch of functions with use operations - eg to apply them to a document, to compose them together and to do operational transform.
C has unions, but they're not tagged. You can roll your own tagged unions, of course, but that's moving beyond it being a feature of the language.
> How would you model this in Go?
I'm committing the same earlier sin by trying to model it from the solution instead of the problem, so the actual best approach might be totally different, but at least in staying somewhat true to your code:
type OpComponent interface { op() }
type Op = []OpComponent
type Skip struct { Value int }
func (s Skip) op() {}
type Insert struct { Value string }
func (i Insert) op() {}
type Delete struct { Value int }
func (d Delete) op() {}
op := Op{
Skip{Value: 5},
Insert{Value: "hello"},
Delete{Value: 3},
}
> C has unions, but they're not tagged. You can roll your own tagged unions, of course, but that's moving beyond it being a feature of the language.
This feels like a distinction without a real difference. Hand-rolled tagged unions are how lots of problems are approached in real, professional C. And I think they're the right tool here.
> the actual best approach might be totally different, but at least in staying somewhat true to your code: (...)
Thanks for having a stab at it. This is more or less what I ended up with in Go. As I said, I ended up needing about 50% more lines to accomplish the same thing in Go using this approach compared to the equivalent Typescript, rust and swift.
I wish I'd kept my Go implementation. I never uploaded it to github because I was unhappy with it, and I accidentally lost it somewhere along the way.
> the actual best approach might be totally different
Maybe. But honestly I doubt it. I think I accidentally chose a problem which happens to be an ideal use case for sum types. You'd probably need a different problem to show Go or C# in their best light.
But ... sum types are really amazing. Once you start using them, everything feels like a sum type. Programming without them feels like programming with one of your hands tied behind your back.
> As I said, I ended up needing about 50% more lines to accomplish the same thing in Go
I'd be using Perl if that bothered me. But there is folly in trying to model from a solution instead of the problem. For example, maybe all you needed was:
type OpType int
const (
OpTypeSkip OpType = iota
OpTypeInsert
OpTypeDelete
)
type OpComponent struct {
Type OpType
Int int
Str string
}
Or something else entirely. Without fully understanding the exact problem, it is hard to say what the right direction is, even where the direction you chose in another language is the right one for that language. What is certain is that you don't want to write code in language X as if it were language Y. That doesn't work in programming languages, just as it doesn't work in natural languages. Every language has its own rules and idioms that don't transfer to another. A new language means you realistically have to restart finding the solution from scratch.
> You'd probably need a different problem to show Go or C# in their best light.
That said, my profession sees me involved in working on a set of libraries in various languages, including Go and Typescript, that appear to be an awful lot like your example. And I can say from that experience that the Go version is much more pleasant to work on. It just works.
I'll agree with you all day every day that the Typescript version's types are much more desirable to read. It absolutely does a better job at modelling the domain. No question about it. But you only need to read it once to understand the model. When you have to fight everything else beyond that continually it is of little consolation how beautiful the type definitions are.
You're right, though, it all depends on what you find most important. No two programmers are ever going to agree on what to prioritize. You want short code, whereas I don't care. Likewise, you probably don't care about the things I care about. Different opinions are the spice of life, I suppose!
There's a lot of ecosystem behind it that makes sense for moving off of Node.js for specific workloads, but isn't as easily done in Rust.
So it works for those types of employers and employees who need more performance than Node.js, but can't use C for practical reasons, or can't use Rust because specific libraries don't exist as readily supported by comparison.
Rue author here, yeah I'm not the hugest fan of "low level vs high level" framing myself, because there are multiple valid ways of interpreting it. As you yourself demonstrate!
As some of the larger design decisions come into place, I'll find a better way of describing it. Mostly, I am not really trying to compete with C/C++/Rust on speed, but I'm not going to add a GC either. So I'm somewhere in there.
How very humble of you to not mention being one of the primary authors behind the TRPL book. Steve, you're a gem to the world of computing. Always considered you the J. Kenji of the Rust world.
Seems like a great project let's see where it goes!
> Mostly, I am not really trying to compete with C/C++/Rust on speed, but I'm not going to add a GC either. So I'm somewhere in there.
Out of curiosity, how would you compare the goals of Rue with something like D[0] or one of the ML-based languages such as OCaml[1]?
EDIT:
This is a genuine language design question regarding an imperative/OOP or declarative/FP focus and is relevant to understanding the memory management philosophy expressed[2]:
No garbage collector, no manual memory management. A work
in progress, though.
Closer to an OCaml than a D, in terms of what I see as an influence. But it's likely to be more imperative/FP than OOP/declarative, even though I know those axes are usually considered to be the way you put them than the way I put them.
> But it's likely to be more imperative/FP than OOP/declarative, even though I know those axes are usually considered to be the way you put them than the way I put them.
Fascinating.
I look forward to seeing where you go with Rue over time.
> because there are multiple valid ways of interpreting it
There are quantitative ways of describing it, at least on a relative level. "High abstraction" means that interfaces have more possible valid implementations (whether or not the constraints are formally described in the language, or informally in the documentation) than "low abstraction": https://news.ycombinator.com/item?id=46354267
I don't think you'd want to write an operating system in Rue. I may not include an "unsafe" concept, and will probably require a runtime. So that's some areas where Rust will make more sense.
As for Go... I dunno. Go has a strong vision around concurrency, and I just don't have one yet. We'll see.
FWIW, I really like the way C# has approached this need... most usage is exposed via attribute declarations (DllImport) for P/Invoke. Contrasted with, say, JNI in Java or even the Go syntax. The only thing that might be a significant improvement would be an array/vector of lookup names for the library on the system, given how specific versions are often tagged in Linux vs Windows.
Do you think you'll explore some of the same problem spaces as Rust? Lifetimes and async are both big pain points of Rust for me, so it'd be interesting to see a fresh approach to these problems.
I couldn't see how long-running memory is handled, is it handled similar to Rust?
Simplified as in easier to use, or simplified as in less language features? I'm all for the former, while the latter is also worth considering (but hard to get right, as all the people who consider Go a "primitive" language show)...
Since that seems to be the (frankly bs) slogan that almost entirely makes up the language's landing page, I expect it's really going to hurt the language and/or make it all about useless posturing.
That said, I'm an embedded dev, so the "level" idea is very tangible. And Rust is also very exciting for that reason, and Rue might be as well. I should have a look, though it doesn't seem to be on the way to targeting bare metal soon. :)
I think it is precisely why Rust is gold - you can pick the abstraction level you work at. I used it a lot when simulating quantum physics - on one hand, needed to implement low-level numerical operations with custom data structures (to squeeze as much performance as possible), on the other - be able to write and debug it easily.
It is similar to PyTorch (which I also like), where you can add two tensors by hand, or have your whole network as a single nn.Module.
All are high level as long as they don't expose CPU capabilities, even ISO C is high level, unless we count in language extensions that are compiler specific, and any language can have compiler extensions.
C pointers are nothing special, plenty of languages expose pointers, even classical BASIC with PEEK and POKE.
The line is blurred, and doesn't help that some folks help spread the urban myth C is special somehow, only because they never bother with either the history of programming language, and specially the history of systems programming outside Bell Labs.
They're nothing special, but were designed for a particular CPU and expose the details of that CPU. And since we were talking about C specifically, not a bunch of other random languages that may have done similar things...
While most modern CPUs are designed for C and thus share in the same details, if your CPU is of a different design, you have to emulate the behaviour. Which works perfectly fine — but the question remains outstanding: Where does the practical line get drawn? Is 6502 assembler actually a high-level language too? After all, you too can treat it as an abstract machine and emulate its function on any other CPU just the same as you do with C pointers.
Agree with Go being basically C with string support and garbage collection. Which makes it a good language. I think rust feels more like a c++ replacement. Especially syntactically. But each person will say something different. If people can create new languages and there's a need then they will. Not to say it's a good or bad thing but eventually it would be good to level up properly. Maybe AI does that.
> C was designed as a high level language and stayed so for decades
C was designed as a "high level language" relative to the assembly languages available at the time and effectively became a portable version of same in short order. This is quite different to other "high level languages" at the time, such as FORTRAN, COBOL, LISP, etc.
When C was invented, K&R C, it was hardly lower level than other systems programming languages that predated it, since JOVIAL in 1958.
It did not even have compiler intrinsics, a concept introduced by ESPOL in 1961, which allowed programming Burroughs systems without using an external Assembler.
K&R C was high level enough that many of the CPU features people think about nowadays when using compiler extensions, as they are not present in the ISO C standard, had to be written as external Assembly code, the support for inline Assembly came later.
I think we are largely saying the same thing, as described in the introduction of the K&R C book:
C is a relatively "low level" language. This
characterization is not pejorative; it simply means that C
deals with the same sort of objects that most computers do,
namely characters, numbers, and addresses.[0]
Also working on a language / runtime in this space.
It transpiles to Zig, so you have native access to the entire C library.
It uses affine types (simple ownership -> transfers via GIVE/TAKES) plus MVCC & transactions to safely and scalably handle mutations (like databases, but it keeps scaling linearly past 32 cores, where Arc and RwLock fall apart due to cache-line bouncing).
It limits concurrent complexity only to the spot in your code WHERE you want to mutate shared memory concurrently, not your entire codebase.
It's memory and liveness safe (Rust is only memory safe) without a garbage collector.
It's simpler than Go, too, IMO - and more predictable, no GC.
But it's nearly impossible to beat Go at its own game, and it's not zero overhead like Rust - so I'm pessimistic it's in a "sweet spot" that no one will be interested in.
Could you please explain what this implies in layman's terms? I've read the definition of 'linear type' as a type that must be used exactly once, and by 'mutable value semantics', I assume, that unlike Rust, multiple mutable borrows are allowed?
What's the practical implication of this - how does a Rue program differ from a Rust program? Does your method accept more valid programs than the borrow checker does?
Yeah, that's just one of the papers he was on as a PhD student, but he was really interested in the interaction of linear types and region inference as a general resource management framework. That grew into an interest in linear types as part of logical frameworks for modeling concurrency. But then, like a lot of people, he became disillusioned with academia, went to make some money on Wall Street, then focused on his family after that.
Anyhow, I just thought it might be a good jumping off point for what you're exploring.
I think Vale is interesting, but yeah, they have had some setbacks, in my understanding more to do with the personal lives of the author rather than the ideas. I need to spend more time with it.
Nice! I see you're one of (if not the primary) contributor!
Do you see this as a prototype language, or as something that might evolve into something production grade? What space do you see it fitting into, if so?
You've been such a huge presence in the Rust space. What lessons do you think Rue will take, and where will it depart?
I see compile times as a feature - that's certainly nice to see.
This is a project between me and Claude, so yeah :)
It's a fun project for me right now. I want to just explore compiler writing. I'm not 100% sure where it will lead, and if anyone will care or not where it ends up. But it's primarily for me.
I've described it as "higher than Rust, lower than Go" because I don't want this to be a GC'd language, but I want to focus on ergonomics and compile times. A lot of Rust's design is about being competitive with C and C++, I think by giving up that ultra-performance oriented space, I can make a language that's significantly simpler, but still plenty fast and nice to use.
I've never seen any significant difference in linear vs affine types.
To me it just seems like Rust has Linear types, and the compiler just inserts some code to destroy your values for you if you don't do it yourself.
I guess the only difference is that linear types can _force_ you to manually consume a value (not necessarily via drop)? Is that what you are going for?
Affine types are "may use" and linear types are "must use," yeah. That is, linear types are stronger.
See https://faultlore.com/blah/linear-rust/ for a (now pretty old but still pretty relevant, I think) exploration into what linear types would mean for Rust.
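To make the affine/linear distinction concrete, here's a rough sketch (names are mine, not from the linked post) of the usual stand-in for linear types in today's Rust: a "drop bomb" that panics at runtime if the value wasn't explicitly consumed. A true linear type would make forgetting the `commit` a compile-time error instead.

```rust
// Affine types: the compiler happily drops a value for you at end of scope.
// To emulate a *linear* "must consume" value, a common trick is a drop bomb:
// panic in Drop unless the value was explicitly consumed first.
struct Transaction {
    committed: bool,
}

impl Transaction {
    fn new() -> Self {
        Transaction { committed: false }
    }

    // Consuming the value by-value is the only way to defuse the bomb.
    fn commit(mut self) {
        self.committed = true;
        // real commit work would go here; `self` drops quietly at the end
    }
}

impl Drop for Transaction {
    fn drop(&mut self) {
        if !self.committed {
            panic!("Transaction dropped without being committed");
        }
    }
}

fn main() {
    let tx = Transaction::new();
    tx.commit(); // removing this line panics at runtime, not compile time
}
```

The gap is exactly what makes linear types "stronger": the check here happens dynamically, whereas a linear type system would reject the forgetful program before it runs.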
Sure, ARC is a form of very specific, constrained garbage collection.
Compile-time, reference-counting GC, not runtime tracing GC. So no background collector, no heap tracing, and no stop-the-world pauses. Very different from the JVM, .Net, or Go.
Reference counting is a GC algorithm from a CS point of view; it doesn't matter if it happens at compile time or runtime.
Additionally, there isn't a single ARC implementation that is 100% compile time, i.e. one where the generated machine code shows every occurrence of the RC machinery removed.
> The default is a minimal and a well performing tracing GC.
> The second way is autofree, it can be enabled with -autofree. It takes care of most objects (~90-100%): the compiler inserts necessary free calls automatically during compilation. Remaining small percentage of objects is freed via GC. The developer doesn't need to change anything in their code. "It just works", like in Python, Go, or Java, except there's no heavy GC tracing everything or expensive RC for each object.
> For developers willing to have more low-level control, memory can be managed manually with -gc none.
> Arena allocation is available via a -prealloc flag. Note: currently this mode is only suitable to speed up short lived, single-threaded, batch-like programs (like compilers).
So you have 1) a GC, 2) a GC with escape analysis (WIP), 3) manual memory management, or 4) ...Not sure? Wasn't able to easily find examples of how to use it. There's what appears to be its implementation [1], but since I'm not particularly familiar with V I don't feel particularly comfortable drawing conclusions from a brief glance through it.
In any case, none of those stand out as "memory safety without GC" to me.
"none of those stand out as "memory safety without GC" to me" ... can you explain why you believe they are not memory safe without GC? I'm more interested to know the points in relation to autofree.
I did see that! Unfortunately it doesn't really move the needle on anything I said earlier. It describes manual memory management as an alternative to the GC when using autofree (which obviously isn't conducive to reliable memory safety barring additional guardrails not described in the post) and arenas are only mentioned, not discussed in any real detail.
> It is also accompanied with a demo video (pretty convincing in case you would like to watch).
Keep in mind the context of this conversation: whether V offers memory safety without GC or manual memory management. Strictly speaking, a demonstration that autofree works in one case is not sufficient to show V is memory safe without GC/manual memory management, as said capability is a property over all programs that can be written in a language. As a result, thoroughly describing how V supposedly achieves memory safety without a GC/manual memory management would be far more convincing than showing/claiming it works in specific cases.
As an example of what I'm trying to say, consider a similar video but with a leak/crash-free editor written in C. I doubt anyone would consider that video convincing proof that C is a memory-safe language; at most, it shows that memory-safe programs can be written in C, which is a very different claim.
As seen in the quote given by GP, `autofree` partially uses a GC. And it is WIP. (Although it was supposedly production-ready 5+ years ago.)
Reading "Memory safe; No garbage collector, no manual memory management" on the Rue homepage made me think of V for this very reason. Many think it is trivial to do and that Rust has been wrong for 15 years with its "overcomplicated" borrow checking. It isn't.
I couldn't figure out the main points, besides the "between Rust & Go" slogan. I've worked both with Rust and Go, and I like Rust more, but there are several pain points:
* macro abuse. E.g. bitshift storage like in C needs a bunch of #[...] derive macros. Clap also uses them too much, because a CLI parameter is more complex than a struct field. IDK what a sane approach to fixing this is, maybe like in Jai, or Zig? No idea.
* Rust's async causes lots of pain and side effects; Golang's channels seem like a better way and don't create colored functions
* Rust lacks Python's generators, which make for very elegant code (although hard to debug). I think if they get implemented, they will have effects like async, where you can't hold a lock across an await statement.
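For what it's worth, the usual stand-in for generators in stable Rust is writing the state machine by hand as an `Iterator` (or building one from a closure with `std::iter::from_fn`). A hypothetical example of what a Python `yield`-based Fibonacci generator turns into:

```rust
// A Python generator like
//   def fibs(): a, b = 0, 1; while True: yield a; a, b = b, a + b
// becomes an explicit state machine in stable Rust:
struct Fibs {
    a: u64,
    b: u64,
}

impl Iterator for Fibs {
    type Item = u64;

    fn next(&mut self) -> Option<u64> {
        let out = self.a;
        let next = self.a + self.b;
        self.a = self.b;
        self.b = next;
        Some(out) // an infinite iterator; callers bound it with .take()
    }
}

fn main() {
    let first: Vec<u64> = Fibs { a: 0, b: 1 }.take(6).collect();
    assert_eq!(first, vec![0, 1, 1, 2, 3, 5]);
}
```

It works, but the control flow that `yield` would express directly has to be flattened into struct fields, which is the ergonomic gap the comment above is pointing at.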
Zig's way is just do things in the middle and be verbose. Sadly, its ecosystem is still small.
I'd like to see something attacking these problems.
I've been having fun with Gleam. I'm not really sure where it falls on the spectrum though. It is garbage collected, so it's less abrasive than Rust in that sense. But it's pure functional which is maybe another kind of unfriendly.
i see a lot of go hatred on HN but coming from c i actually kind of love go when i need just enough abstracted away from me to focus on doing a thing efficiently and still end up with a well-enough performing binary. i have always been obsessed with the possibility of building something that doesn't need me to install runtimes on the target i want to run it, it's just something that makes me happy. very rarely do i need to go lower than what go provides and when i do i just.. dip into c where i earned a lot of my stripes over the years.
rust is cool. a lot of really cool software im finding these days is written in rust these days & i know im missing some kind of proverbial boat here. but rusts syntax breaks my brain and makes it eject completely. it's just enough to feel like it requires paradigm shifts for me, and while others are really good at hopping between many languages it's just a massive weakness of mine. i just cant quite figure out the ergonomics of rust so that it feels comfy, my brain seems to process everything through a c-lens and this is just a flaw of mine that makes me weak in software.
golang was started by some really notable brains who had lots of time in the game and a lot of well thought out philosophies of what could be done differently and why they should do it differently coming from c. there was almost a socio-economic reason for the creation of go - provide a lang that people could easily get going in and become marketable contributors that would help their career prospects. and i think it meets that mark, i was able to get my jr engineers having fun in golang in no time at all & that's panned out to be a huge capability we added to what our team can offer.
i like the objective of rue here. reviewing the specification it actually looks like something my brain doesn't have any qualms with. but i dont know what takes a language from a proposal by one guy and amplifies it into something thats widely used with a great ecosystem. other minds joining to contribute & flesh out standard libraries, foundations backing, lots of evangelism. lots of time. i won't write any of those possibilities off right now, hopefully if it does something right here there's a bright future for it. sometimes convincing people to try a new stack is like asking them to cede their windows operating system and try out linux or mac. we've watched a lot of languages come and go, we watch a lot of languages still try to punch thru their ceilings of general acceptance. unlike some i dont really have huge tribalistic convictions of winners in software, i like having options. i think it's pretty damn neat that folks are using their experiences with other languages to come up with strong-enough opinions of how a language should look and behave and then.. going out and building it.
Yes, I started off with the idea that Rue's syntax would be a strict subset of Rust's.
I may eventually diverge from this, but I like Rust's syntax overall, and I don't want to bikeshed syntax right now, I want to work on semantics + compiler internals. The core syntax of Rust is good enough right now.
I've thought a Rust-like language at Go's performance level would be interesting. Garbage collected, but compiled to a binary (no VM), with Rust's mix of procedural and functional programming. Maybe some more capable type inference.
If you don't mind me asking, how did you get started with programming language design? I've been reading Crafting Interpreters, but there is clearly a lot of theory that is being left out there.
Mostly just… using a lot of them. Trying as many as I could. Learning what perspectives they bring. Learning the names for their features, and how they fit together or come into tension.
The theory is great too, but starting off with just getting a wide overview of the practice is a great way to get situated and decide which rabbit holes you want to go down first.
Well I got that part covered at least. Seems like I'm constantly getting bored and playing around with a different language, probably more than I should lol
Okay, right now it's basically Pascal as it was described in the Revised Report, only even more restricted. Which is... fine, I guess; you can still write a whole OS with something like that (without using pointers/addresses), as Per Brinch Hansen demonstrated, but it's... an acquired taste.
Are the actual references/pointers coming in the future?
So, one reason is "I just want to learn more about buck2."
But, for the first iteration of Rue, I maintained both. However, for a language project, there's one reason Cargo isn't sufficient now, and one reason why it may not later: the first one is https://github.com/rue-language/rue/blob/trunk/crates/rue-co... : I need to make sure that, no matter what configuration I build the compiler in, I build a staticlib for the runtime. With Cargo, I couldn't figure out how to do this. In test mode, it would still try to build it as a dylib.
Later, well, the reason that rustc has to layer a build system on top of Cargo: bootstrapping. I'm not sure if Rue will ever be bootstrapped, but rustc uses x.py for this. Buck does it a lot nicer, IMHO https://github.com/dtolnay/buck2-rustc-bootstrap
That’s part of the reason for the name! “Rust” also has negative interpretations as well. A “rue” is also a kind of flower, and a “rust” is a kind of fungus.
Fair enough! I do like how others are framing this as "write less code" -- if Rue makes one think more about the code that finally makes it to production, that can be a real win.
The positioning is interesting - claiming Rust's performance with Go's simplicity is basically every new systems language's promise since 2015. The key differentiator seems to be "zero-cost exceptions" which I assume means compile-time Result types without runtime unwinding overhead? That's compelling if true, since Rust's Result ergonomics can get verbose in deeply nested error chains.
But the real test is compile times and cognitive overhead. Rust's borrow checker is theoretically elegant but practically brutal when you're learning or debugging. If Rue can achieve memory safety without lifetime annotations everywhere, that's genuinely valuable. However, I'm skeptical - you can't eliminate tradeoffs, only move them around. If there's no borrow checker, what prevents use-after-free? If there's garbage collection, why claim "lower level than Go"?
The other critical factor is ecosystem maturity. Rust's pain is partially justified by its incredible crate ecosystem - tokio, serde, axum, etc. A new language needs either (1) seamless C FFI to bootstrap libraries, (2) a killer feature so valuable that people rewrite everything, or (3) 5+ years for the ecosystem to develop. Which path is Rue taking?
I'd love to see real-world benchmarks on: compile time for a 50k line project, memory usage of a long-running web server compared to Rust/Go, and cold start latency for CLI tools. Those metrics matter more than theoretical performance claims. The "fun to write" claim is subjective but important - if it's genuinely more ergonomic than Rust without sacrificing performance, that could attract the "Python developers wanting systems programming" demographic.
Your style of commenting is pretty full of LLM tells fyi. Normally don’t comment on it but this is the second such comment of yours I have read in a few minutes.
e: I would be curious of the thoughts of those downvoting as personally I don’t think mostly LLM written comments are a direction we want to move towards on HN.
Rather than downvoting you, I will speak up to say I don't see what you're seeing. Spaces around hyphens, yeah, sure, but LLMs prefer em dashes, and even that is unreliable, because it's borrowed from habits that real humans have had for many years.
For me, the more important indicator is the content. I see reports of personal experience, and thoughts that are not completely explained (because the reader is expected to draw the rest of the owl). I don't see smugly over-the-top piles of adjectives filling in for an inability to make critiques of any substance. I don't see wacky asides amounting to argumentum ad lapidem, accomplishing nothing beyond insulting readers who disagree with a baseless assertion.
I think it's likely you have drawn a false positive.
It saddens me a bit that this can't be distinguished by people on here. I encourage you to take a look at their profile and see if you are still as skeptical. Noticing em-dashes is facile and as you mention, common among human written text - but there are more subtle stylistic cues (although now that you mention it, this writer likely went out of their way to replace emdashes with hyphens).
I was raised in a family of professional writer-editors (but now am the tech-y black sheep) which might make the cues a bit more obvious to me. The degree to which this style of writing was common prior to 2022 is vastly overstated, the tells were actually not really that common.
A) you cannot tell
B) you have said nothing productive toward discussion, you’ve just accused someone of using a tool (that you don’t know if they used)
I’d prefer actual criticism of the content. (I cannot downvote and would not if I could)
I am certain that they used a tool. As I said, I normally do not complain and typically engage on the merits -- but these have been among the top comments on every front page article I've read today and it gets tiresome! To me, if you cannot invest enough effort to remove the pretty obvious cues, why am I investing the effort in reading the comment?
After seeing your reply, I looked at their comment history which makes it even more obvious imo.
that is fair - you're claiming this person has a pattern of lazy, low-effort comments. I didn't check, and if you're right, I appreciate you calling it out
just as you’re annoyed by low-effort LLM posts/comments, I’m annoyed by low-effort “this sounds like it was written by ChatGPT” comments (hence my response and at least a possible explanation of downvotes)
edit: I also scrolled through, you’re absolutely right! it does look like a low-effort bot
I have mostly been writing Rust in the last 10 years, but recently (1 year) I have been writing Go as well as Rust.
The typical Go story is to use a bunch of auto generation, so a small change quickly blows up as all of the auto-generated code is checked into git. Like easily a 20x blowup.
Rust on the other hand probably does much more such code generation (build.rs for stuff like bindgen, macros for stuff like serde, and monomorphized generics for basically everything). But all of this code is never checked into git (with the exception of some build.rs tools which can be configured to run as commands as well), or at least 99% of the time it's not.
This difference has an impact on the developer story. In Go land, you need to manually invoke the auto generator, and it's easy to forget until CI reminds you. The auto generator is usually quite slow, and probably has much less caching smartness than the Rust people have figured out.
In Rust land, the auto generation can, worst case, run at every build; best case, the many cache systems take care of it (cargo level, rustc level). But still, everyone who does a git pull has to re-run this, while with checked-in auto generation, theoretically only the folks who actually made changes to the generated code need to run it; everyone else gets it via git pull.
So in Go, your IDE is ready to go immediately after git pull and doesn't have to compile a tree of hundreds of dependencies. Go IDEs and compilers are so fast, it's almost like cheating from Rust POV. Rust IDEs are not as fast at all even if everything is cached, and in the worst case you have to wait a long long time.
On the other hand, these auto generation tools in Go are only somewhat standardized, you don't have a central tool that takes care of things (or at least I'm not aware of it). In Rust land, cargo creates some level of standardization.
You can always look at the auto generated Go code and understand it, while Rust's auto generated code usually is not IDE inspectable and needs special tools for access (except for the build.rs generated stuff which is usually put inside the target directory).
I wonder how a language that is designed from scratch would approach auto generation.
> On the other hand, these auto generation tools in Go are only somewhat standardized, you don't have a central tool that takes care of things (or at least I'm not aware of it).
Yeah, this is a hard problem, and you're right that both have upsides and downsides. Metaprogramming isn't easy!
I know I don't want to have macros if I can avoid them, but I also don't foresee making code generation a la Go a first-class thing. I'll figure it out.
> The typical Go story is to use a bunch of auto generation, so a small change quickly blows up as all of the auto generate code is checked into git. Like easily a 20x blowup.
Why do you think the typical Go story is to use a bunch of auto generation? This does not match my experience with the language at all. Most Go projects I've worked on, or looked at, have used little or no code generation.
I'm sure there are projects out there with a "bunch" of it, but I don't think they are "typical".
Same here. I've worked on one project that used code generation to implement a DSL, but that would have been the same in any implementation language; it was basically transpiling. And protobufs, of course, but again, that's true in all languages.
The only thing I can think of that Go uses a lot of generation for that other languages have other solutions for is mocks. But in many languages the solution is "write the mocks by hand", so that's hardly fair.
Me neither. My go code doesn't have any auto-generation. IMO it should be used sparingly, in cases where you need a practically different language for expressivity and correctness, such as a parser-generator.
Anything and everything related to Kubernetes in Go uses code generation. It is overwhelmingly "typical" to the point of extreme eye-rolling when you need to issue "make generate" three dozen times a day for any medium sized PR that deals with k8s types.
When Go was launched, it was said it was built specifically for building network services. More often than not that means using protobuf, and as such protobuf generated code ends up being a significant part of your application. You'd have that problem in any language, theoretically, due to the design of protobuf's ecosystem.
Difference is that other languages are built for things other than network services, so protobuf is much less likely to be a necessary dependency for their codebases.
What I've found over the years is that protobuf is actually not that widespread. And given that, if you ignore the gogoprotobuf package, it generates structs that are terrible for Go's GC (pointers for every field), so it hasn't been terribly popular in the Go community either, despite both originating at Google
The "just generate Go code automatically, then check it in" approach is a massive wart of the language, and it makes perfect sense because that pathological pattern is central to how google3 works.
A ton of google3 is generated, like output from javascript compilers, protobuf serialization/deserialization code, python/C++ wrappers, etc.
So its an established Google standard, which has tons of help from their CI/CD systems.
For everyone else, keeping checked-in auto-generated code is a continuous toil and maintenance burden. The Google go developers don't see it that way of course, because they are biased due to their google3 experience. Ditto monorepos. Ditto centralized package authorities for even private modules (my least fave feature of Go).
> For everyone else, keeping checked-in auto-generated code is a continuous toil and maintenance burden. The Google go developers don't see it that way of course, because they are biased due to their google3 experience.
The golang/go repo itself has various checked-in generated code
I write a lot of go. I tried to write a lot of rust but fell into lifetime traps. I really want to leave C++ but I just can’t without something that’s also object oriented.
Not a dig at functional, it’s just my big codebases are logically defined as objects and systems that don’t lend itself to just being a struct or an interface.
Inheritance is why I’m stuck in C++ land.
I would love to have something like rust but that supports classes, virtual methods, etc. but I guess I’ll keep waiting.
In Rust you can have structs with any number of methods defined on them, which is functionally not that different from a class. You get interface-like behavior with traits, and you get encapsulation with private/public data and methods.
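A minimal sketch of that correspondence (hypothetical names):

```rust
// A struct with inherent methods plays the role of a class;
// a trait plays the role of an interface.
trait Speak {
    fn speak(&self) -> String;
}

struct Dog {
    name: String, // private outside this module: encapsulation
}

impl Dog {
    // "constructor" plus any number of inherent methods
    fn new(name: &str) -> Self {
        Dog { name: name.to_string() }
    }
}

impl Speak for Dog {
    fn speak(&self) -> String {
        format!("{} says woof", self.name)
    }
}

fn main() {
    let d = Dog::new("Rex");
    assert_eq!(d.speak(), "Rex says woof");
}
```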
Yes it does. Unless I can attach a trait to a struct without having to define all the methods of that trait for that struct. This is my issue with interfaces and go. I can totally separate out objects as interfaces but then I have to implement each implementation’s interface methods and it’s a serious chore when they’re always the same.
For example: Playable could be a trait that plays a sound when you interact with it. I would need to implement func interact for each object. Piano, jukebox, doorbell, etc. With inheritance, I write it once, add it to my class, and now all instances of that object have interact. Can I add instance variables to a trait?
This saves me time and keeps Claude out of my code. Otherwise I ask Claude to implement them all, modify them all, to try to keep them all logically the same.
I also don’t want to go type soup in order to abstract this into something workable.
You can use a struct that the other structs have as a field. The trait can then operate on that struct.
I'm not trying to convince you to use Rust. If you prefer C++ have at it. I was just trying to point out that most patterns in C++ have a fairly close analogy in Rust, just with different tradeoffs.
Yeah, Go has embedded structs. It's ugly: it allows one to address the fields on the parent, and it exposes the embedded struct (with the same fields), so it's kind of a head-scratcher.
To be honest, it’s been 3 years since I looked at rust and I might try again. I still prefer inheritance because some things just are-a thing. I also love ECS and components and see traits as that. I just wish I could store local state in those.
You can store state in the struct and then define a method in the trait to return that struct. Then all your default methods can use the "getter" to access the struct and its state. The only thing you have to do is embed the struct and implement that one "getter" method to return it. I don't think it's much more boilerplate than utilizing inheritance.
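A sketch of that pattern using the `Playable` example from upthread (names are illustrative):

```rust
// Shared state lives in a struct; shared behavior lives in a trait's
// default method. Each implementor only supplies the one "getter".
struct Sound {
    clip: &'static str,
}

trait Playable {
    fn sound(&self) -> &Sound; // the single method each type must implement

    // Written once, inherited by every implementor, like a base-class method.
    fn interact(&self) -> String {
        format!("playing {}", self.sound().clip)
    }
}

struct Piano { sound: Sound }
struct Jukebox { sound: Sound }

impl Playable for Piano {
    fn sound(&self) -> &Sound { &self.sound }
}
impl Playable for Jukebox {
    fn sound(&self) -> &Sound { &self.sound }
}

fn main() {
    let p = Piano { sound: Sound { clip: "piano.wav" } };
    let j = Jukebox { sound: Sound { clip: "juke.wav" } };
    assert_eq!(p.interact(), "playing piano.wav");
    assert_eq!(j.interact(), "playing juke.wav");
}
```

The per-type boilerplate is the one-line getter; `interact` itself is only written once.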
Fyrox, a game engine written in Rust, uses an ECS and several object-oriented patterns in its design. Might be a good reference if you're interested. The Rust book also has a section on OOP patterns in Rust.
I think it's Fyrox anyway. I remember the creator of a Rust game engine talking about it in an interview on Developer Voices. It could have been Bevy I guess, but I don't think so.
Yeah that’s what I did and it’s ugly. It works, and allows me to attach multiple behaviors but I would have to initialize them and write that boilerplate code to return them.
I think I might be able to do it with a macro but I’m not a rust guy so I’m limited by my knowledge.
I respect your preferences, but I am unlikely to add this sort of OOP. Ideally there'll be no subtyping at all in Rue. So you'll have to keep waiting, I'm afraid. Thanks for checking it out regardless!
As a long time C++ user, I’m curious why you like inheritance and virtual methods so much.
I maintain a medium sized, old-ish C++ code base. It uses classes and inheritance and virtual methods and even some multiple inheritance. I despise this stuff. Single Inheritance is great until you discover that you have a thing that doesn’t slot nicely into the hierarchy or when you realize that you want to decompose an interface (cough, base class) into a couple of non-hierarchically related things. Multiple inheritance is an absolute mess unless you strictly use base classes with pure virtual methods and no member variables. And forcing everything into an “is a” relationship instead of a “has a” relationship can be messy sometimes.
I often wish C++ had traits / or Haskell style type classes.
Protected and private inheritance are C++'s equivalent to traits, and they don't suffer from the usual issues of multiple inheritance. As for type classes, check out concepts. By no means am I trying to sell C++, I don't touch it myself, but it doesn't leave you completely adrift in those seas.
Anything out there for reference or would you be implementing from theory/ideas here? God speed to you in terms of the project overall, it's exciting to see the beginnings of a rust-like-lang without the headaches!
I am very interested in Hylo! I think they're playing in similar spaces. I'd like to explore mutable value semantics for Rue.
One huge difference is that Hylo is using LLVM, whereas I'm implementing my own backends. Another is that Hylo seems to know what they want to do with concurrency, whereas I really do not at all right now.
I think Hylo takes a lot of inspiration from Swift, whereas I take more inspiration from Rust. Swift and Rust are already very similar. So maybe Hylo and Rue will end up like this: sister languages. Or maybe they'll end up differently. I'm not sure! I'm just playing around right now.
Both are playing around in similar spaces, both shared team members for a while, both have a take on automatic memory management that isn't a garbage collector, and both were sponsored by a primary company for a while (Swift still is, I think). There are a lot of differences too.
I landed non-generic enums this evening. I'm not 100% sure what abstraction paths I want to go down. But I see sum types as just as important as product types, for sure.
Interesting; for me, the "between Rust and Go" spot would be a nice fit for Swift or Zig. I've always quite liked the language design of Swift; it's a shame that it didn't really take off that much
I think with Swift 6 Apple really took it in a wrong direction. Even coding agents can’t wrap their mind around some of the “safety” features (not to mention the now bloated syntax). If anything, Swift would go down as a “good example why language design shouldn’t happen by committee in yearly iterations”.
AFAIK there’s still holes like reflection and you have some work, but if that’s changed that’s really good. I suspect it’ll be hard for C# to escape the stench of “enterprise” though.
I'm looking forward to seeing how it shakes out over the next few years. Especially once they release union types.
FWIW JIT is rarely an issue, and it enables strong optimizations not available in AOT (AOT has its own, but JIT is overall much better for throughput). RyuJIT can do the same speculative optimizations OpenJDK HotSpot does, except the language has fewer abstractions, which are cheaper, plus access to low-level programming, which gives it a much different performance profile.
NativeAOT's primary goal is reducing memory footprint, binary size, making "run many methods once or rarely" much faster (CLI and GUI applications, serverless functions) and also shipping to targets where JIT is not allowed or undesirable. It can also be used to ship native dynamically or statically (the latter is tricky) linked libraries.
I wince every time I see naive recursive fibonacci as a code example. It is a major turnoff because it hints at a lack of experience with tail call optimization, which I consider a must have for a serious language.
Would someone please explain to me why TCO—seemingly alone amongst the gajillions of optimization passes performed by modern compilers—is so singularly important to some people?
For people that like functional style and using recursion for everything, TCO is a must. Otherwise there’s no way around imperative loops if you want decent performance and not having to worry about the stack limit.
Perhaps calling it an “optimization” is misleading. Certainly it makes code faster, but more importantly it’s syntax sugar to translate recursion into loops.
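For example, the accumulator-passing form of Fibonacci ends in a tail call, so a TCO-ing compiler can emit the same code as the explicit loop below. (Rust itself does not guarantee this today, hence writing both forms.)

```rust
// Naive recursion grows the stack with n; the tail-recursive form carries
// accumulators, so the recursive call is the last thing the function does.
// A compiler with guaranteed TCO turns fib_rec into fib_loop.
fn fib_rec(n: u64, a: u64, b: u64) -> u64 {
    if n == 0 { a } else { fib_rec(n - 1, b, a + b) } // tail position
}

// The desugared loop a TCO pass would effectively produce:
fn fib_loop(mut n: u64, mut a: u64, mut b: u64) -> u64 {
    while n > 0 {
        let next = a + b;
        a = b;
        b = next;
        n -= 1;
    }
    a
}

fn main() {
    assert_eq!(fib_rec(10, 0, 1), 55);
    assert_eq!(fib_loop(10, 0, 1), 55);
}
```

Without the guarantee, the recursive version risks overflowing the stack for large `n`, which is why people treat guaranteed TCO as a semantic feature rather than a mere optimization.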
You don't need full fledged TCO for that; see Clojure's recur for an example. Zig recently added something similar but strongly typed with match/continue. These all map exactly to a closed set of mutually recursive functions with a single entry point, which is quite sufficient (and then some) to fully replace iterative loops while still desugaring to the same exact code.
Indeed there are more explicit versions of such mechanisms, which I prefer, otherwise there’s always a bit of paranoia about recursion without assurance that the compiler will handle it properly.
TCO is less of an optimization (which are typically best-effort on the part of the compiler) and more of an actual semantic change that expands the set of valid programs. It's like a new control flow construct that lives alongside `while` loops.
When you have recursive data structures, it's nice when the algorithms have the same shape. TCO is also handy when you're writing fancy control flow operations and implement them with continuation-passing style.
It virtue-signals that they're part of the hip functional crowd.
(To be fair, if you are programming functionally, it is essential. But to flat-out state that a language that doesn't support it isn't "serious" is a bit rude, at best.)
I only have basic constant folding so far in terms of optimizations, but I'm very aware of TCO. I haven't decided if I want to require an annotation to guarantee it, like Rust is going to.
Please require some form of annotation like an explicit `tailcall` operator or something similar. TCO wreaks havoc on actionable backtraces, so it should be opt-in rather than opt-out.
This is a bit silly, but when I look at new languages coming up I always look at the syntax, which is usually horrible (Zig and Rust are good examples), and how much garbage there is. As someone that writes Go, I can't stand semicolons and other crap that just pollutes the code and wastes time and space to write for absolutely no good reason whatsoever. And as this compares itself with Go, I just cannot but laugh when I see ";", "->" or ":" in the examples. At least the semicolon seems optional. But still, it's an instant nope for me.
The (n: i32) can be just (n i32), because there is no benefit to adding the colon there.
The -> i32 can also be just i32 because, again, the -> serves no purpose in function/method definition syntax.
So you end up with simple and clean
fn fib(n i32) i32 {}
And semicolons are an ancient relic that has been passed on to new languages for 80 fucking years without any good reason. We have modern lexers/tokenizers and compilers that can handle it if you don't put a stupid ; at the end of every single effing line.
Just go and count how many of these useless characters are in your codebase and imagine how many keystrokes, compilation errors and wasted time it cost you, whilst providing zero value in return.
Foo<T<string, T2>> -> (bool -> IDictionary<string, T3> -> i32 -> T3) where T2 : T3
even if you leave out the latter type constraint, I think it is hard to avoid undecidable ambiguity.
fn foo(n i32, m T2) (????) {}
You quickly get ambiguity due to type parameters / generics, functions as arguments, and tuples if you don't syntactically separate them.
Even if your context-dependent parser can recognize it, does the user? I agree that a language designer should minimize the amount of muscle strain, but he shouldn't forget that readability is perhaps even more critical.
____
1. Note, even if the parser can recognize this, for humans the '>' is confusing unless syntax highlighting takes care of it. One time it delimits a generic type argument, the other time it is part of '->'. This is also an argument for rendering these things as ligatures.
> The (n: i32) can be just (n i32), because there is no benefit to adding the colon there.
> The -> i32 can also be just i32 because, again, the -> serves no purpose in function/method definition syntax.
Well, there is, but it's more of a personal trait than a universal truth. Some human programmers (e.g. me) tend to read and parse (and even write, to some extent) source code more accurately when there is a sprinkle of punctuation thrown into a long chain of nothing but identifiers and subtly nested parentheses. Some, e.g. you, don't need such assistance and find it annoying and frivolous.
Unfortunately, since we don't store the source code of our programs as binary AST blobs that could be rendered in a personalized manner, but as plain text instead, we have to accept the language designer's choices. Perhaps it actually has better consequences than the alternative; perhaps not.
oh, yeah. that looks good. i always hated using the ", " delimiter for lists and the amount of typos it always takes to make it clean (well, not with Go fmt).
Odin seems interesting but for me it has a couple of deal-breakers: the first one is the use of ^ for pointer de/reference. Not that it does not make sense, it's just that it is not an easy key to get to on my keyboard layout and I will not be changing that. The & and * are well known characters for this purpose and, at least for me, easily accessible on the keyboard. Second issue is the need to download many gigabytes of Visual Studio nonsense just so I am able to compile a program. Coming from Go, this is just a non-starter. Thirdly, and this is more about the type of work I do than the language, there are/were no db drivers, no http/s stack and other things I would need for my daily work. Other than that, Odin is interesting. Though I am not sure how I would fare without OOP after so many years with inheritance OOP and encapsulated OOP.
It's a Lisp thing, obviously, but also there's a benefit to explicit delimiters - it makes it possible to have an expression as an element without wrapping that in its own brackets, as S-exprs require.
"Memory Safe No garbage collector, no manual memory management. A work in progress, though."
I wish them the best, but until they have a better story here I'm not particularly interested.
Much of the complexity in Rust vs simplicity in Go really does come down to this part of the design space.
Rust has only succeeded in making a Memory Safe Language without garbage collection via significant complexity (that was a trade-off). No one really knows a sane way to do it otherwise, unless you also want to drop the general-purpose systems programming language requirement.
I'll be Very Interested if they find a new unexplored point in the design space, but at the moment I remain skeptical.
Folks like to mention Ada. In my understanding, Ada is not memory safe by contemporary definitions. So, this requires relaxing the definition. Zig goes in this direction: "let's make it as safe as possible without being an absolutist"
If you look at the GitHub repo, there's a design proposal (under docs/design) for that.
It looks like the idea at the present time is to have four modes: value types, affine types, linear types, and rc types. Instead of borrowing, you have an inout parameter passing convention, like Swift. Struct fields cannot be inout, so you can't store borrowed references on the heap.
I'm very interested in seeing how this works in practice--especially given who is developing Rue. It seems like Rust spends a lot of work enabling the borrow checker to be quite general for C/C++-like usage. E.g. you can store a borrowed reference to a struct on the stack into the heap if you use lifetime annotations to make clear the heap object does not outlive the stack frame. On the other hand it seems like a lot of the pain points with Rust in practice are not the lifetime annotations, but borrowing different parts of the same object, or multiple borrows in functions further down the call stack, etc.
Not being able to store mutable ref in other type reduces expressiveness. The doc already mentions it cannot allow Iterator that doesn't consume container
https://github.com/rue-language/rue/blob/trunk/docs/designs/...
No silver bullet again
Just to be clear, these proposals are basically scratch notes I have barely even validated, I just wanted to be able to iterate on some text.
But yes, there is going to inherently be some expressiveness loss. There is no silver bullet, that's right. The idea is, for some users, they may be okay with that loss to gain other things.
Yeah, that stuff is very much a sketch of the area I want to play in. It’s not final syntax nor semantics just yet. Gotta implement it and play around with it first (I have some naming tweaks I definitely want to implement separate from those ADRs.)
I don’t struggle with lifetimes either, but I do think there’s a lot of folks who just never want to think about it ever.
I always thought of Go as low level and Rust as high level. Go has a lot of verbosity as a "better C" with GC. Rust has low level control but many functional-inspired abstractions. Just try writing iteration or error handling in either one to see.
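A minimal sketch of the Go side of that error-handling comparison (the function and its name are invented for illustration):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// Go's explicit, repeated error handling: every fallible step
// gets its own if err != nil block.
func parseAndDouble(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parsing %q: %w", s, err)
	}
	return n * 2, nil
}

func main() {
	v, err := parseAndDouble("21")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(v) // prints 42
}
```

The Rust equivalent would collapse the checks into `?` propagation; whether the Go version's explicitness reads as "low level" or just verbose is largely the debate in this subthread.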
I wonder if it's useful to think of this as go is low type-system-complexity and rust is high type-system-complexity. Where type system complexity entails a tradeoff between the complexity of the language and how powerful the language is in allowing you to define abstractions.
As an independent axis from close-to-the-underlying-machine/far-from-the-underlying-machine (whether virtual like wasm or real like the System V x86_64 ABI), which describes how closely the language lets you interact with the environment it runs in, or how much it abstracts that environment away in order to provide abstractions.
Rust lives in high type system complexity and close to the underlying machine environment. Go is low type system complexity and (relative to rust) far from the underlying machine.
I think this is insightful! I'm going to ponder it, thank you. I think it may gesture towards what I'm trying to get at.
> Where type system complexity entails a tradeoff between the complexity of the language and how powerful the language is in allowing you to define abstractions.
I don't think that's right. The level of abstraction is the number of implementations that are accepted for a particular interface (which includes not only the contract of the interface expressed in the type system, but also informally in the documentation). E.g. "round" is a higher abstraction than "red and round" because the set of round things is larger than the set of red and round things. It is often untyped languages that offer the highest level of abstraction, while a sophisticated type system narrows abstraction (it reduces the number of accepted implementations of an interface). That's not to say that higher abstraction is always better - although it does have practical consequences, explained in the next paragraph - but the word "abstraction" does mean something specific, certainly more specific than "describing things".
The level of abstraction is felt in how many changes to client code (the user of an interface) are required when making a change to the implementation. Languages that are "closer to the underlying machine" - especially as far as memory management goes - generally have lower abstraction than languages that are less explicit about memory management. A local change to how a subroutine manages memory typically requires more changes to the client - i.e. the language offers a lower abstraction - in a language that's "closer to the metal", whether the language has a rich type system like Rust or a simpler type system like C, than in a language that is farther away.
The way I understood the bit you quoted was not as a claim that more complex type system = higher abstraction level, but as a claim that a more complex type system = more options for defining/encoding interface contracts using that language. I took their comment as suggesting an alternative to the typical higher/lower-level comparison, not as an elaboration.
As a more concrete example, the way I interpreted GP's comment is that a language that is unable to natively express/encode a tagged union/sum type/etc. in its type system would fall on the "less complex/less power to define abstractions" side of the proposed spectrum, whereas a language that is capable of such a thing would fall on the other side.
> which includes not only the contract of the interface expressed in the type system, but also informally in the documentation
I also feel like including informal documentation here kind of defeats the purpose of the axis GP proposes? If the desire is to compare languages based on what they can express, then allowing informal documentation to be included in the comparison renders all languages equally expressive since anything that can't be expressed in the language proper can simply be outsourced to prose.
Yep. This was the biggest thing that turned me off Go. I ported the same little program (some text based operational transform code) to a bunch of languages - JS (+ typescript), C, rust, Go, python, etc. Then compared the experience. How were they to use? How long did the programs end up being? How fast did they run?
I did C and typescript first. At the time, my C implementation ran about 20x faster than typescript. But the typescript code was only 2/3rds as many lines and much easier to code up. (JS & TS have gotten much faster since then thanks to improvements in V8).
Rust was the best of all worlds - the code was small, simple and easy to code up like typescript. And it ran just as fast as C. Go was the worst - it was annoying to program (due to a lack of enums). It was horribly verbose. And it still ran slower than rust and C at runtime.
I understand why Go exists. But I can't think of any reason I'd ever use it.
Rust gets harder with codebase size because of the borrow checker. Not to mention most of the communication libraries decided to be async only, which adds another layer of complexity.
I strongly disagree with this take. The borrow checker, and rust in general, keeps reasoning extremely local. It's one of the languages where I've found that difficulty grows the least with codebase size, not the most.
The borrow checker does make some tasks more complex, without a doubt, because it makes it difficult to express something that might be natural in other languages (things including self referential data structures, for instance). But the extra complexity is generally well scoped to one small component that runs into a constraint, not to the project at large. You work around the constraint locally, and you end up with a public (to the component) API which is as well defined and as clean (and often better defined and cleaner because rust forces you to do so).
I work in a 400k+ LOC codebase in Rust for my day job. Besides compile times being suboptimal, Rust makes working in a large codebase a breeze with good tooling and strong typechecking.
I almost never even think about the borrow checker. If you have a long-lived shared reference you just Arc it. If it's a circular ownership structure like a graph you use a SlotMap. It by no means is any harder for this codebase than for small ones.
Disagree, having dealt with 40k+ LoC Rust projects, the borrow checker is not an issue.
Async is an irritation but not the end of the world ... You can write non asynchronous code I have done it ... Honestly I am coming around on async after years of not liking it... I wish we didn't have function colouring but yeah ... Here we are....
We all know that lines of code is a poor measure of project size, but that said, 40k sloc is not a lot
Funny, I explicitly waited to see async baked in before I even started experimenting with Rust. It's kind of critical to most things I work on. Beyond that, I've found that the async models in rust (along with tokio/axum, etc) have been pretty nice and clean in practice. Though most of my experience is with C# and JS/TS environments, the latter of which had about a decade of growing pains.
This hasn't been my experience at all.
I still regularly use typescript. One problem I run into from time to time is "spooky action at a distance". For example, it's quite common to create some object and store references to it in multiple places. After all, the object won't be changed and it's often more efficient this way. But later, a design change results in me casually mutating that object, forgetting that it's being shared between multiple components. Oops! Now the other part of my code has become invalid in some way. Bugs like this are very annoying to track down.
It's more or less impossible to make this mistake in rust because of how mutability is enforced. The mutability rules are sometimes annoying in the small, but in the large they tend to make your code much easier to reason about.
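The same hazard is easy to reproduce in Go, which also permits freely shared mutable references; a toy sketch with invented names:

```go
package main

import "fmt"

// Hypothetical config object shared between two "components".
type Config struct{ Timeout int }

func main() {
	shared := &Config{Timeout: 30}

	// Both components store a reference to the same object.
	componentA := shared
	componentB := shared

	// Later, a "local" tweak in component A...
	componentA.Timeout = 5

	// ...silently changes what component B sees: the spooky
	// action at a distance described above.
	fmt.Println(componentB.Timeout) // prints 5, not 30
}
```

In Rust the second live reference would either have to be immutable (blocking the write) or the write would require exclusive access, so this pattern fails to compile rather than failing at a distance.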
C has multiple problems like this. I've worked in plenty of codebases which had obscure race conditions due to how we were using threading. Safe rust makes most of these bugs impossible to write in the first place. But the other thing I - and others - run into all the time in C is code that isn't clear about ownership and lifetimes. If your API gives me a reference to some object, how long is that pointer valid for? Even if I now own the object and I'm responsible for freeing it, it's common in C for the object to contain pointers to some other data. So my pointer might be invalid if I hold onto it too long. How long is too long? It's almost never properly specified in the documentation. In C, hell is other people's code.
Rust usually avoids all of these problems. If I call a function which returns an object of type T, I can safely assume the object lasts forever. It cannot be mutated by any other code (since it's mine). And I'm not going to break anything else if I mutate the object myself. These are really nice properties to have when programming at scale.
I wholeheartedly concur based on my experience with Rust (and other languages) over the last ~7 or so years.
> If I call a function which returns an object of type T, I can safely assume the object lasts forever. It cannot be mutated by any other code (since its mine). And I'm not going to break anything else if I mutate the object myself. These are really nice properties to have when programming at scale.
I rarely see this mentioned in the way that you did, and I'll try to paraphrase it in my own way: Rust restricts what you can do as a programmer. One can say it is "less powerful" than C. In exchange for giving up some power, it gives you more information: who owns an object, what other callers can do with that object, the lifetime of that object in relation to other objects. And critically, in safe Rust, these are _guarantees_, which is the essence of real abstraction.
In large and/or complicated codebases, this kind of information is critical in languages without garbage collection, but even when I program in languages with garbage collection, I find myself wanting this information. Who is seeing this object? What do they know about this object, and when? What can they do with it? How is this ownership flowing through the system?
Most languages have little/no language-level notion of these concepts. Most languages only enforce that types line up nominally (or implement some name-identified interface), or the visibility of identifiers (public/private, i.e. "information hiding" in OO parlance). I feel like Rust is one of the first languages on this path of providing real program dataflow information. I'm confident there will be future languages that will further explore providing the programmer with this kind of information, or at least making it possible to answer these kinds of questions easier.
> I rarely see this mentioned in the way that you did, and I'll try to paraphrase it in my own way: Rust restricts what you can do as a programmer. One can say it is "less powerful" than C. In exchange for giving up some power, it gives you more information
Your paraphrasing reminds me a bit of structured vs. unstructured programming (i.e., unrestricted goto). Like to what you said, structured programming is "less powerful" than unrestricted goto, but in return, it's much easier to follow and reason about a program's control flow.
At the risk of simplifying things too much, I think some other things you said make for an interesting way to sum this up - Rust does for "ownership flow"/"dataflow" what structured programming did for control flow.
I really like this analogy. In a sense, C restricts what you can do compared to programming directly in assembly. Like, there's a lot of programs you can write in assembly that you can't write in the same way in C. But those restrictions also constrain all the other code in your program. And that's a wonderful thing, because it makes it much easier to make large, complex programs.
The restrictions seem a bit silly to list out because we take them for granted so much. But its things like:
- When a function is called, execution starts at the top of the function's body.
- Outside of unions, variables can't change their type halfway through a program.
- Whenever a function is called, the parameters are always passed using the system calling convention.
- Functions return to the line right after their call site.
Rust takes this a little bit further, adding more restrictions. Things like "if you have a mutable reference to a variable, there are no immutable references to that variable."
I think it depends on the patterns in place and the actual complexity of the problems in practice. Most of my personal experience in Rust has been a few web services (really love Axum) and it hasn't been significantly worse than C# or JS/TS in my experience. That said, I'll often escape hatch with clone over dealing with (a)rc, just to keep my sanity. I can't say I'm the most eloquent with Rust as I don't have the 3 decades of experience I have with JS or nearly as much with C#.
I will say, that for most of the Rust code that I've read, the vast majority of it has been easy enough to read and understand... more than most other languages/platforms. I've seen some truly horrendous C# and Java projects that don't come close to the simplicity of similar tasks in Rust.
Rust indeed gets harder with codebase size, just like other languages. But claiming it is because of the borrow checker is laughable at best. The borrow checker is what keeps it reasonable, because it limits the scope of how one memory allocation can affect the rest of your code.
If anything, borrow checker makes writing functions harder but combining them easier.
async seems sensible for anything subject to internet latency.
> it was annoying to program (due to a lack of enums)
Typescript also lacks enums. Why wasn't it considered annoying?
I mean, technically it does have an enum keyword that offers what most would consider to be enums, but that keyword behaves exactly the same as what Go offers, which you don't consider to be enums.
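For reference, what Go offers in place of an enum keyword is a named type plus iota constants — a sketch (whether this counts as a "real" enum is exactly the dispute here):

```go
package main

import "fmt"

// Go's closest equivalent to a C-style enum: a named integer
// type plus iota-numbered constants.
type OpKind int

const (
	Skip   OpKind = iota // 0
	Insert               // 1
	Delete               // 2
)

func main() {
	k := Insert
	fmt.Println(k) // prints 1

	// Nothing stops an out-of-range value, which is one reason
	// many don't consider this a "real" (closed) enum:
	k = OpKind(42) // compiles fine
	fmt.Println(k) // prints 42
}
```

Like TypeScript's numeric enums, this names integer values but carries no payload and forms no closed sum type, which is the feature the parent comments are actually missing.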
In typescript I typed my text editing operations like this:
It's trivial to switch based on the type field. And when you do, typescript gives you full type checking for that specific variant. It's not as efficient at runtime as C, but it's very clean code.

Go doesn't have any equivalent to this. Nor does go support tagged unions - which is what I used in C. The most idiomatic approach I could think of in Go was to use interface{} and polymorphism. But that was more verbose (~50% more lines of code) and more error prone. And it's much harder to read - instead of simply branching based on the operation type, I implemented a virtual method for all my different variants and called it. But that spread my logic all over the place.
If I did it again I’d consider just making a struct in go with the superset of all the fields across all my variants. Still ugly, but maybe it would be better than dynamic dispatch? I dunno.
I wish I still had the go code I wrote. The C, rust, swift and typescript variants are kicking around on my github somewhere. If you want a poke at the code, I can find them when I’m at my desk.
They presumably mean tagged unions like `User = Guest | LoggedIn(id, username)`.
That wouldn't explain C, then, which does not have sum types either.
All three languages do have enums (as it is normally defined), though. Go is only the odd one out by using a different keyword. As these programs were told to be written as carbon copies of each other, not to the idioms of each language, it is likely the author didn't take time to understand what features are available. No enum keyword was assumed to mean it doesn't exist at all, I guess.
C has numeric enums and tagged unions, which are sum types without any compile time safety. That’s idiomatic C.
Go doesn’t have any equivalent. How do you do stuff like this in Go, at all?
I’ve been programming for 30+ years. Long enough to know direct translations between languages are rarely beautiful. But I’m not an expert in Go. Maybe there’s some tricks I’m missing?
Here’s the problem, if you want to have a stab at it. The code in question defines a text editing operation as a list of editing components: Insert, Delete and Skip. When applying an editing operation, we start at the start of the document. Skip moves the cursor forward by some specified length. Insert inserts at the current position and delete deletes some number of characters at the position.
Eg:
Then there's a whole bunch of functions which use operations - eg to apply them to a document, to compose them together and to do operational transform.

How would you model this in Go?
> C has numeric enums and tagged unions
C has unions, but they're not tagged. You can roll your own tagged unions, of course, but that's moving beyond it being a feature of the language.
> How would you model this in Go?
I'm committing the same earlier sin by trying to model it from the solution instead of the problem, so the actual best approach might be totally different, but at least in staying somewhat true to your code:
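(The code block that followed here didn't survive; below is a hedged sketch of the interface-plus-type-switch approach the thread describes, with invented names — not the commenter's actual code.)

```go
package main

import "fmt"

// One variant per editing component; the unexported marker method
// plays the role of the sum type's tag.
type Component interface{ isComponent() }

type Skip struct{ N int }
type Insert struct{ Text string }
type Delete struct{ N int }

func (Skip) isComponent()   {}
func (Insert) isComponent() {}
func (Delete) isComponent() {}

// Apply an operation (a list of components) to a document:
// Skip copies, Insert adds, Delete drops characters.
func Apply(doc string, op []Component) string {
	out := ""
	pos := 0
	for _, c := range op {
		switch c := c.(type) {
		case Skip:
			out += doc[pos : pos+c.N]
			pos += c.N
		case Insert:
			out += c.Text
		case Delete:
			pos += c.N
		}
	}
	return out + doc[pos:]
}

func main() {
	op := []Component{Skip{6}, Delete{3}, Insert{"new"}}
	fmt.Println(Apply("hello old world", op)) // prints "hello new world"
}
```

Unlike a tagged union, nothing forces the type switch to be exhaustive — a new variant silently falls through — which is much of the verbosity and error-proneness complaint above.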
> C has unions, but they're not tagged. You can roll your own tagged unions, of course, but that's moving beyond it being a feature of the language.
This feels like a distinction without a real difference. Hand-rolled tagged unions are how lots of problems are approached in real, professional C. And I think they're the right tool here.
> the actual best approach might be totally different, but at least in staying somewhat true to your code: (...)
Thanks for having a stab at it. This is more or less what I ended up with in Go. As I said, I ended up needing about 50% more lines to accomplish the same thing in Go using this approach compared to the equivalent Typescript, rust and swift.
If anyone is curious, here's my C implementation: https://github.com/ottypes/libot
Swift: https://github.com/josephg/libot-swift
Rust: https://github.com/josephg/textot.rs
Typescript: https://github.com/ottypes/text-unicode
I wish I'd kept my Go implementation. I never uploaded it to github because I was unhappy with it, and I accidentally lost it somewhere along the way.
> the actual best approach might be totally different
Maybe. But honestly I doubt it. I think I accidentally chose a problem which happens to be an ideal use case for sum types. You'd probably need a different problem to show Go or C# in their best light.
But ... sum types are really amazing. Once you start using them, everything feels like a sum type. Programming without them feels like programming with one of your hands tied behind your back.
> As I said, I ended up needing about 50% more lines to accomplish the same thing in Go
I'd be using Perl if that bothered me. But there is folly in trying to model from a solution instead of the problem. For example, maybe all you needed was:
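(The snippet here is likewise lost; one plausible "simpler" shape — purely a guess — is a single struct carrying the superset of fields, much as the parent comment itself mused:)

```go
package main

import "fmt"

// A flattened modelling: one struct with the superset of fields,
// discriminated by a Kind field instead of by type.
type Component struct {
	Kind string // "skip", "insert" or "delete"
	N    int    // length, for skip/delete
	Text string // payload, for insert
}

func main() {
	op := []Component{
		{Kind: "skip", N: 5},
		{Kind: "insert", Text: " brave"},
		{Kind: "delete", N: 4},
	}
	fmt.Println(len(op)) // prints 3
}
```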
Or something else entirely. Without fully understanding the exact problem, it is hard to say what the right direction is, even where the direction you chose in another language is the right one for that language. What is certain is that you don't want to write code in language X as if it were language Y. That doesn't work in programming languages, just as it does not work in natural languages. Every language has its own rules and idioms that don't transfer to another. A new language means you realistically have to restart finding the solution from scratch.

> You'd probably need a different problem to show Go or C# in their best light.
That said, my profession sees me involved in working on a set of libraries in various languages, including Go and Typescript, that appear to be an awful lot like your example. And I can say from that experience that the Go version is much more pleasant to work on. It just works.
I'll agree with you all day every day that the Typescript version's types are much more desirable to read. It absolutely does a better job at modelling the domain. No question about it. But you only need to read it once to understand the model. When you have to fight everything else beyond that continually it is of little consolation how beautiful the type definitions are.
You're right, though, it all depends on what you find most important. No two programmers are ever going to ever agree on what to prioritize. You want short code, whereas I don't care. Likewise, you probably don't care about the things I care about. Different opinions is the spice of life, I suppose!
There's a lot of ecosystem behind it that makes sense for moving off of Node.js for specific workloads, but isn't as easily done in Rust.
So it works for those types of employers and employees who need more performance than Node.js, but can't use C for practical reasons, or can't use Rust because specific libraries don't exist as readily supported by comparison.
Rue author here, yeah I'm not the hugest fan of "low level vs high level" framing myself, because there are multiple valid ways of interpreting it. As you yourself demonstrate!
As some of the larger design decisions come into place, I'll find a better way of describing it. Mostly, I am not really trying to compete with C/C++/Rust on speed, but I'm not going to add a GC either. So I'm somewhere in there.
How very humble of you to not mention being one of the primary authors behind the TRPL book. Steve, you're a gem to the world of computing. Always considered you the J. Kenji of the Rust world. Seems like a great project, let's see where it goes!
That is a very kind thing to say, I admire him quite a bit. Thank you!
You couldn't get the rue-lang.org domain? There are rust-lang.org and scala-lang.org, so rue-lang.org sounds better than .dev.
I'd love to see how Rue solves/avoids the problems that Rust's borrow checker tries to solves. You should put it on the 1st page, I think.
Already taken.
I'll put more about that there once it's implemented :)
Wow didn't realise it was you who was the author. I learnt a lot about Rust from your writings.
I'm glad to have helped you :)
> Mostly, I am not really trying to compete with C/C++/Rust on speed, but I'm not going to add a GC either. So I'm somewhere in there.
Out of curiosity, how would you compare the goals of Rue with something like D[0] or one of the ML-based languages such as OCaml[1]?
EDIT:
This is a genuine language design question regarding an imperative/OOP or declarative/FP focus and is relevant to understanding the memory management philosophy expressed[2]:
0 - https://dlang.org/
1 - https://ocaml.org/
2 - https://rue-lang.dev/
Closer to an OCaml than a D, in terms of what I see as an influence. But it's likely to be more imperative/FP than OOP/declarative, even though I know those axes are usually considered to be the way you put them than the way I put them.
> But it's likely to be more imperative/FP than OOP/declarative, even though I know those axes are usually considered to be the way you put them than the way I put them.
Fascinating.
I look forward to seeing where you go with Rue over time.
> because there are multiple valid ways of interpreting it
There are quantitative ways of describing it, at least on a relative level. "High abstraction" means that interfaces have more possible valid implementations (whether or not the constraints are formally described in the language, or informally in the documentation) than "low abstraction": https://news.ycombinator.com/item?id=46354267
Since it's framed as 'in between' Rust and Go, is it trying to target an intersection of both languages' use-cases?
I don't think you'd want to write an operating system in Rue. I may not include an "unsafe" concept, and will probably require a runtime. So that's some areas where Rust will make more sense.
As for Go... I dunno. Go has a strong vision around concurrency, and I just don't have one yet. We'll see.
Do you have plans for handling C FFI without "unsafe"? Will it require some sort of extension module written in C/C++/Rust?
No direct plans. For the immediate future, only the runtime is allowed to call into C.
If this ever becomes a production thing, then I can worry about FFI, and I'll probably just follow what managed languages do here.
FWIW, I really like the way C# has approached this need... most usage is exposed via attribute declarations (DllImport) for P/Invoke. Contrasted with, say, JNI in Java or even the Go syntax. The only thing that might be a significant improvement would be an array/vector of lookup names for the library on the system, given how specific versions are often tagged in Linux vs Windows.
Do you think you'll explore some of the same problem spaces as Rust? Lifetimes and async are both big pain points of Rust for me, so it'd be interesting to see a fresh approach to these problems.
I couldn't see how long-running memory is handled, is it handled similar to Rust?
I'm going to try and avoid lifetimes entirely. They're great in Rust! But I'm going to a higher level spot.
I'm totally unsure about async.
Right now there's no heap memory at all. I'll get there :) Sorta similar to Rust/Swift/Hylo... we'll see!
So if you don't have a garbage collector, and you don't have manual memory management, and you don't have lifetimes... What do you have?
The plan is something like mutable value semantics and linear types. I'm figuring it out :)
Is this a simplified / distilled version of Rust ? Or Subset of Rust with some changes ?
Some of it is like that, but some of it is going to be from other stuff too. I'm figuring it out :)
Simplified as in easier to use, or simplified as in less language features? I'm all for the former, while the latter is also worth considering (but hard to get right, as all the people who consider Go a "primitive" language show)...
Since that seems to be the (frankly BS) slogan that almost entirely makes up the language's landing page, I expect it's really going to hurt the language and/or make it all about useless posturing.
That said, I'm an embedded dev, so the "level" idea is very tangible. And Rust is also very exciting for that reason, and Rue might be as well. I should have a look, though it might not be on track to target bare metal any time soon. :)
I don't mind if a sentence I threw up for a side project "hurts the language" at this stage, this is a project primarily for me.
You should use Rust for embedded, I doubt Rue will ever be good for it.
I think it is precisely why Rust is gold - you can pick the abstraction level you work at. I used it a lot when simulating quantum physics - on one hand, needed to implement low-level numerical operations with custom data structures (to squeeze as much performance as possible), on the other - be able to write and debug it easily.
It is similar to PyTorch (which I also like), where you can add two tensors by hand, or have your whole network as a single nn.Module.
All are high level as long as they don't expose CPU capabilities, even ISO C is high level, unless we count in language extensions that are compiler specific, and any language can have compiler extensions.
C pointers expose CPU capabilities.
You can always emulate functionality on different architectures, though, so where is the practical line even drawn?
C pointers are nothing special, plenty of languages expose pointers, even classical BASIC with PEEK and POKE.
The line is blurred, and it doesn't help that some folks spread the urban myth that C is somehow special, only because they never bothered with either the history of programming languages, or especially the history of systems programming outside Bell Labs.
They're nothing special, but were designed for a particular CPU and expose the details of that CPU. And since we were talking about C specifically, not a bunch of other random languages that may have done similar things...
While most modern CPUs are designed for C and thus share in the same details, if your CPU is of a different design, you have to emulate the behaviour. Which works perfectly fine — but the question remains outstanding: Where does the practical line get drawn? Is 6502 assembler actually a high-level language too? After all, you too can treat it as an abstract machine and emulate its function on any other CPU just the same as you do with C pointers.
Agree with Go being basically C with string support and garbage collection. Which makes it a good language. I think rust feels more like a c++ replacement. Especially syntactically. But each person will say something different. If people can create new languages and there's a need then they will. Not to say it's a good or bad thing but eventually it would be good to level up properly. Maybe AI does that.
C was designed as a high level language and stayed so for decades
> C was designed as a high level language and stayed so for decades
C was designed as a "high level language" relative to the assembly languages available at the time and effectively became a portable version of same in short order. This is quite different to other "high level languages" at the time, such as FORTRAN, COBOL, LISP, etc.
When C was invented, as K&R C, it was hardly lower level than other systems programming languages that predated it, going back to JOVIAL in 1958.
It did not even have compiler intrinsics, a concept introduced by ESPOL in 1961, which allowed programming Burroughs systems without using an external Assembler.
K&R C was high level enough that many of the CPU features people think about nowadays when using compiler extensions, as they are not present in the ISO C standard, had to be written as external Assembly code; the support for inline Assembly came later.
I think we are largely saying the same thing, as described in the introduction of the K&R C book:
0 - https://dn710204.ca.archive.org/0/items/the-c-programming-la...
Go has a GC and Rust doesn't. That alone makes Go higher level.
> Memory Safe
> No garbage collector, no manual memory management. A work in progress, though.
I couldn't find an explanation in the docs or elsewhere how Rue approaches this.
If not GC, is it via:
a) ARC
b) Ownership (ala Rust)
c) some other way?
I am playing around with this! I'm mostly interested in something in the space of linear types + mutable value semantics.
Also working on a language / runtime in this space.
It transpiles to Zig, so you have native access to the entire C library.
It uses affine types (simple ownership -> transfers via GIVE/TAKES), plus MVCC and transactions to safely and scalably handle mutations (like databases, but it scales linearly past 32 cores, where Arc and RwLock fall apart due to cache-line bouncing).
It limits concurrent complexity only to the spot in your code WHERE you want to mutate shared memory concurrently, not your entire codebase.
It's memory and liveness safe (Rust is only memory safe) without a garbage collector.
It's simpler than Go, too, IMO - and more predictable, no GC.
But it's nearly impossible to beat Go at its own game, and it's not zero overhead like Rust - so I'm pessimistic it's in a "sweet spot" that no one will be interested in.
Time will tell.
Neat! Good luck, that sounds very cool. I have no idea what, if anything, I'm going to do about liveness.
can you share the link? sounds like a fascinating language to follow as it develops. good luck with this project!
Could you please explain what this implies in layman's terms? I've read the definition of 'linear type' as a type that must be used exactly once, and by 'mutable value semantics' I assume that, unlike Rust, multiple mutable borrows are allowed?
What's the practical implication of this - how does a Rue program differ from a Rust program? Does your method accept more valid programs than the borrow checker does?
I’m on my phone on a long road trip, so I can’t really give you a good lengthy explanation right now, to be honest.
Mutable value semantics means no references at all, from a certain perspective.
You can sort of think of linear types as RAII where you must explicitly drop. Sorta.
“More programs” isn’t really the right way to think about it. Different semantics, so different programs :)
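One way to picture the "no references at all" framing is take-and-return style in Rust syntax. This is a hypothetical sketch of the idea, not actual Rue code: instead of lending out a `&mut`, an update function consumes the value and hands back the new one, which is roughly what an `inout` parameter convention desugars to.

```rust
#[derive(Debug, PartialEq)]
struct Counter {
    n: u64,
}

// Mutable-value-semantics style: no `&mut Counter` parameter.
// The value goes in by move and comes back out, so no aliasing
// reference to it can exist while the update runs.
fn increment(mut c: Counter) -> Counter {
    c.n += 1;
    c
}

fn main() {
    let c = Counter { n: 0 };
    let c = increment(c); // the old binding is consumed here
    assert_eq!(c.n, 1);
    println!("{:?}", c);
}
```

Linear typing would additionally require that `c` eventually be consumed explicitly rather than silently dropped, which is the "RAII where you must explicitly drop" intuition above.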
You might find one of my late brother's research interests relevant: https://www.cs.princeton.edu/~dpw/papers/space.pdf
Thank you for the link! I'll check it out for sure.
(And sorry to hear about your brother's passing.)
Yeah, that's just one of the papers he was on as a PhD student, but he was really interested in the interaction of linear types and region inferencing as a general resource management framework. That grew into an interest in linear types as part of logical frameworks for modeling concurrency. But then like a lot of people he became disillusioned with academia, went to make some money on Wall Street, then focused on his family after that.
Anyhow, I just thought it might be a good jumping off point for what you're exploring.
Have you explored the ideas behind the Vale language: https://vale.dev/
May be an interesting approach. That language seems very academic and slow moving at the moment though.
I think Vale is interesting, but yeah, they have had some setbacks, in my understanding more to do with the personal life of the author than with the ideas. I need to spend more time with it.
Nice! I see you're one of the contributors (if not the primary one)!
Do you see this as a prototype language, or as something that might evolve into something production grade? What space do you see it fitting into, if so?
You've been such a huge presence in the Rust space. What lessons do you think Rue will take, and where will it depart?
I see compile times as a feature - that's certainly nice to see.
This is a project between me and Claude, so yeah :)
It's a fun project for me right now. I want to just explore compiler writing. I'm not 100% sure where it will lead, and if anyone will care or not where it ends up. But it's primarily for me.
I've described it as "higher than Rust, lower than Go" because I don't want this to be a GC'd language, but I want to focus on ergonomics and compile times. A lot of Rust's design is about being competitive with C and C++, I think by giving up that ultra-performance oriented space, I can make a language that's significantly simpler, but still plenty fast and nice to use.
We'll see.
Love it! I think that's a nice target.
Have fun! :)
So linear type + mutable value would be quite close to Rust, right?
Rust has affine types, not linear. It also doesn't have mutable value semantics, it uses references, lifetimes, and borrowing.
I've never seen any significant difference in linear vs affine types.
To me it just seems like Rust has Linear types, and the compiler just inserts some code to destroy your values for you if you don't do it yourself.
I guess the only difference is that linear types can _force_ you to manually consume a value (not necessarily via drop)? Is that what you are going for?
Affine types are "may use" and linear types are "must use," yeah. That is, linear types are stronger.
See https://faultlore.com/blah/linear-rust/ for a (now pretty old but still pretty relevant, I think) exploration into what linear types would mean for Rust.
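The closest Rust gets to "must use" today is the `#[must_use]` lint, which only warns rather than rejecting the program. A small sketch of the difference (the `Transaction` type here is made up for illustration):

```rust
// Affine: a value may be used at most once; silently dropping it is legal.
// Linear: it must be consumed exactly once. Rust can only approximate
// "must" with a lint.
#[must_use = "call finish() to consume the transaction"]
struct Transaction {
    committed: bool,
}

impl Transaction {
    fn begin() -> Self {
        Transaction { committed: false }
    }

    // Consuming method: takes `self` by value, so the transaction cannot
    // be touched again afterwards. That "at most once" part is the affine
    // half that Rust's move semantics already enforce.
    fn finish(mut self) -> bool {
        self.committed = true;
        self.committed
    }
}

fn main() {
    let tx = Transaction::begin();
    // Forgetting this call would only produce a warning in Rust;
    // a true linear type system would make it a compile error.
    let ok = tx.finish();
    assert!(ok);
}
```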
ARC is GC, chapter 5.
https://gchandbook.org/
Sure, ARC is a form of very specific, constrained garbage collection.
Compile-time, reference-counting GC, not runtime tracing GC. So no background collector, no heap tracing, and no stop-the-world pauses. Very different from the JVM, .Net, or Go.
Reference counting is a GC algorithm from CS point of view, it doesn't matter if it is compile time or runtime.
Additionally, there isn't a single ARC implementation that is 100% compile time, i.e. one where the generated machine code has all occurrences of the RC machinery removed.
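The runtime machinery is easy to observe even in plain Rust with non-atomic `Rc`: the reference count lives next to the data and is adjusted on every clone and drop at runtime, not at compile time.

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    assert_eq!(Rc::strong_count(&a), 1);

    // Cloning does not copy the vector; it bumps a counter at runtime.
    let b = Rc::clone(&a);
    assert_eq!(Rc::strong_count(&a), 2);

    // Dropping decrements the counter; the allocation is freed
    // only when the count reaches zero.
    drop(b);
    assert_eq!(Rc::strong_count(&a), 1);
}
```

Swift's ARC differs in that the compiler inserts (and elides some of) these retain/release operations automatically, but the counting that survives optimization still happens at runtime, which is the point being made above.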
Check out V-lang ... it has the details. It's a beautiful language... but, mostly unknown.
> Check out V-lang ... it has the details.
Does it? From its docs [0]:
> There are 4 ways to manage memory in V.
> The default is a minimal and a well performing tracing GC.
> The second way is autofree, it can be enabled with -autofree. It takes care of most objects (~90-100%): the compiler inserts necessary free calls automatically during compilation. Remaining small percentage of objects is freed via GC. The developer doesn't need to change anything in their code. "It just works", like in Python, Go, or Java, except there's no heavy GC tracing everything or expensive RC for each object.
> For developers willing to have more low-level control, memory can be managed manually with -gc none.
> Arena allocation is available via a -prealloc flag. Note: currently this mode is only suitable to speed up short lived, single-threaded, batch-like programs (like compilers).
So you have 1) a GC, 2) a GC with escape analysis (WIP), 3) manual memory management, or 4) ...Not sure? Wasn't able to easily find examples of how to use it. There's what appears to be its implementation [1], but since I'm not particularly familiar with V I don't feel particularly comfortable drawing conclusions from a brief glance through it.
In any case, none of those stand out as "memory safety without GC" to me.
[0]: https://docs.vlang.io/memory-management.html
[1]: https://github.com/vlang/v/blob/master/vlib/builtin/prealloc...
"none of those stand out as "memory safety without GC" to me" ... can you explain why you believe they are not memory safe without GC? I'm more interested to know the points in relation to autofree.
Regarding the details, here is a pretty informative github discussion thread on same topic: https://github.com/vlang/v/discussions/17419
It is also accompanied with a demo video (pretty convincing in case you would like to watch).
V-lang is not shiny as other languages are, but, it does have a lot to learn from.
> I'm more interested to know the points in relation to autofree.
As sibling said, autofree is still stated to use a GC, which obviously disqualifies it from "memory safety without GC".
> Regarding the details, here is a pretty informative github discussion thread on same topic: https://github.com/vlang/v/discussions/17419
I did see that! Unfortunately it doesn't really move the needle on anything I said earlier. It describes manual memory management as an alternative to the GC when using autofree (which obviously isn't conducive to reliable memory safety barring additional guardrails not described in the post) and arenas are only mentioned, not discussed in any real detail.
> It is also accompanied with a demo video (pretty convincing in case you would like to watch).
Keep in mind the context of this conversation: whether V offers memory safety without GC or manual memory management. Strictly speaking, a demonstration that autofree works in one case is not sufficient to show V is memory safe without GC/manual memory management, as said capability is a property over all programs that can be written in a language. As a result, thoroughly describing how V supposedly achieves memory safety without a GC/manual memory management would be far more convincing than showing/claiming it works in specific cases.
As an example of what I'm trying to say, consider a similar video but with a leak/crash-free editor written in C. I doubt anyone would consider that video convincing proof that C is a memory-safe language; at most, it shows that memory-safe programs can be written in C, which is a very different claim.
As seen in the quote given by GP, `autofree` partially uses a GC. And is WIP. (Although it was supposedly production-ready 5+ years ago.)
Reading "Memory safe; No garbage collector, no manual memory management" on Rue's homepage made me think of V for this very reason. Many think it is trivial to do and that Rust has been wrong for 15 years with its "overcomplicated" borrow checking. It isn't.
Oh, it's known. It just has an incredibly negative reputation on this site.
I kinda expected.. just hesitated to point it out.
I couldn't figure out the main points, besides the "between Rust & Go" slogan. I've worked both with Rust and Go, and I like Rust more, but there are several pain points:
* macro abuse. E.g. bitshift storing like in C needs a bunch of #[...] derive macros. Clap also uses them too much, because a CLI parameter is more complex than a struct field. IDK what's a sane approach to fixing this, maybe like in Jai, or Zig? No idea.
* Rust's async causes lots of pain and side effects; Golang's channels seem like a better way and don't create colored functions
* Rust lacks Python's generators, which make very elegant code (although, hard to debug). I think if it gets implemented, it will have effects like async, where you can't keep a lock over an await statement.
Zig's way is just do things in the middle and be verbose. Sadly, its ecosystem is still small.
I'd like to see something attacking these problems.
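The generator point above is easy to make concrete: what Python writes as a three-line generator is a hand-rolled state machine in today's Rust, an explicit struct plus an `Iterator` impl. A minimal example:

```rust
// A countdown iterator. In a language with generators this would be
// roughly: `yield` inside a loop. In Rust, the suspended state (`n`)
// must be stored in a struct and advanced manually in `next()`.
struct Countdown {
    n: u32,
}

impl Iterator for Countdown {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.n == 0 {
            None
        } else {
            self.n -= 1;
            Some(self.n + 1) // yield the value before the decrement
        }
    }
}

fn main() {
    let v: Vec<u32> = Countdown { n: 3 }.collect();
    assert_eq!(v, vec![3, 2, 1]);
    println!("{:?}", v);
}
```

The parallel to async that the comment draws is real: a compiler-generated generator would capture locals across `yield` points the same way async captures them across `await`, with the same restrictions (e.g. holding a lock across a suspension point).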
I've been having fun with Gleam. I'm not really sure where it falls on the spectrum though. It is garbage collected, so it's less abrasive than Rust in that sense. But it's pure functional which is maybe another kind of unfriendly.
Gleam is very cool! But yeah, higher level than I’m shooting for here.
It’s too early to have slick marketing and main points.
Noted, thanks for the comment. I share some of these opinions more than others, but it’s always good to get input.
I am surprised that a language with nothing more than a couple of promises gets so much attention. Why exactly?
I've been a member of this community for a long time.
People also like hearing about new languages.
I agree that it's not really ready for this much attention just yet, but that's the way of the world. We'll see how it goes.
I think ~everyone wants a language that's kind of like Go with a Rusty type system (and maybe syntax), so any title like this gets attention.
There's an obvious sweet spot in there.
One of which the implementation is 100% vibe coded even.
IIRC the author has a good track record with programming languages.
Probably best to link to the repo itself, this is not meant to be used yet. https://github.com/rue-language/rue
It may have been more useful to link to the blog post [0] which gives more of an introduction than the front page at this point.
[0] https://rue-lang.dev/blog/hello-world/
I posted that, and also https://steveklabnik.com/writing/thirteen-years-of-rust-and-...
Just to link them all together. This is the one that the algorithm picked up :)
I don't need lower level than Go. I just really like Rust's type system and error handling, and I want them in a compiled language.
Zero-cost abstractions and its memory model are fascinating - but aren't particularly useful for the part of the tech stack I work on.
i see a lot of go hatred on HN but coming from c i actually kind of love go when i need just enough abstracted away from me to focus on doing a thing efficiently and still end up with a well-enough performing binary. i have always been obsessed with the possibility of building something that doesn't need me to install runtimes on the target i want to run it, it's just something that makes me happy. very rarely do i need to go lower than what go provides and when i do i just.. dip into c where i earned a lot of my stripes over the years.
rust is cool. a lot of really cool software im finding these days is written in rust these days & i know im missing some kind of proverbial boat here. but rusts syntax breaks my brain and makes it eject completely. it's just enough to feel like it requires paradigm shifts for me, and while others are really good at hopping between many languages it's just a massive weakness of mine. i just cant quite figure out the ergonomics of rust so that it feels comfy, my brain seems to process everything through a c-lens and this is just a flaw of mine that makes me weak in software.
golang was started by some really notable brains who had lots of time in the game and a lot of well thought out philosophies of what could be done differently and why they should do it differently coming from c. there was almost a socio-economic reason for the creation of go - provide a lang that people could easily get going in and become marketable contributors that would help their career prospects. and i think it meets that mark, i was able to get my jr engineers having fun in golang in no time at all & that's panned out to be a huge capability we added to what our team can offer.
i like the objective of rue here. reviewing the specification it actually looks like something my brain doesn't have any qualms with. but i dont know what takes a language from a proposal by one guy and amplifies it into something thats widely used with a great ecosystem. other minds joining to contribute & flesh out standard libraries, foundations backing, lots of evangelism. lots of time. i won't write any of those possibilities off right now, hopefully if it does something right here there's a bright future for it. sometimes convincing people to try a new stack is like asking them to cede their windows operating system and try out linux or mac. we've watched a lot of languages come and go, we watch a lot of languages still try to punch thru their ceilings of general acceptance. unlike some i dont really have huge tribalistic convictions of winners in software, i like having options. i think it's pretty damn neat that folks are using their experiences with other languages to come up with strong-enough opinions of how a language should look and behave and then.. going out and building it.
All the Rue code in the manual seems to also be valid Rust code, except for the @-prefixed intrinsics
Yes, I started off with the idea that Rue's syntax would be a strict subset of Rust's.
I may eventually diverge from this, but I like Rust's syntax overall, and I don't want to bikeshed syntax right now, I want to work on semantics + compiler internals. The core syntax of Rust is good enough right now.
Out of interest, what's the motivation? What are you hoping to do with Rue that Rust doesn't currently provide?
Primary motivation is to have a fun project. If nobody ever uses this, I'll still be happy.
I'd like fast compile times, and giving up some of Rust's lowest level and highest performance goals in exchange for it. As well as maybe ease of use.
Nice, seems like a super cool project.
I've thought a Rust like language but at Go's performance level would be interesting. Garbage collected, but compiled to a binary (no VM), but with Rust's mix of procedural and functional programming. Maybe some more capable type inference.
If you don't mind me asking, how did you get started with programming language design? I've been reading Crafting Interpreters, but there is clearly a lot of theory that is being left out there.
Thanks :)
Crafting interpreters is fantastic!
Mostly just… using a lot of them. Trying as many as I could. Learning what perspectives they bring. Learning the names for their features, and how they fit together or come into tension.
The theory is great too, but starting off with just getting a wide overview of the practice is a great way to get situated and decide which rabbit holes you want to go down first.
> Mostly just… using a lot of them
Well I got that part covered at least. Seems like I'm constantly getting bored and playing around with a different language, probably more than I should lol
How is it a subset then if it has the @-prefix? Wait, does Rust's grammar still have the @ and ~ sigils from the pre 1.0 times for pointers?
It started off that way, but didn't (and won't) remain that way.
I'm using @ for intrinsics because that's how Zig does it and I like it for similar reasons to how Rust uses ! for macros.
Okay, right now it's basically Pascal as it was described in the Revised Report, only even more restricted. Which is... fine, I guess; you can still write a whole OS with something like that (without using pointers/addresses), as Per Brinch Hansen demonstrated, but it's... an acquired taste.
Are the actual references/pointers coming in the future?
Maybe Pascal without the syntax, sure. It’s still very early on.
I hope to not introduce references, because I’m going to give mutable value semantics a go. We’ll see though!
What was the rationale to not use cargo? By the way, I really enjoy when you are a guest on the fallthrough podcast.
Thanks!
So, one reason is "I just want to learn more about buck2."
But, for the first iteration of Rue, I maintained both. However, for a language project, there's one reason Cargo isn't sufficient now, and one reason why it may not later: the first one is https://github.com/rue-language/rue/blob/trunk/crates/rue-co... : I need to make sure that, no matter what configuration I build the compiler in, I build a staticlib for the runtime. With Cargo, I couldn't figure out how to do this. In test mode, it would still try to build it as a dylib.
Later, well, the reason that rustc has to layer a build system on top of Cargo: bootstrapping. I'm not sure if Rue will ever be bootstrapped, but rustc uses x.py for this. Buck does it a lot nicer, IMHO https://github.com/dtolnay/buck2-rustc-bootstrap
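For readers unfamiliar with the Cargo side of this: the knob in question is the `crate-type` field on the library target. A sketch of the relevant fragment (hypothetical, not Rue's actual config):

```toml
# Cargo.toml fragment: ask Cargo to always emit a static library
# for this crate. The pain point described above is that dependent
# builds (e.g. test builds) can still end up requesting a different
# linkage for the same crate.
[lib]
crate-type = ["staticlib"]
```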
Just pointing out here that "rue" is used to express "to regret", emphatically. Perhaps it is not the best name for a programming language.
That’s part of the reason for the name! “Rust” also has negative interpretations as well. A “rue” is also a kind of flower, and a “rust” is a kind of fungus.
Fair enough! I do like how others are framing this is as "write less code" -- if Rue makes one think more and more about the code that finally makes it to the production, that can be a real win.
Sounds fitting to me. Every line of code I wrote that ultimately didn't need code to begin with, is basically codified regrets checked into git.
The best code is the code not written, so perhaps it is the best name for a programming language?
The positioning is interesting - claiming Rust's performance with Go's simplicity is basically every new systems language's promise since 2015. The key differentiator seems to be "zero-cost exceptions" which I assume means compile-time Result types without runtime unwinding overhead? That's compelling if true, since Rust's Result ergonomics can get verbose in deeply nested error chains.
But the real test is compile times and cognitive overhead. Rust's borrow checker is theoretically elegant but practically brutal when you're learning or debugging. If Rue can achieve memory safety without lifetime annotations everywhere, that's genuinely valuable. However, I'm skeptical - you can't eliminate tradeoffs, only move them around. If there's no borrow checker, what prevents use-after-free? If there's garbage collection, why claim "lower level than Go"?
The other critical factor is ecosystem maturity. Rust's pain is partially justified by its incredible crate ecosystem - tokio, serde, axum, etc. A new language needs either (1) seamless C FFI to bootstrap libraries, (2) a killer feature so valuable that people rewrite everything, or (3) 5+ years for the ecosystem to develop. Which path is Rue taking?
I'd love to see real-world benchmarks on: compile time for a 50k line project, memory usage of a long-running web server compared to Rust/Go, and cold start latency for CLI tools. Those metrics matter more than theoretical performance claims. The "fun to write" claim is subjective but important - if it's genuinely more ergonomic than Rust without sacrificing performance, that could attract the "Python developers wanting systems programming" demographic.
I’m explicitly not claiming Rust’s performance. Rust will always be ahead here. I’m giving up some of that performance for other things.
I do agree that those benchmarks are important. Once I have enough language features to make such a thing meaningful, I’ll be tracking them.
Where did I write that it’s fun to write?
Your style of commenting is pretty full of LLM tells fyi. Normally don’t comment on it but this is the second such comment of yours I have read in a few minutes.
e: I would be curious of the thoughts of those downvoting as personally I don’t think mostly LLM written comments are a direction we want to move towards on HN.
Rather than downvoting you, I will speak up to say I don't see what you're seeing. Spaces around hyphens, yeah, sure, but LLMs prefer em dashes, and even that is unreliable, because it's borrowed from habits that real humans have had for many years.
For me, the more important indicator is the content. I see reports of personal experience, and thoughts that are not completely explained (because the reader is expected to draw the rest of the owl). I don't see smugly over-the-top piles of adjectives filling in for an inability to make critiques of any substance. I don't see wacky asides amounting to argumentum ad lapidem, accomplishing nothing beyond insulting readers who disagree with a baseless assertion.
I think it's likely you have drawn a false positive.
It saddens me a bit that this can't be distinguished by people on here. I encourage you to take a look at their profile and see if you are still as skeptical. Noticing em-dashes is facile and as you mention, common among human written text - but there are more subtle stylistic cues (although now that you mention it, this writer likely went out of their way to replace emdashes with hyphens).
I was raised in a family of professional writer-editors (but now am the tech-y black sheep) which might make the cues a bit more obvious to me. The degree to which this style of writing was common prior to 2022 is vastly overstated, the tells were actually not really that common.
A) you cannot tell B) you have said nothing productive toward discussion, you’ve just accused someone of using a tool (that you don’t know if they used)
I’d prefer actual criticism of the content. (I cannot downvote and would not if I could)
I am certain that they used a tool. As I said, I normally do not complain and typically engage on the merits -- but these have been among the top comments on every front page article I've read today and it gets tiresome! To me, if you cannot invest enough effort to remove the pretty obvious cues, why am I investing the effort in reading the comment?
After seeing your reply, I looked at their comment history which makes it even more obvious imo.
that is fair -- you're claiming this person has a pattern of lazy, low-effort comments. I didn't check, and if you're right, I appreciate you calling it out
just as you’re annoyed by low-effort LLM posts/comments, I’m annoyed by low-effort “this sounds like it was written by ChatGPT” comments (hence my response and at least a possible explanation of downvotes)
edit: I also scrolled through, you’re absolutely right! it does look like a low-effort bot
I have mostly been writing Rust in the last 10 years, but recently (1 year) I have been writing Go as well as Rust.
The typical Go story is to use a bunch of auto generation, so a small change quickly blows up as all of the auto-generated code is checked into git. Like easily a 20x blowup.
Rust on the other hand probably does much more such code generation (build.rs for stuff like bindgen, macros for stuff like serde, and monomorphized generics for basically everything). But all of this code is never checked into git (with the exception of some build.rs tools which can be configured to run as commands as well), or at least 99% of the time it's not.
This difference has an impact on the developer story. In Go land, you need to manually invoke the auto generator, and it's easy to forget until CI reminds you. The auto generator is usually quite slow, and probably has much less caching smartness than the Rust people have figured out.
In Rust land, the auto generation can, worst case, run at every build; best case, the many cache systems take care of it (cargo level, rustc level). But still, everyone who does a git pull has to re-run this, while with checked-in auto generation, theoretically only the folks who actually changed the generated code need to run the generator; everyone else gets the result via git pull.
So in Go, your IDE is ready to go immediately after git pull and doesn't have to compile a tree of hundreds of dependencies. Go IDEs and compilers are so fast, it's almost like cheating from Rust POV. Rust IDEs are not as fast at all even if everything is cached, and in the worst case you have to wait a long long time.
On the other hand, these auto generation tools in Go are only somewhat standardized, you don't have a central tool that takes care of things (or at least I'm not aware of it). In Rust land, cargo creates some level of standardization.
You can always look at the auto generated Go code and understand it, while Rust's auto generated code usually is not IDE inspectable and needs special tools for access (except for the build.rs generated stuff which is usually put inside the target directory).
I wonder how a language that is designed from scratch would approach auto generation.
> On the other hand, these auto generation tools in Go are only somewhat standardized, you don't have a central tool that takes care of things (or at least I'm not aware of it).
https://pkg.go.dev/cmd/go#hdr-Generate_Go_files_by_processin...
FYI rust-analyzer can show expanded macros. It's not perfect because you only get syntax highlighting, but it works.
Yeah, this is a hard problem, and you're right that both have upsides and downsides. Metaprogramming isn't easy!
I know I don't want to have macros if I can avoid them, but I also don't foresee making code generation a-la-Go a first class thing. I'll figure it out.
> The typical Go story is to use a bunch of auto generation, so a small change quickly blows up as all of the auto-generated code is checked into git. Like easily a 20x blowup.
Why do you think the typical Go story is to use a bunch of auto generation? This does not match my experience with the language at all. Most Go projects I've worked on, or looked at, have used little or no code generation.
I'm sure there are projects out there with a "bunch" of it, but I don't think they are "typical".
Same here. I've worked on one project that used code generation to implement a DSL, but that would have been the same in any implementation language; it was basically transpiling. And protobufs, of course, but again, that's true in all languages.
The only thing I can think of that Go uses a lot of generation for that other languages have other solutions for is mocks. But in many languages the solution is "write the mocks by hand", so that's hardly fair.
Me neither. My Go code doesn't have any auto-generation. IMO it should be used sparingly, in cases where you need a practically different language for expressivity and correctness, such as a parser generator.
Anything and everything related to Kubernetes in Go uses code generation. It is overwhelmingly "typical" to the point of extreme eye-rolling when you need to issue "make generate" three dozen times a day for any medium sized PR that deals with k8s types.
Auto generation? If you need to use that a lot, then the programming language is defective, I would say.
When Go was launched, it was said it was built specifically for building network services. More often than not that means using protobuf, and as such protobuf generated code ends up being a significant part of your application. You'd have that problem in any language, theoretically, due to the design of protobuf's ecosystem.
Difference is that other languages are built for things other than network services, so protobuf is much less likely to be a necessary dependency for their codebases.
What I've found over the years is that protobuf is actually not that widespread. And given that, if you ignore the gogoprotobuf package, it generates terrible (for Go's GC) structs with pointers for every field, so it hasn't been terribly popular in the Go community either, despite both originating at Google.
I'd say auto generation is just another instance of Greenspun's tenth rule.
The "just generate go code automatically then check it in" is a massive miswart from the language, and makes perfect sense because that pathological pattern is central to how google3 works.
A ton of google3 is generated, like output from javascript compilers, protobuf serialization/deserialization code, python/C++ wrappers, etc.
So its an established Google standard, which has tons of help from their CI/CD systems.
For everyone else, keeping checked-in auto-generated code is a continuous toil and maintenance burden. The Google go developers don't see it that way of course, because they are biased due to their google3 experience. Ditto monorepos. Ditto centralized package authorities for even private modules (my least fave feature of Go).
> For everyone else, keeping checked-in auto-generated code is a continuous toil and maintenance burden. The Google go developers don't see it that way of course, because they are biased due to their google3 experience.
The golang/go repo itself has various checked-in generated files
If this language is supposed to be used for systems programming, doing a factorial isn't really a selling example of why Rue.
For sure. It's just such early days I don't have a lot of stuff that's useful yet. I'll get there.
Looks nice, but -> syntax always feels extremely off-putting. What does it get me?
I’m just copying Rust here because I care more about semantics at the moment. I may get rid of it, see some previous musings around this here https://steveklabnik.com/writing/too-many-words-about-rusts-...
I write a lot of go. I tried to write a lot of rust but fell into lifetime traps. I really want to leave C++ but I just can’t without something that’s also object oriented.
Not a dig at functional, it’s just my big codebases are logically defined as objects and systems that don’t lend itself to just being a struct or an interface.
Inheritance is why I’m stuck in C++ land.
I would love to have something like rust but that supports classes, virtual methods, etc. but I guess I’ll keep waiting.
In Rust you can have structs with any number of methods defined on them, which is functionally not that different from a class. You get interface-like behavior with traits, and you get encapsulation with private/public data and methods.
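A minimal sketch of that correspondence (all names here are invented for illustration):

```rust
// A struct with methods plays the role of a class; the private field
// gives encapsulation, and a trait plays the role of an interface.
trait Greeter {
    fn greet(&self) -> String;
}

struct Person {
    name: String, // private outside this module
}

impl Person {
    fn new(name: &str) -> Self {
        Person { name: name.to_string() }
    }
}

impl Greeter for Person {
    fn greet(&self) -> String {
        format!("Hello, {}!", self.name)
    }
}

fn main() {
    let p = Person::new("Ada");
    assert_eq!(p.greet(), "Hello, Ada!");
}
```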
Does inheritance really matter that much?
Yes it does. Unless I can attach a trait to a struct without having to define all the methods of that trait for that struct. This is my issue with interfaces and go. I can totally separate out objects as interfaces but then I have to implement each implementation’s interface methods and it’s a serious chore when they’re always the same.
For example: Playable could be a trait that plays a sound when you interact with it. I would need to implement func interact for each object. Piano, jukebox, doorbell, etc. With inheritance, I write it once, add it to my class, and now all instances of that object have interact. Can I add instance variables to a trait?
This saves me time and keeps Claude out of my code. Otherwise I ask Claude to implement them all, modify them all, to try to keep them all logically the same.
I also don’t want to go type soup in order to abstract this into something workable.
You can provide default method implementations for traits. Any type with that trait gets the default behavior, unless you override it.
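Using the Playable example from upthread, a sketch of what that looks like (names invented for illustration):

```rust
// The default interact() is written once on the trait; each type
// opts in with an impl block and gets it for free.
trait Playable {
    fn sound(&self) -> &'static str {
        "generic noise"
    }
    fn interact(&self) -> String {
        format!("*plays {}*", self.sound())
    }
}

struct Piano;
struct Doorbell;

impl Playable for Piano {} // takes every default as-is

impl Playable for Doorbell {
    fn sound(&self) -> &'static str {
        "ding-dong" // overrides just this one method
    }
}

fn main() {
    assert_eq!(Piano.interact(), "*plays generic noise*");
    assert_eq!(Doorbell.interact(), "*plays ding-dong*");
}
```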
But that trait can’t have fields
You can use a struct that the other structs have as a field. The trait can then operate on that struct.
I'm not trying to convince you to use Rust. If you prefer C++ have at it. I was just trying to point out that most patterns in C++ have a fairly close analogy in Rust, just with different tradeoffs.
Yeah go has embedded structs. It’s ugly and allows one to address the fields on the parent and it exposes the struct (with the same fields) so it’s kind of a head scratcher.
To be honest, it’s been 3 years since I looked at rust and I might try again. I still prefer inheritance because some things just are-a thing. I also love ECS and components and see traits as that. I just wish I could store local state in those.
You can store state in the struct and then define a method in the trait to return the struct. Then all your default methods can use the "getter" to access the struct and its state. The only thing you have to do is embed the struct and implement that one "getter" method to return it. I don't think it's much more boilerplate than utilizing inheritance.
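A sketch of that pattern (all names invented): the trait requires a single accessor, and every default method works through it.

```rust
// Shared state lives in a struct that each type embeds.
struct PlayState {
    volume: u32,
}

trait Playable {
    // The one method each implementing type must write.
    fn state(&self) -> &PlayState;

    // Default methods reach the shared state via the getter.
    fn volume(&self) -> u32 {
        self.state().volume
    }
}

struct Jukebox {
    play: PlayState,
}

impl Playable for Jukebox {
    fn state(&self) -> &PlayState {
        &self.play
    }
}

fn main() {
    let j = Jukebox { play: PlayState { volume: 11 } };
    assert_eq!(j.volume(), 11);
}
```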
Fyrox, a game engine written in Rust, uses an ECS and several object oriented patterns in its design. Might be a good reference if you're interested. The Rust book also has a section on OOP patterns in Rust.
I think it's Fyrox anyway. I remember the creator of a Rust game engine talking about it in an interview on Developer Voices. It could have been Bevy I guess, but I don't think so.
Yeah that’s what I did and it’s ugly. It works, and allows me to attach multiple behaviors but I would have to initialize them and write that boilerplate code to return them.
I think I might be able to do it with a macro but I’m not a rust guy so I’m limited by my knowledge.
I respect your preferences, but I am unlikely to add this sort of OOP. Ideally there'll be no subtyping at all in Rue. So you'll have to keep waiting, I'm afraid. Thanks for checking it out regardless!
As a long time C++ user, I’m curious why you like inheritance and virtual methods so much.
I maintain a medium sized, old-ish C++ code base. It uses classes and inheritance and virtual methods and even some multiple inheritance. I despise this stuff. Single Inheritance is great until you discover that you have a thing that doesn’t slot nicely into the hierarchy or when you realize that you want to decompose an interface (cough, base class) into a couple of non-hierarchically related things. Multiple inheritance is an absolute mess unless you strictly use base classes with pure virtual methods and no member variables. And forcing everything into an “is a” relationship instead of a “has a” relationship can be messy sometimes.
I often wish C++ had traits / or Haskell style type classes.
Protected and private inheritance are C++'s equivalent to traits, and they don't suffer from the usual issues of multiple inheritance. As for type classes, check out concepts. By no means am I trying to sell C++, I don't touch it myself, but it doesn't leave you completely adrift in those seas.
> Protected and private inheritance are C++'s equivalent to traits
How so? Maybe in a COM-like world where the user of an object needs to call a method to get an interface pointer.
I’ll grant that concepts are a massive improvement.
Ah, yes, multiple inheritance in C++: where order matters but sanity does not
Usually it takes some time to get used to borrow checker and lifetimes. After that, you stop noticing them.
What the world needs is a more expressive language than Go, that interops with Go's compilation model and libraries.
Something like Borgo https://borgo-lang.github.io/
Sadly, seems to be abandoned. Last commit a year ago.
Nah, we already have that in D, C#, and who knows maybe one day Java finally gets Valhala, or one can use Kotlin or Scala in the meantime.
It's amazing how often C# (or more broadly CLR/JVM) is the pragmatic answer, even when you feel uncool using it.
Indeed. :)
Any tentative ideas yet as to how you will manage the memory management? Sounds like a sort of magic 3rd way might be in the making/baking!
Something in the area of linear types and mutable value semantics.
Anything out there for reference or would you be implementing from theory/ideas here? God speed to you in terms of the project overall, it's exciting to see the beginnings of a rust-like-lang without the headaches!
Not implemented yet, I’m reading papers :)
Thanks!
How does this differ from Hylo [0]?
[0] https://hylo-lang.org
I am very interested in Hylo! I think they're playing in similar spaces. I'd like to explore mutable value semantics for Rue.
One huge difference is that Hylo is using LLVM, whereas I'm implementing my own backends. Another is that Hylo seems to know what they want to do with concurrency, whereas I really do not at all right now.
I think Hylo takes a lot of inspiration from Swift, whereas I take more inspiration from Rust. Swift and Rust are already very similar. So maybe Hylo and Rue will end up like this: sister languages. Or maybe they'll end up differently. I'm not sure! I'm just playing around right now.
How are Swift and Rust very similar? I can search, but want to hear your opinion.
And congrats on starting a language project, even if Just for Fun (Linux). ;)
https://frappe.io/blog/book-reviews/just-for-fun-a-book-on-l...
Thanks :)
Both are playing around in similar spaces, both shared team members for a while, both have a take on automatic memory management that isn’t a garbage collector, both were sponsored by a primary company for a while (Swift still is, I think). There’s a lot of differences too.
Any plans for adding algebraic data types (aka rust enums)?
I landed non-generic enums this evening. I'm not 100% sure what abstraction paths I want to go down. But I see sum types as just as important as product types, for sure.
In the intro text, who is Ramsus? A typo for PHP's creator, or a more obscure language creator?
It's a typo, I'm referring to https://en.wikipedia.org/wiki/Rasmus_Lerdorf. I'll fix it, thank you :)
How does it achieve memory safety?
Right now? By not even having heap allocation (though I'll be sending in the first PR for that soon.)
Eventually: through not having references, thanks to mutable value semantics. Also linear types.
But that's just ideas right now. It'll get there.
Interesting, for me the "between Rust and Go" would be a nice fit for Swift or Zig. I've always quite liked the language design of Swift, it's bad that it didn't really take off that much
One thing working on this project has already done is give me more appreciation for a lot of Zig's design.
Zig really aims to be great at things I don't imagine Rue being useful for, though. But there's lots of good stuff there.
And lots of respect to Swift as well, it and Hylo are also major inspiration for me here.
I think with Swift 6 Apple really took it in a wrong direction. Even coding agents can’t wrap their mind around some of the “safety” features (not to mention the now bloated syntax). If anything, Swift will go down as a “good example of why language design shouldn’t happen by committee in yearly iterations”.
Checkout Borgo: https://github.com/borgo-lang/borgo
I also find that D is a good in-between language. You can do high level or low level whenever you need it.
You can also do some in-between systems programming in C# if you don’t care about a VM or msft.
> You can also do some in-between systems programming in C# if you don’t care about a VM or msft.
C# Native AOT gets rid of the JIT and gives you a pretty good perf+memory profile compared to the past.
It's mostly the stigma of .NET Framework legacy systems that put people off, but modern C# projects are a breeze.
AFAIK there’s still holes like reflection and you have some work, but if that’s changed that’s really good. I suspect it’ll be hard for C# to escape the stench of “enterprise” though.
I’m looking forward to seeing how it shakes out over the next few years. Especially once they release union types.
FWIW JIT is rarely an issue, and enables strong optimizations not available in AOT (it has its own, but JIT is overall much better for throughput). RyuJIT can do the same speculative optimizations OpenJDK Hotspot does except the language has fewer abstractions which are cheaper and access to low-level programming which allows it to have much different performance profile.
NativeAOT's primary goal is reducing memory footprint, binary size, making "run many methods once or rarely" much faster (CLI and GUI applications, serverless functions) and also shipping to targets where JIT is not allowed or undesirable. It can also be used to ship native dynamically or statically (the latter is tricky) linked libraries.
I wince every time I see naive recursive fibonacci as a code example. It is a major turnoff because it hints at a lack of experience with tail call optimization, which I consider a must have for a serious language.
Would someone please explain to me why TCO—seemingly alone amongst the gajillions of optimization passes performed by modern compilers—is so singularly important to some people?
For people that like functional style and using recursion for everything, TCO is a must. Otherwise there’s no way around imperative loops if you want decent performance and not having to worry about the stack limit.
Perhaps calling it an “optimization” is misleading. Certainly it makes code faster, but more importantly it’s syntax sugar to translate recursion into loops.
You don't need full fledged TCO for that; see Clojure's recur for an example. Zig recently added something similar but strongly typed with match/continue. These all map exactly to a closed set of mutually recursive functions with a single entry point, which is quite sufficient (and then some) to fully replace iterative loops while still desugaring to the same exact code.
Indeed there are more explicit versions of such mechanisms, which I prefer, otherwise there’s always a bit of paranoia about recursion without assurance that the compiler will handle it properly.
TCO is less of an optimization (which are typically best-effort on the part of the compiler) and more of an actual semantic change that expands the set of valid programs. It's like a new control flow construct that lives alongside `while` loops.
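A sketch of that equivalence in Rust terms (which does not guarantee TCO today): the tail-recursive form on top is exactly the loop below it, if the compiler is allowed to reuse the stack frame.

```rust
// Tail-recursive sum: the recursive call is the very last thing that
// happens, so with guaranteed TCO it would need only one stack frame...
fn sum_rec(n: u64, acc: u64) -> u64 {
    if n == 0 { acc } else { sum_rec(n - 1, acc + n) }
}

// ...which is exactly this loop. Without the guarantee, the version
// above can overflow the stack for large n; this one cannot.
fn sum_loop(mut n: u64, mut acc: u64) -> u64 {
    while n != 0 {
        acc += n;
        n -= 1;
    }
    acc
}

fn main() {
    assert_eq!(sum_rec(1000, 0), 500_500);
    assert_eq!(sum_loop(1000, 0), 500_500);
}
```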
When you have recursive data structures, it's nice when the algorithms have the same shape. TCO is also handy when you're writing fancy control flow operations and implement them with continuation-passing style.
functional programming background / SICP ?
It virtue-signals that they're part of the hip functional crowd.
(To be fair, if you are programming functionally, it is essential. But to flat-out state that a language that doesn't support it isn't "serious" is a bit rude, at best.)
Supporting recursion only to a depth of 1000 (or whatever) is equivalent to supporting loops of up to 1000 iterations.
If I put out a language that crashed after 1000 iterations of a loop, I'd welcome the rudeness.
Plenty of languages, including very serious ones like C and Rust, have bounded recursion depth.
Then let me rephrase:
If every iteration of a while-loop cost you a whole stack frame, then I'd be very rude about that language.
This works, btw:
> If every iteration of a while-loop cost you a whole stack frame, then I'd be very rude about that language.
Well, sure, but real programmers know how to do while loops without invoking a function call.
I only have basic constant folding so far in terms of optimizations, but I'm very aware of TCO. I haven't decided if I want to require an annotation to guarantee it, like Rust is going to.
Please require some form of annotation like an explicit `tailcall` operator or something similar. TCO wreaks havoc on actionable backtraces, so it should be opt-in rather than opt-out.
I am very sympathetic to this, for sure.
Plus we all know that fibs = 1 : 1 : zipWith (+) fibs (tail fibs) is the only serious Fibonacci implementation.
Who thunk of that one?
"Well you can judge the whole world on the sparkle that you think it lacks.
Yes, you can stare into the abyss, but it's staring right back"
Please, it supports a hole at best. Maybe a pit. No way will this let you construct an abyss.
How does this compare to Swift?
I don't plan on implementing ARC, I don't think. I do think Swift/Hylo mutable value semantics is a neat idea that I do want to play around with.
This is a bit silly, but when I look at new languages coming up I always look at the syntax, which is usually horrible (Zig and Rust are good examples), and how much garbage there is. As someone who writes Go, I can't stand semicolons and other crap that just pollutes the code and wastes time and space for absolutely no good reason whatsoever. And as this compares itself with Go, I just cannot help but laugh when I see ";", "->" or ":" in the example. At least the semicolon seems optional. But still, it's an instant nope for me.
Weird, that's exactly how I feel reading Go:
And this one doesn't even have the infamous error-checking.
You cherry picked a contrived example but that's one of the cleanest generics implementations.
Now imagine if it had semicolons, ->, ! and '.
I respect your preferences! I like punctuation.
Even though I have a Perl tattoo, it'll never get like that, though.
(Semicolon rules, for now at least, will be the same as Rust)
What would be better?
Remove all of that noise.
Take this:
The (n: i32) can be just (n i32), because there is no benefit to adding the colon there. The -> i32 can also be just i32 because, again, the -> serves no purpose in function/method definition syntax.
So you end up with simple and clean fn fib(n i32) i32 {}
And semicolons are an ancient relic that has been passed on to new languages for 80 fucking years without any good reason. We have modern lexers/tokenizers and compilers that can handle it if you don't put a stupid ; at the end of every single effing line.
Just go and count how many of these useless characters are in your codebase and imagine how many keystrokes, compilation errors and wasted time it cost you, whilst providing zero value in return.
In a Hindley-Milner (or derivative) type system, types don't have to be explicit, making the number of arguments ambiguous here:
But even if they need to be written explicitly, type applications like `List a` would require syntax to disambiguate them. Personally, I would like a language that pushes the programmer to write the types as part of a doc comment.
Also think about returning lambdas. Should it look like this? fn foo(n i32) (i32 i32) {}
Of course the IDE could help by showing the typographic arrows and other delineations, but as plain text this is completely unreadable. You still have to think about stuff like currying. You either delimit the line, or you use significant whitespace.

> Also think about returning lambdas. Should it look like this? fn foo(n i32) (i32 i32) {}
It should be
It will also allow a future implementation of named returns, like in Go. As for the semicolon, that is needed only if you have an inline expression or an inline block, like in Go. Even if your context-dependent parser can recognize it, does the user? I agree that a language designer should minimize the amount of muscle damage, but they shouldn't forget that readability is perhaps even more critical.
____
1. Note, even if the parser can recognize this, for humans the '>' is confusing unless syntax highlighting takes care of it. One time it delimits a generic type argument, the other time it is part of '->'. This is also an argument for rendering these things as ligatures.
This is pointless discussion as Go has all of these things already implemented, so there is no point in going backwards.
The point is that a (new) syntax for any language needs to support any such implementation. The language implementation itself is not the point.
Yes, and I am saying Go has already solved all of this and it makes little sense to deviate too much from its syntax.
> The (n: i32) can be just (n i32), because there is no benefit to adding the colon there.
> The -> i32 can also be just i32 because, again, the -> serves no purpose in function/method definition syntax.
Well, there is, but it's more of a personal trait than a universal truth. Some human programmers (e.g. me) tend to read and parse (and even write, to some extent) source code more accurately when there is a sprinkle of punctuation thrown in into a long chain of nothing but identifiers and subtly nested parentheses. Some, e.g. you, don't need such assistance and find it annoying and frivolous.
Unfortunately, since we don't store the source code of our programs as binary AST blobs that could be rendered in a personalized manner, but as plain text instead, we have to accept the language designer's choices. Perhaps it actually has better consequences than the alternative; perhaps not.
the only reason why one might hold such an opinion is the lack of syntax highlighting.
What’s with all the periods, one at the end of each paragraph? Fully wasted.
I wish more languages would adopt Clojure’s approach to optional delimiters in collections.
[2 45 78]
It’s just a nicer thing to view and type in my experience.
Regarding syntax soup, I think Odin is probably syntactically the cleanest of the lower level languages I’ve played with.
Oh, yeah, that looks good. I always hated using the ", " delimiter for lists and the amount of typos it always takes to keep them clean (well, not with Go fmt).
Odin seems interesting but for me it has two deal-breakers: the first one is the use of ^ for pointer de/reference. Not that it does not make sense, it's just that it is not an easy key to get to on my keyboard layout, and I will not be changing that. The & and * are well known characters for this purpose and, at least for me, easily accessible on the keyboard. The second issue is the need to download many gigabytes of Visual Studio nonsense just so I am able to compile a program. Coming from Go, this is just a non-starter. Thirdly, and this is more about the type of work I do than the language, there are/were no db drivers, no http/s stack, and other things I would need for my daily work. Other than that, Odin is interesting. Though I am not sure how I would fare without OOP after so many years with inheritance OOP and encapsulated OOP.
It's a Lisp thing, obviously, but also there's a benefit to explicit delimiters - it makes it possible to have an expression as an element without wrapping that in its own brackets, as S-exprs require.
when are we getting a language that looks like Python and runs 50 times faster than C++ /s