I really wish the community had coalesced around gevent.
- no async/await, instead every possible thing that could block in the standard library is monkey-patched to yield to an event loop
- this means that you can write the same exact code for synchronous and concurrent workflows, and immediately get levels of concurrency only bounded by memory limits
- you'll never accidentally use a synchronous API in an async context and block your entire event loop (well, you can, if you spin on a tight CPU-bound loop, but that's a problem in asyncio too)
- the ecosystem doesn't need to implement libraries for asyncio and blocking code, everything just works
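To make that last point concrete, here is the duplication gevent's model avoids. This is a toy sketch (the `fetch_sync`/`fetch_async` names are made up for illustration): under asyncio, a library author ends up maintaining the same logic in both "colors", while under gevent the sync version alone would serve both callers because `time.sleep` itself is patched to yield.

```python
import asyncio
import time

# Without gevent, a library ships the same logic in both "colors".
def fetch_sync(delay):
    time.sleep(delay)           # blocks the whole OS thread
    return "data"

async def fetch_async(delay):
    await asyncio.sleep(delay)  # same logic, rewritten to yield to the event loop
    return "data"

assert fetch_sync(0.01) == "data"
assert asyncio.run(fetch_async(0.01)) == "data"
```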
There's a universe where the gevent patches get accepted into Python as something like a Python 4, breaking some niche compatibility but creating the first ever first-class language with green-threading at its core.
But in this universe, we're left with silly things like "every part of the Django ecosystem dealing with models must be rewritten with 'a' prefixes or be abandoned" and it's a sad place indeed.
While having the same code for sync and async sounds nice, monkey-patching code at runtime seems hacky. Any library that wants to use a lower-level implementation of network calls would need to handle the monkey patching themselves, I assume.
The idea would be that if this was accepted into the core as the way to go forward, it wouldn't be "monkeypatched" anymore, it would just be in core, officially supported.
Java 1.1 had green threads, and so does Java 21+ (though Java 21's green threads are actually an M:N threading model, not the M:1 threading model of Java 1.1).
Agreed. I chose gevent over asyncio for a backend in 2019, and we're still using it. Works pretty well. No plans to phase out gevent just yet.
Though the community has clearly centered on asyncio by now. So if I were to start a new backend today, it would reluctantly be asyncio. Unfortunately...
I worked quite a bit with gevent'd code about 10+ years ago and also agree. Dealing with function coloring is incredibly unproductive. This is one of the things Go got right.
the sans-IO approach seems roughly correct
libraries provide pure processing machines that pull data in through well-defined input functions (and output through return), and then it's up to the callsite (or host code, or however we want to call our code) to decide what kind of color to paint the whole thing eventually.
I like `gevent` but I think it may have been too hacky of a solution to be incorporated to the main runtime.
"creating the first ever first-class language with green-threading at its core."
... isn't that what Go is? I think out of all languages I use extensively, Go is the only one that doesn't suffer from the sync/async function coloring nightmare.
I'm with you that function "coloring" (monads in the type system) can be unergonomic and painful.
> ... isn't that what Go is? I think out of all languages I use extensively, Go is the only one that doesn't suffer from the […] coloring nightmare.
Because it doesn't have Future/Promise/async as a built-in abstraction?
If my function returns data via a channel, that's still incompatible with an alternate version of the function that returns data normally. The channel version doesn't block the caller, but the caller has to wait for results explicitly; meanwhile, the regular version would block the caller, but once it's done, consuming the result is trivial.
Much of the simplicity of Go comes at the expense of doing everything (awaiting results, handling errors, …) manually, every damn time, because there's no language facility for it and the type system isn't powerful enough to make your own monadic abstractions. I know proponents of Go tend to argue this is a good thing, and it has merits. But making colorful functions wear a black-and-white trenchcoat in public doesn't solve the underlying problem.
One of the largest problems identified in the original "what color is your function" article ( https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... ) is that, if you make a function async, it becomes impossible to use in non-async code. Well, maybe you can call "then" or whatever, but there's no way to take an async function and turn it into something that synchronously returns its value.
But in Go, it's very easy to do this; you can just do "result := <- ch" to obtain the value from a channel in synchronous code. (This blocks the thread, but in Go's concurrency model this isn't a problem, unlike in JavaScript.) Similarly it's very easy to take a synchronous function and do "go func() { ch <- myFunction() }()" to make it return its result in a channel.
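For comparison, both of those Go bridges can be mirrored in Python with a thread and a `queue.Queue` standing in for the goroutine and channel. This is a sketch of the pattern, not a performance claim (in CPython the blocked consumer is a real OS thread, unlike a goroutine), and `my_function` is a made-up placeholder:

```python
import queue
import threading

def my_function():
    return 42

# Go's `go func() { ch <- myFunction() }()`: run it concurrently, send the result
ch = queue.Queue(maxsize=1)
threading.Thread(target=lambda: ch.put(my_function())).start()

# Go's `result := <-ch`: block synchronously until the value arrives
result = ch.get()
assert result == 42
```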
> But in Go, it's very easy to do this; you can just do "result := <- ch" to obtain the value from a channel in synchronous code.
What you call "synchronous code" is really asynchronous. To actually have something that resembles synchronous code in Go you have to use LockOSThread, but this has the same downsides as the usual escape hatches in other languages. This is also one of the reasons cgo has such a high overhead.
Hm. You and parent comment have made me realize something: as much as I dislike how many useful abstractions are missing from Go, async for blocking syscalls is not one of them, since the "green thread" model effectively makes all functions async for the purposes of blocking syscalls. So I retract my "you have to do it manually" comment in this case. I guess that's part of why people love Go's concurrency.
Of course, as you said, stackful coroutines come with runtime overhead. But that's the tradeoff, and I'm sure they are substantially more efficient (modulo FFI calls) than the equivalent async-everywhere code would be in typical JS or Python runtimes.
My "you have to do it manually" comment comes from some other peeves I have with Go. I guess the language designers were just hyper-focused on syscall concurrency and memory management (traditionally hard problems in server code), because Go does fare well on those specific fronts.
I remember this article in 2015 being revelatory. But it turned out that what we thought was an insurmountable amount of JS code written with callbacks in 2015 would end up getting dwarfed by promise-based code in the years to come. The “red functions” took over the ecosystem!
With Python, I’m sure some people expect the same thing to happen. I think Python is far more persistent, though. So much unmaintained code in the ecosystem that will never be updated to asyncio. We’ll see, I suppose, but it will be a painful transition.
All goroutines are async in some sense, so generally you don't need to return a channel. You can write the function as if it's synchronous, then the caller can call it in a goroutine and send to a channel if they want. This does force the caller to write some code, but the key is that you usually don't need to do this unless you're awaiting multiple results. If you're just awaiting a single result, you don't need anything explicit, and blocking on mutexes and IO will not block the OS thread running the goroutine. If you're awaiting multiple things, it's nice for the caller to handle it so they can use a single channel and an errgroup. This is different from many async runtimes because of automatic cooperative yielding in goroutines. In many async runtimes, if you try to do this, a function that wasn't explicitly designed as async will block the executor and lead to issues, but in Go you can almost always just turn a "sync" function into an explicitly async one.
> I like `gevent` but I think it may have been too hacky of a solution to be incorporated to the main runtime
gevent's monkey-patching might be hacky, but an official implementation of stackful coroutines (similar to Lua's) need not have been.
Instead, stackless coroutines were chosen - maybe on merit, but also maybe because C#'s async/await keywords were in vogue at the time and Python copied them.
> asyncio has so many sharp corners and design issues [...] bordering on being fundamentally broken
> other languages and libraries did what asyncio does significantly better
> it is baffling how the library made it out of provisional status with such glaring flaws
Unfortunately this seems common in Python 3.x. The hype-cycle focuses on a feature from another language(s), and Python rushes to add its own version without sufficient attention to its design, nor how the feature complements the rest of the language (if at all).
Async, type hints and pattern matching all fit this description. `match` is probably the worst offender - it seems to have been added mainly so that novices choosing a language will not veto Python because it's missing the current feature-du-jour. [0]
Type hints are at least having their flaws addressed - but the flaws can't be removed due to back-compat. This leads to a C++-like situation of "this is the modern way, these other approaches are deprecated". So much for "one way to do it".
I feel like this started/greatly accelerated when Guido stepped down as BDFL. Python is now on a path where the essence of what made it popular (readable, well designed, productive) is being crushed under the weight of its popularity. The language now feels bloated and needlessly complex in areas that were previously limited, but simple.
I recently chased down a bug where something was accidentally made a class variable because a type hint was left off it by accident and it clicked for me that Python is not the same language I loved at the start of my career.
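A minimal reconstruction of that class of bug, assuming the common dataclass case (the `User` class here is hypothetical): leaving the annotation off turns what looks like a field default into a shared class variable.

```python
from dataclasses import dataclass, fields

@dataclass
class User:
    name: str
    admin = False  # annotation left off: this is a class variable, not a field

assert [f.name for f in fields(User)] == ["name"]  # 'admin' never became a field
assert User(name="alice").admin is False           # silently shared by all instances
# User(name="bob", admin=True) would raise TypeError: unexpected keyword argument
```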
The problem with any project is that at some point it's essentially complete, and all we need is small maintenance to keep it going. Google used to have an algorithm that pretty much solved web search. Tinder solved dating. Spotify solved music delivery. The problem is, you're sitting with a hundred managers and a thousand engineers and all these people expect growth. So you have to keep going, even if the only direction is down, because if you don't, you'll be forced out of the organization and replaced by someone who will. So you do go down. And then everyone's surprised and playing the blame game.
> I recently chased down a bug where something was accidentally made a class variable because a type hint was left off it by accident
That's the reverse situation to one I've come across - a novice accidentally wrote `a : 4` instead of `a = 4` and was surprised that `a` was still undefined. There was no error because the `4` was interpreted as a type hint.
It's funny because a lot of OSS suffers from neglect.
But Python and a few other popular OSS (TypeScript is another example) have the opposite problem: too much development time is spent on them, too many features are added, and the language bloats and becomes less nice to use
> I feel like this started/greatly accelerated when Guido stepped down as BDFL
Same, but I don't think that's the direct cause. Guido was actually in favor of all these features (as well as Walrus, of course) - so it's not like he would have vetoed them if he were still BDFL.
> I don't feel calling modern Match/Case syntax a "du-jour" feature is plausible
I do. To be clear: I'm not saying that (well-designed versions of) pattern-matching will become less useful over time, but that they will become less fashionable.
Python will then, no doubt, add another language construct to appeal to those who choose languages solely based on a checklist of features.
In my experience, the key to using asyncio is to use anyio. Anyio is an interface that you can use ontop of asyncio and fixes most of its shortcomings.
I have a couple of small tools which work well. One is basically a 200 loc script that converts files.
The moment you need multiple files, the project is probably big enough that you really need a strong type system.
Java and C# fit extremely well for this. Both have caught up in terms of ease of use. The latest .NET lets you run a C# file individually like a bash script.
The gift and curse of Python is that it's really easy to do a lot. So you can be sloppy, just tossing stuff in, up until issues start to arise. Next thing you know you have a giant code base no one has any chance of understanding.
Still, it's a great first language. You can make small tools within a month. C isn't nearly as forgiving.
>"Still, it's a great first language. You can make small tools within a month. C isn't nearly as forgiving."
If C and Python are the only options I would strongly vote for C being the first. It will weed out those who are unfit.
Myself, I use many languages including Python, but I started programming by entering machine code in hex. Obviously I do not do it now, but I think it benefited me in general.
Please do not give us other FP users a bad name :)
Python is high level insofar as it's very far from the metal, and insofar as you can define quite sophisticated abstractions. Would I use Python to write a compiler or sophisticated graph algorithms? Probably not if I had any choice in the matter. But it has proven to be a great language for experimenters in numerical analysis, data science, machine learning, etc. who don't care (and don't want to care) about how the machine works.
>"But it has proven to be a great language for experimenters in numerical analysis, data science, machine learning, etc. who don't care (and don't want to care) about how the machine works."
Sure, Python is an OK glue/script language to call a bunch of specialized libs doing some calculations and display the results. This has nothing to do with professional programming. And I agree that data scientists do not and should not give a flying fuck about how the machine works.
>"Please do not give us other FP users a bad name :)"
I can't decipher his attempt at humor but just in case: FU ;)
The really ironic thing about this industry is you can use a high level language and make an unholy amount of money.
And you also have a bunch of low-level embedded engineers barely making 60K or so.
I don't really care about how a computer works, like most of us probably don't care how a car works. We just know it gets us to where we need to be.
Python can get you a working REST API in 10 loc. Try that in C.
The original sin of Python ( arguably JavaScript as well) is these languages were never meant to scale to large code bases. VS Code and other IDEs use various tricks to help, but it gets weird.
>"The really ironic thing about this industry is you can use a high level language and make an unholy amount of money.
And you also have a bunch of low-level embedded engineers barely making 60K or so."
What industry? I know low-level stuff but do not get hung up on it. I run my own company and develop enterprise-grade products for clients. I make well above 60k.
Well, if you're running your own business you'll definitely make more than an IC.
In my experience it's much easier to write Python or JavaScript at a professional level vs C or another more difficult language.
I think it's calmed down, but for a while Ruby was hot and you had 300k TC jobs demanding it.
The highest paying job I've had so far had us using Python. Fintech loves Python. Maybe one day I'll get a job at Jane Street.
Python's not fun in a larger code base though...
Edit: I'm actually a bit intrigued as to what you consider professional programming. As far as I'm concerned if I write code and I get a paycheck that's professional.
- a reasonable package manager with lockfiles and transitive dependency versions
- package org namespaces
- steps towards hermetic builds with less linking against system libraries and dependencies
- cleanup and slimming down of the standard library. Removal of cruft. Less "batteries included" so people lean more on packages and enforcing good practices with packages.
"make depending on system python an antipattern" is a social problem, not a technical one. uv's virtualenvs are not special (aside from not bootstrapping pip; bootstrapping it is already optional, although it's the default); the standard library `venv` writes `pyvenv.cfg` files that tell Python whether or not to use the system `site-packages`.
Pip has attempted to resolve and install transitive dependencies since even before 2020. That's just when they introduced a new, backtracking resolver (which can be slow because it is thorough and simple-minded about its backtracking).
"hermetic builds" sounds to me like it's referring to packages that don't depend on system binaries being present, by the magic trick of separately providing them (even if redundant). That's... just a wheel.
But also: currently, Python 4 is out of the question, and even if it were in play, "we should take this opportunity to integrate package management into the language" would be completely out of the question. That's just not how Python's politics work internally. There is generally pretty broad agreement anyway that the reasons packaging problems are hard to solve in Python (in the places where they actually are) are: 1) people don't agree on the design for a solution; 2) all that code in other programming languages that people want to interface with from Python (which, to be clear, is not something they're trying to get rid of); 3) backwards compatibility concerns. Making a clean break is not going to help with the first two and is directly opposed to the third.
> An organization would parent all of its packages under its namespace.
If you're talking about what you write in the code to import it, nothing prevents anyone from doing this. They just don't, because PyPI is a free-for-all by design.
> A project should blow up if you attempt to use system python. It should not work.
Why?
> `python main.py` should handle all of this for you.
How could it, given the issues you've identified?
> You keep seeing this come up because it's still a systemic issue with the ecosystem. Python needs to enforce this top down.
A very large fraction of Python users (whom you'll likely never hear from because they have their own ways of sharing code amongst themselves) actively don't want that kind of enforcement.
uv is still pretty young, a year and a half since initial public release? It's gotten a lot of traction but yes, it hasn't eaten the "market" for python package management, quite yet. I'd bet that there are users and developers of those ML projects that use uv to manage their environment, but aren't checking in a lockfile (depending on what kind of project it is, it might not make sense to check in the lockfile, I guess).
> A project should blow up if you attempt to use system python. It should not work.
Disagree here - an environment is an environment. Yes, the system python environment is kind of shit, but it does provide one - if a package cares about where exactly it gets dependencies, other than "here's a directory with packages", it's doing something wrong. I'm not sure how you'd enforce that, as nice as it sounds from the point of view of getting system python to go away.
> `python main.py` should handle all of this for you.
Just to be clear - I agree with you. My impression of the python packaging ecosystem has been that it's kind of shit, various tries have been made at fixing it with various tools, all of them have had issues with one workflow or another. But things are now trending in a much more positive direction.
The tragedy of Python 3 is that they made the community go through a billion dollar migration but didn't tackle any of the hard stuff. And the reason they didn't was the Perl 6 debacle. So it's all Larry Wall's fault.
Python 3 took the exact same path as Larry Wall's Perl 6. But unlike Perl, it was able to come out largely unscathed.
Python could do with more major version upgrades, but small scoped upgrades that only change tiny, manageable pieces. Forcing a good package manager on all Python developers would be a good candidate for this.
> They do not enforce it. It's not about "can do". It's about defaults and enforcing stricter standards.
How exactly do you want to "enforce" an "optional" runtime type check for an interface, that is different from opting in by calling `isinstance`?
For that matter, `TypeError` exists and is raised non-optionally by all sorts of things.
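For what it's worth, the stdlib's existing opt-in runtime check for an interface looks like this - `typing.runtime_checkable` lets `isinstance` do a structural check (the `Closeable`/`FileLike` names are made up for the sketch):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closeable(Protocol):
    def close(self) -> None: ...

class FileLike:
    def close(self) -> None:
        pass

assert isinstance(FileLike(), Closeable)   # structural check, opted into at the call site
assert not isinstance(object(), Closeable)
```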
> And in no world are Python builds and dependencies solved. It's a major headache.
For your specific set of dependencies, perhaps. Lots of people are using Python just fine. A large majority of what I'm interested in using would install fully (including transitive dependencies) from pre-built wheels on my system; and of those wheels, a large majority contain only Python code that doesn't actually require a build step. (Numpy is the odd one out here.)
In fact, of the top 10 largest packages in my system's `dist-packages` (all stuff that came provided with my Linux distro; I respect PEP 668 and don't let Python native packaging tools touch that environment), at least 9 have wheels on PyPI, and 4 of them have `none-any` wheels. (And of course, tons of the smaller ones are pure Python - there's simply no room for a compiled binary there.)
Look, it's correct to understand that the BEST refactoring of all is to change to a language + libraries that fix the core issues (similar to how you can't outrun a bad diet, you can't outrun bad semantics).
BUT changing also means losing a lot of nice things.
For example, I moved from Python to F# and then Rust, which I'd say was the best overall decision, BUT I dearly miss Django (and the instant run and REPL).
IF I could keep Django (which transitively means Python) and get a better type system and better perf from it, I would consider it and surely use it - something like "Python gets as fast as Swift or Go, but with Rust's algebraic types, whatever it takes to get there, and yet is still Django and Python", ok?
Is that unreasonable? But sincerely, it would be nice if the best ideas got retrofitted into existing languages, because I wish everyone could get as many nice things as possible instead of sticking with suboptimal options (that's what I'm saying: breaking languages is not something that must be forbidden forever).
Is that unreasonable? (I say again) Could be, but I think it's a good thing to explore. Maybe it won't amount to much, and if in the end things get better, great!
Personally, I do; my only interactions with Python are unwilling (work or AI-related code). Given that I don't have much of a choice in the matter, I'd like to see Python improve as a language and as an ecosystem. I hope that one day I will not feel dread upon encountering it in the wild.
Setuptools comes to around 100kloc now (per a naive search and `wc` of the .py files in a new environment) and installs without a hitch for everyone. Pip, nearly double that. Yes, both of those heavily use vendoring, but only because of bootstrapping problems: they are the default build tool and package installer, respectively. (And quite a bit of devendoring would be possible if some system maintainers didn't insist on "building from source" when there is only Python code. Yes, technically pre-compiling .py to .pyc can be considered "building", but pip would already do that during installation.) If they didn't have to worry about those issues (or the additional Pip issue of "how do you get the packages for Pip's dependencies if you don't already have Pip?"), they would install and run just fine from a series of wheels that are all pure Python code.
For that matter, one of those dependencies, `pyparsing`, is around 10k lines installed (and the repository is much larger with tests etc.). That's with it not having any dependencies of its own.
There is no real barrier anywhere around the 1k loc mark. The problems large projects run into have more subtle causes.
idk, maybe I'm old, but most of the high-scale serious systems I have seen are in a language like Java or C++. Big important Python systems are mostly from startups that succeeded and have trouble migrating off.
I'm working on an ML platform in Elixir. Orchestrating CPU-intensive Python processes is central to the project, so I created a session based pooler called `snakepit` [1].
The latest revision is building a batteries-included gRPC Python bridge that enables streaming, bi-directional tool use, as well as an innovative variables feature for experimental ML inspired by ideas from DSPy's team.
One of the later project goals: A Python client that can manage the Elixir orchestrator that manages pools of Python, in a distributed environment. Maybe I'll call that submodule `snakepits`. In this embodiment, it will be an effective albeit much more sophisticated replacement for `asyncio` for some use cases.
Agree. If genuine pre-emptive cancellation from the outside is needed, all languages support the primitive which allows that: a process.
Many of the tradeoffs that come from using a process rather than a thread descend directly from that property of support for arbitrary-point cancellation.
You can't always control what the thread is doing, like what code is being executed. Even if you put flags and ensure your code is periodically checking them, your code is not necessarily the only thing being executed by the thread. It may be calling into external code. You need a way to interrupt it.
> You can't always control what the thread is doing, like what code is being executed.
If this is the case then you can't use threads for this workload. If you're calling into some external code that could be doing whatever then one of the things it could be doing is holding locks. If you want a unit of execution you can murder with prejudice then you need the operating system's help which means a process.
On both Linux and Windows the situations where you can safely kill a thread is when you have complete control and knowledge of what's running in it, the very opposite of when people want to be able to kill threads.
No, that's exactly why you must not have "a way to interrupt it". Doing so could leave said external code in an invalid state and there would be no plausible way to do anything about it.
Yes, it clearly has a sharper learning curve than Python. But I think idiomatic Rust feels a lot like a good Python implementation of the same thing would, in ways I can’t quite explain. The borrow checker is certainly something to come to grips with, but it also nudges you to write code the way you should probably be writing it anyway. Don’t pass complex structures around and mutate them everywhere. Don’t return 5 kinds of object from the same function. Don’t go crazy with inheritance vs plain, understandable, testable functions.
Ruby is probably the most-similar popular language - a good choice for small projects/prototyping/exploratory programming. Personally I'm still using Python for this, but I'm basically still writing Python 2.7 (plus f-strings).
For larger projects, Go seems to follow the "Zen of Python" better than Python itself nowadays.
If you have access to f-strings then you have had to change quite a few things that are mutually incompatible between 2.x and 3.x. Perhaps you feel like your Python 3 code (which requires at least 3.6) is "basically still 2.7" because you aren't using things like type annotations (and actual tooling to check them), or the walrus operator, or the match statement. But all of these things are entirely optional, and plenty of people feel that they're actually un-Pythonic.
> you have had to change quite a few things that are mutually incompatible between 2.x and 3.x
I made those changes while still targeting 2.7: around 2018 I was writing code for 2.7 that was forward-compatible with 3.x, as was common at the time. Since then, f-strings have been worth adding to my toolbox - most of Python 3's other new features haven't been.
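The common pattern at the time was a header of `__future__` imports so that 2.7 code behaved like 3.x; on Python 3 these imports are accepted as no-ops, which is what made the code forward-compatible. A typical sketch:

```python
# Typical header of 2.7 code written to be forward-compatible with 3.x;
# under Python 3 these imports are no-ops.
from __future__ import absolute_import, division, print_function, unicode_literals

# With `division`, `/` is true division on both interpreters.
print(7 / 2)
```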
At this point, there are good solutions to most of the problems. It is just hard and awkward to avoid the standard library bits and the common tools (e.g. use uv instead of pip, requests instead of urllib, pytest instead of unittest, marimo instead of jupyter, trio instead of asyncio...).
Of course having a "batteries included" language where the batteries hurt you is not great.
Nim is often recommended as an alternative to Python, but how similar is it? I know it has Python-like significant indentation, but I'd have thought syntax is the least important consideration when choosing a language?
This answers @pansa2's question in a way that leans more on one aspect of Nim's greater syntactic & code-transformation flexibility (which is great!), but I suspect pansa2 was more interested in differences between "idiomatic Nim" and Python, for which there are more details here: https://github.com/nim-lang/Nim/wiki/Nim-for-Python-Programm...
A big issue with asyncio is IME the documentation. The main document is a tutorial-style tour of the features, scarcely mentioning the important details. These are often hidden in comments on seemingly unrelated features or, more likely, reddit and SE.
Other Python documentation indeed includes a tutorial, but also has a comprehensive reference. That is not my experience with asyncio.
My only experience with this thing was working with a Node dev who was trying out Python. What a mess: he didn't quite get how processes or threads worked, he'd just learned Node's event/promise model and brought it to Python.
In all fairness, that's a pretty common experience with any transition between dissimilar platforms.
I'll never forget one Java developer who transitioned to JS frontend work (pre-typescript days). Try/catch everywhere.
There's a semi-accurate mindset that "good developers are good in any language" as if software engineers are fungible resources, but the transition time to a novel paradigm isn't really well appreciated.
As somebody who wrote a couple of services with Twisted in the late '00s, I find asyncio a breath of fresh air. That doesn't mean it's perfect, but it's a lot more usable than Twisted.
I know that it's not possible, but I would like to have in Python something similar to the goroutines, waitgroups and channels that Go has.
“Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.” – Virding’s first rule of programming
It will still leave resources in an unknown and possibly inconsistent state; moreover, you need to use `pthread_cleanup_push` to register everything that may need to be done to clean up the thread... for languages with RAII this basically means the destructor of every local variable, which isn't really reasonable, so you end up with a ton of code where it won't work. You can claim it's "standard", but de facto it really isn't, and most code is not "well-formed" from its point of view.
Cancellation is just an exception, so destructors work automatically. Any implementation of a language that uses RAII without integrating cancellation into it is fundamentally broken and should be avoided at all costs. `pthread_cleanup_push` (or `__attribute__((cleanup))`) is only needed if you're implementing it by hand.
Since the set of functions that can be cancelled is documented (and pretty obvious - roughly, any blocking function but notably excluding mutex functions), there's no "unknown" involved. At most there is "I insisted on writing C but refused to take the effort to make my code correct in a threaded environment."
Notably, `malloc` is not a cancellation point so this all is much easier to deal with than signals.
I have a feeling that the core issue is that the author is trying to implement complex communication mechanisms with a tool that was designed for much more basic usage. I don't understand why half of the article is about cancellation, since cancellation as a concept is wrong. Have you ever tried to cancel peeing mid-pee? It just doesn't work.
Not sure if the other half of the article is worth reading if the first half has a completely wrong premise. Or does this guy live in some utopian future where it's reasonable to assume that everything can be cancelled at any point?
I really wish the community had coalesced around gevent.
- no async/await, instead every possible thing that could block in the standard library is monkey-patched to yield to an event loop
- this means that you can write the same exact code for synchronous and concurrent workflows, and immediately get levels of concurrency only bounded by memory limits
- you'll never accidentally use a synchronous API in an async context and block your entire event loop (well, you can, if you spin on a tight CPU-bound loop, but that's a problem in asyncio too)
- the ecosystem doesn't need to implement libraries for asyncio and blocking code, everything just works
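For readers who haven't used it, a minimal sketch of what the bullet points above look like in practice (assuming gevent is installed; `patch_all()` must run before other imports, and the task function is invented):

```python
# gevent monkey-patches blocking stdlib calls to yield to its event loop.
from gevent import monkey
monkey.patch_all()

import time
import gevent

def task(n):
    # time.sleep is now patched, so these "blocking" calls run
    # concurrently instead of serially.
    time.sleep(0.1)
    return n

start = time.time()
jobs = [gevent.spawn(task, n) for n in range(10)]
gevent.joinall(jobs)
elapsed = time.time() - start
print([j.value for j in jobs], round(elapsed, 1))  # ten sleeps in ~0.1s
```

The point of the sketch: `task` is ordinary synchronous code, yet ten of them overlap, with no `async`/`await` anywhere.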
There's a universe where the gevent patches get accepted into Python as something like a Python 4, breaking some niche compatibility but creating the first ever first-class language with green-threading at its core.
But in this universe, we're left with silly things like "every part of the Django ecosystem dealing with models must be rewritten with 'a' prefixes or be abandoned" and it's a sad place indeed.
While having the same code for sync and async sounds nice, monkey patching code at runtime seems hacky. Any library that wants to use a lower-level implementation of network calls would need to handle the monkey patching themselves, I assume.
The idea would be that if this was accepted into the core as the way to go forward, it wouldn't be "monkeypatched" anymore, it would just be in core, officially supported.
Java 1.1 had green threads, so does Java 21+ (though Java 21's green threads are actually an M:N threading model not the M:1 threading model of Java 1.1).
Agreed. I chose gevent over asyncio for a backend in 2019, and we're still using it. Works pretty well. No plans to phase out gevent just yet.
Though the community has clearly centered on asyncio by now. So if I were to start a new backend today, it would reluctantly be asyncio. Unfortunately...
I worked quite a bit with gevent'd code about 10+ years ago and also agree. Dealing with function coloring is incredibly unproductive. This is one of the things Go got right.
the sans-IO approach seems roughly correct
libraries provide pure processing machines that pull data in through well-defined input functions (and output through return values), and then it's up to the call site (or host code, or whatever we want to call our code) to decide what kind of color to paint the whole thing eventually.
this helps testability and composability too.
(of course this is not new, there was Scala's Iteratee and there's full program interpretation ... https://zio.dev/1.0.18/overview/overview_background/ )
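A toy sketch of the sans-IO shape (the `LineProtocol` name is invented): a pure bytes-in, events-out machine that the caller can drive from sync or async code equally well.

```python
class LineProtocol:
    """Accumulates bytes and emits complete lines. No I/O anywhere -
    the caller feeds it data from whatever transport it likes."""

    def __init__(self):
        self._buf = b""

    def receive_data(self, data: bytes) -> list[bytes]:
        # Everything after the last newline stays buffered for next time.
        self._buf += data
        *lines, self._buf = self._buf.split(b"\n")
        return lines

proto = LineProtocol()
print(proto.receive_data(b"hel"))      # []
print(proto.receive_data(b"lo\nwor"))  # [b'hello']
print(proto.receive_data(b"ld\n"))     # [b'world']
```

Because the machine never blocks, wrapping it in a blocking socket loop or an asyncio protocol is the caller's one-line decision, and the parsing logic is trivially unit-testable.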
I like `gevent` but I think it may have been too hacky of a solution to be incorporated to the main runtime.
"creating the first ever first-class language with green-threading at its core."
... isn't that what Go is? I think out of all languages I use extensively, Go is the only one that doesn't suffer from the sync/async function coloring nightmare.
> the sync/async function coloring nightmare
I'm with you that function "coloring" (monads in the type system) can be unergonomic and painful.
> ... isn't that what Go is? I think out of all languages I use extensively, Go is the only one that doesn't suffer from the […] coloring nightmare.
Because it doesn't have Future/Promise/async as a built-in abstraction?
If my function returns data via a channel, that's still incompatible with an alternate version of the function that returns data normally. The channel version doesn't block the caller, but the caller has to wait for results explicitly; meanwhile, the regular version would block the caller, but once it's done, consuming the result is trivial.
Much of the simplicity of Go comes at the expense of doing everything (awaiting results, handling errors, …) manually, every damn time, because there's no language facility for it and the type system isn't powerful enough to make your own monadic abstractions. I know proponents of Go tend to argue this is a good thing, and it has merits. But making colorful functions wear a black-and-white trenchcoat in public doesn't solve the underlying problem.
(goroutines are nice, though.)
One of the largest problems identified in the original "what color is your function" article ( https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... ) is that, if you make a function async, it becomes impossible to use in non-async code. Well, maybe you can call "then" or whatever, but there's no way to take an async function and turn it into something that synchronously returns its value.
But in Go, it's very easy to do this; you can just do "result := <- ch" to obtain the value from a channel in synchronous code. (This blocks the thread, but in Go's concurrency model this isn't a problem, unlike in JavaScript.) Similarly it's very easy to take a synchronous function and do "go func() { ch <- myFunction() }()" to make it return its result in a channel.
> But in Go, it's very easy to do this; you can just do "result := <- ch" to obtain the value from a channel in synchronous code.
What you call "synchronous code" is really asynchronous. To actually have something that resembles synchronous code in Go you have to use LockOSThread, but this has the same downsides as the usual escape hatches in other languages. This is also one of the reasons cgo has such a high overhead.
Hm. You and parent comment have made me realize something: as much as I dislike how many useful abstractions are missing from Go, async for blocking syscalls is not one of them, since the "green thread" model effectively makes all functions async for the purposes of blocking syscalls. So I retract my "you have to do it manually" comment in this case. I guess that's part of why people love Go's concurrency.
Of course, as you said, stackful coroutines come with runtime overhead. But that's the tradeoff, and I'm sure they are substantially more efficient (modulo FFI calls) than the equivalent async-everywhere code would be in typical JS or Python runtimes.
My "you have to do it manually" comment comes from some other peeves I have with Go. I guess the language designers were just hyper-focused on syscall concurrency and memory management (traditionally hard problems in server code), because Go does fare well on those specific fronts.
I remember this article in 2015 being revelatory. But it turned out that what we thought was an insurmountable amount of JS code written with callbacks in 2015 would end up getting dwarfed by promise-based code in the years to come. The “red functions” took over the ecosystem!
With Python, I’m sure some people expect the same thing to happen. I think Python is far more persistent, though. So much unmaintained code in the ecosystem that will never be updated to asyncio. We’ll see, I suppose, but it will be a painful transition.
All goroutines are async in some sense, so generally you don't need to return a channel. You can write the function as if it's synchronous, and the caller can call it in a goroutine and send to a channel if they want. This does force the caller to write some code, but the key is that you usually don't need to do this unless you're awaiting multiple results. If you're just awaiting a single result, you don't need anything explicit, and blocking on mutexes and IO will not block the OS thread running the goroutine. If you're awaiting multiple things, it's nice for the caller to handle it so they can use a single channel and an errgroup.

This is different from many async runtimes because of the automatic cooperative yielding in goroutines. In many async runtimes, a function that wasn't explicitly designed as async will block the executor and lead to issues if you try this, but in Go you can almost always just turn a "sync" function into an explicitly async one.
> I like `gevent` but I think it may have been too hacky of a solution to be incorporated to the main runtime
gevent's monkey-patching might be hacky, but an official implementation of stackful coroutines (similar to Lua's) need not have been.
Instead, stackless coroutines were chosen - maybe on merit, but also maybe because C#'s async/await keywords were in-vogue at the time and Python copied them.
Stackless Python has existed for a long time. It's "stackful" according to your definition; the stack its name refers to is the C stack.
> Stackless Python
Reminds me of ANTLR - another project whose name is the opposite of what it actually does.
Stackless python, we were not ready for it.
Erlang did it 20-30 years before Go
Green threads, now THAT's a buzzword I haven't heard for a long time...
> asyncio has so many sharp corners and design issues [...] bordering on being fundamentally broken
> other languages and libraries did what asyncio does significantly better
> it is baffling how the library made it out of provisional status with such glaring flaws
Unfortunately this seems common in Python 3.x. The hype-cycle focuses on a feature from another language(s), and Python rushes to add its own version without sufficient attention to its design, nor how the feature complements the rest of the language (if at all).
Async, type hints and pattern matching all fit this description. `match` is probably the worst offender - it seems to have been added mainly so that novices choosing a language will not veto Python because it's missing the current feature-du-jour. [0]
Type hints are at least having their flaws addressed - but the flaws can't be removed due to back-compat. This leads to a C++-like situation of "this is the modern way, these other approaches are deprecated". So much for "one way to do it".
[0] https://discuss.python.org/t/pep-8012-frequently-asked-quest...
I feel like this started/greatly accelerated when Guido stepped down as BDFL. Python is on a path where the essence of what made it popular (readable, well designed, productive) is being crushed under the weight of its popularity. The language now feels bloated and needlessly complex in areas that were previously limited, but simple.
I recently chased down a bug where something became a class variable because a type hint was accidentally left off, and it clicked for me that Python is not the same language I loved at the start of my career.
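A hedged sketch of the kind of bug described (assuming a dataclass was involved; the class and field names are invented): leaving the annotation off an attribute silently turns it into a shared class variable instead of a per-instance field.

```python
from dataclasses import dataclass, fields

@dataclass
class Job:
    name: str      # annotated: becomes a per-instance field
    attempts = 0   # annotation left off: a shared class variable!

# Only "name" is a dataclass field; "attempts" is shared by all instances.
field_names = [f.name for f in fields(Job)]
print(field_names)  # ['name']
```

Mutating `Job.attempts` then changes it for every instance at once, which is exactly the sort of spooky action that's painful to chase down.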
The problem with any project is that at some point it's essentially complete, and all we need is small maintenance to keep it going. Google used to have an algorithm that pretty much solved web search. Tinder solved dating. Spotify solved music delivery. The problem is, you're sitting with a hundred managers and a thousand engineers and all these people expect growth. So you have to keep going, even if the only direction is down, because if you don't, you'll be forced out of organization and replaced by someone who does. So you do go down. And then everyone's surprised and playing the blame game.
> I recently chased down a bug where something was accidentally made a class variable because a type hint was left off it by accident
That's the reverse situation to one I've come across - a novice accidentally wrote `a : 4` instead of `a = 4` and was surprised that `a` was still undefined. There was no error because the `4` was interpreted as a type hint.
It's funny because a lot of OSS suffers from neglect.
But Python and a few other popular OSS (TypeScript is another example) have the opposite problem: too much development time is spent on them, too many features are added, and the language bloats and becomes less nice to use
> I feel like this started/greatly accelerated when Guido stepped down as BDFL
Same, but I don't think that's the direct cause. Guido was actually in favor of all these features (as well as Walrus, of course) - so it's not like he would have vetoed them if he were still BDFL.
I don't feel calling modern Match/Case syntax a "du-jour" feature is plausible. Are you referencing something else?
> I don't feel calling modern Match/Case syntax a "du-jour" feature is plausible
I do. To be clear: I'm not saying that (well-designed versions of) pattern-matching will become less useful over time, but that they will become less fashionable.
Python will then, no doubt, add another language construct to appeal to those who choose languages solely based on a checklist of features.
I liked Python’s pattern matching until I picked up Rust. Oh… that’s what we could have had? Sigh.
In my experience, the key to using asyncio is to use anyio. Anyio is an interface that you can use ontop of asyncio and fixes most of its shortcomings.
https://anyio.readthedocs.io
The real problem is Python: great at the start of a project, but by the end I'm dying to rewrite it in Java.
It's fine for very very small projects.
I have a couple of small tools which work well. One is basically a 200 loc script that converts files.
The moment you need multiple files, the project is probably big enough that you really need a strong type system.
Java and C# fit extremely well for this. Both have caught up in terms of ease of use. The latest .NET lets you run a C# file individually like a bash script.
The gift and curse of Python is that it's really easy to do a lot. So you can be sloppy, just tossing stuff in, up until issues start to arise. Next thing you know you have a giant code base no one has any chance of understanding.
Still, it's a great first language. You can make small tools within a month. C isn't nearly as forgiving.
If you love the Python syntax, Nim is here.
>"Still, it's a great first language. You can make small tools within a month. C isn't nearly as forgiving."
If C and Python are the only options I would strongly vote for C being the first. It will weed out those who are unfit.
Myself I use many languages including Python but I started programming by entering machine codes in hex. Obviously I do not do it now but I think it did benefit me in general.
Not everyone is going to become a Staff Engineer at Google.
Python can help a bartender write a small application to track her stock for example. In fact with a bit of computer vision this can work very well.
I originally just wanted to make small games, I was hacking together sloppy JavaScript. I don't like gate keeping.
The future is probably even a higher level language than Python. Maybe an embedded llm that sorta decides what to do at runtime.
I was talking about teaching programming on pro level. If one just wants to play anything goes. Whatever they feel like.
>" I don't like gate keeping."
I hate it too. Cheers.
>"higher level language than Python"
I do not see Python as a high level language. Sure it is not assembly but that's about it in my opinion.
Please do not give us other FP users a bad name :)
Python is high level insofar as it's very far from the metal, and insofar as you can define quite sophisticated abstractions. Would I use Python to write a compiler or sophisticated graph algorithms? Probably not if I had any choice in the matter. But it has proven to be a great language for experimenters in numerical analysis, data science, machine learning, etc. who don't care (and don't want to care) about how the machine works.
>"But it has proven to be a great language for experimenters in numerical analysis, data science, machine learning, etc. who don't care (and don't want to care) about how the machine works."
Sure, Python is an OK glue/scripting language to call a bunch of specialized libs doing some calculations and display the results. This has nothing to do with professional programming. And I agree that data scientists do not and should not give a flying fuck about how the machine works.
>"Please do not give us other FP users a bad name :)"
I can't decipher this attempt at humor, but just in case: FU ;)
The really ironic thing about this industry is you can use a high level language and make an unholy amount of money.
And you also have a bunch of low-level embedded engineers barely making 60K or so.
I don't really care about how a computer works, like most of us probably don't care how a car works. We just know it gets us to where we need to be.
Python can get you a working REST API in 10 loc. Try that in C.
The original sin of Python ( arguably JavaScript as well) is these languages were never meant to scale to large code bases. VS Code and other IDEs use various tricks to help, but it gets weird.
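The "REST API in 10 loc" claim above roughly holds up even without a framework - a minimal sketch using only the standard library (the handler and JSON shape are invented):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Echo the requested path back as JSON.
        body = json.dumps({"path": self.path, "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
# server.serve_forever() would block here; call it when actually serving.
```

With a framework like Flask it shrinks further still, which is exactly why the "easy start, sprawling finish" dynamic the comment describes is so common.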
>"The really ironic thing about this industry is you can use a high level language and make an unholy amount of money. And you also have a bunch of low-level embedded engineers barely making 60K or so."
What industry? I know low-level stuff but do not get hung up on it. I run my own company and develop enterprise-grade products for clients. I make well above 60k.
Well, if you're running your own business you'll definitely make more than an IC.
In my experience it's much easier to write Python or JavaScript at a professional level vs C or another more difficult language.
I think it's calmed down, but for a while Ruby was hot and you had 300k TC jobs demanding it.
The highest paying job I've had so far had us using Python. Fintech loves Python. Maybe one day I'll get a job at Jane Street.
Python's not fun in a larger code base though...
Edit: I'm actually a bit intrigued as to what you consider professional programming. As far as I'm concerned if I write code and I get a paycheck that's professional.
Python needs a version 4 to make a clean break:
- runtime optionally type checked interfaces
- make depending on system python an antipattern
- a reasonable package manager with lockfiles and transitive dependency versions
- package org namespaces
- steps towards hermetic builds with less linking against system libraries and dependencies
- cleanup and slimming down of the standard library. Removal of cruft. Less "batteries included" so people lean more on packages and enforcing good practices with packages.
uv solves or improves 2,3,5. (It's scary how often lately I find myself saying or seeing someone say that)
---
For posterity -
uv is virtualenv-first, with uv-managed Python executables, and opt-in to system Python packages only. uv.lock, check. I think base pip also has transitive dependencies, though, and has since 2020? I know there's build isolation for dependencies in uv, but I'm not sure it solves the problem you have. What is it you mean by package org namespaces?
"make depending on system python an antipattern" is a social problem, not a technical one. uv's virtualenvs are not special (aside from not bootstrapping pip; bootstrapping it is already optional, although it's the default); the standard library `venv` writes `pyvenv.cfg` files that tell Python whether or not to use the system `site-packages`.
Pip has attempted to resolve and install transitive dependencies since even before 2020. That's just when they introduced a new, backtracking resolver (which can be slow because it is thorough and simple-minded about its backtracking).
"hermetic builds" sounds to me like it's referring to packages that don't depend on system binaries being present, by the magic trick of separately providing them (even if redundant). That's... just a wheel.
But also: currently, Python 4 is out of the question, and even if it were in play, "we should take this opportunity to integrate package management into the language" would be completely out of the question. That's just not how Python's politics work internally. There is generally pretty broad agreement anyway that the reasons packaging problems are hard to solve in Python (in the places where they actually are) are: 1) people don't agree on the design for a solution; 2) all that code in other programming languages that people want to interface with from Python (which, to be clear, is not something they're trying to get rid of); 3) backwards compatibility concerns. Making a clean break is not going to help with the first two and is directly opposed to the third.
I agree that uv's virtual envs are not special; what's special is the tooling around it that works fast and is easy to use.
And yet `uv` isn't widely used. None of the ML projects I'm frequenting utilize it.
> What is it you mean by package org namespaces?
eg. `google/protobuf`. An organization would parent all of its packages under its namespace.
> uv is virtual env, uv managed python executable first, opt in to system python packages only
A project should blow up if you attempt to use system python. It should not work.
`python main.py` should handle all of this for you.
An ideal Python would work like Rust's cargo. And there would be only one way to use it.
You keep seeing this come up because it's still a systemic issue with the ecosystem. Python needs to enforce this top down.
> An organization would parent all of its packages under its namespace.
If you're talking about what you write in the code to import it, nothing prevents anyone from doing this. They just don't, because PyPI is a free-for-all by design.
If you're talking about names used on PyPI, see https://peps.python.org/pep-0752/.
But this is not a language feature.
> A project should blow up if you attempt to use system python. It should not work.
Why?
> `python main.py` should handle all of this for you.
How could it, given the issues you've identified?
> You keep seeing this come up because it's still a systemic issue with the ecosystem. Python needs to enforce this top down.
A very large fraction of Python users (whom you'll likely never hear from because they have their own ways of sharing code amongst themselves) actively don't want that kind of enforcement.
uv is still pretty young, a year and a half since initial public release? It's gotten a lot of traction but yes, it hasn't eaten the "market" for python package management, quite yet. I'd bet that there are users and developers of those ML projects that use uv to manage their environment, but aren't checking in a lockfile (depending on what kind of project it is, it might not make sense to check in the lockfile, I guess).
Disagree here - an environment is an environment. Yes, the system Python environment is kind of shit, but it does provide one - if a package cares about where exactly it gets dependencies, other than "here's a directory with packages", it's doing something wrong. I'm not sure how you'd enforce that, as nice as it sounds from the point of view of getting system Python to go away. Have you seen https://docs.astral.sh/uv/guides/scripts/ ?

Just to be clear - I agree with you. My impression of the Python packaging ecosystem has been that it's kind of shit; various tries have been made at fixing it with various tools, and all of them have had issues with one workflow or another. But things are now trending in a much more positive direction.
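For reference, the uv scripts guide linked above leans on PEP 723 inline script metadata - a sketch (the `requests` dependency is just an example):

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "requests",
# ]
# ///
import requests  # resolved by `uv run script.py`, not by the system Python
```

`uv run script.py` reads the comment block, resolves the dependencies into an ephemeral environment, and runs the script there - the system Python's site-packages never enters the picture.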
> Python needs a version 4 to make a clean break
The tragedy of Python 3 is that they made the community go through a billion dollar migration but didn't tackle any of the hard stuff. And the reason they didn't was the Perl 6 debacle. So it's all Larry Wall's fault.
Python 3 took the exact same path as Larry Wall's Perl 6. But unlike Perl, it was able to come out largely unscathed.
Python could do with more major version upgrades, but small scoped upgrades that only change tiny, manageable pieces. Forcing a good package manager on all Python developers would be a good candidate for this.
> Less "batteries included" so people lean more on packages and enforcing good practices with packages.
No, just no. If anything, Python needs more batteries built-in.
The latest versions of Python 3 already have or can do most of these things.
They do not enforce it. It's not about "can do". It's about defaults and enforcing stricter standards.
Python is too flexible. It produces sloppy code.
And in no world are Python builds and dependencies solved. It's a major headache.
> > runtime optionally type checked interfaces
> They do not enforce it. It's not about "can do". It's about defaults and enforcing stricter standards.
How exactly do you want to "enforce" an "optional" runtime type check for an interface, that is different from opting in by calling `isinstance`?
For that matter, `TypeError` exists and is raised non-optionally by all sorts of things.
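For what it's worth, the closest Python currently gets to "runtime optionally type checked interfaces" is `runtime_checkable` protocols - opt-in, structural, and method-presence-only (a sketch; the names are invented):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closeable(Protocol):
    def close(self) -> None: ...

class FileLike:
    def close(self) -> None:
        pass

# Opt-in runtime interface check: structural, not nominal.
print(isinstance(FileLike(), Closeable))  # True
print(isinstance(object(), Closeable))    # False
# Caveat: only method *presence* is checked, not signatures or types.
```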
> And in no world are Python builds and dependencies solved. It's a major headache.
For your specific set of dependencies, perhaps. Lots of people are using Python just fine. A large majority of what I'm interested in using would install fully (including transitive dependencies) from pre-built wheels on my system; and of those wheels, a large majority contain only Python code that doesn't actually require a build step. (Numpy is the odd one out here.)
In fact, of the top 10 largest packages in my system's `dist-packages` (all stuff that came provided with my Linux distro; I respect PEP 668 and don't let Python native packaging tools touch that environment), at least 9 have wheels on PyPI, and 4 of them have `none-any` wheels. (And of course, tons of the smaller ones are pure Python - there's simply no room for a compiled binary there.)
If you want Python to be like TypeScript, why don't you just write TypeScript then?
Because typescript is not python.
Look, it is correct to understand that the BEST way, the BEST refactoring of all, is to change to a language + libraries that fix the core issues (similar to how you can't outrun a bad diet, you can't outrun bad semantics).
BUT changing also means losing a lot of nice things.
For example, I moved from Python to F# and then Rust, which I'd say was the best overall decision, BUT I dearly miss Django (and the instant run and the REPL).
IF I could get Django (which transitively means using Python) with a better type system and better perf, I would consider it and surely use it - something like "Python becomes as fast as Swift or Go but with Rust's algebraic types, whatever it takes to get there, and yet is still Django and Python", ok?
Is that unreasonable? But sincerely, it would be nice if the best ideas got retrofitted into other languages, because, well, I wish everyone got as many nice things as possible instead of sticking with suboptimal options (that is what I'm saying: breaking languages is not something that must be forbidden forever).
Is that unreasonable? (I ask again.) Could be, but I think it's a good thing to explore. Maybe it's not that much, and if in the end things get better, great!
Personally, I do; my only interactions with Python are unwilling (work or AI-related code). Given that I don't have much of a choice in the matter, I'd like to see Python improve as a language and as an ecosystem. I hope that one day I will not feel dread upon encountering it in the wild.
Either python needs to be a language geared at code no longer than 1000 loc, or it needs tools to enable safely building large scale apps.
Setuptools comes to around 100kloc now (per a naive search and `wc` of the .py files in a new environment) and installs without a hitch for everyone. Pip, nearly double that. Yes, both of those rely heavily on vendoring, but that's because of bootstrapping problems: they are the default build tool and package installer respectively. (And quite a bit of devendoring would be possible if some system maintainers didn't insist on "building from source" when there is only Python code. Yes, technically pre-compiling .py to .pyc can be considered "building", but pip would already do that during installation.) If they didn't have to worry about those issues (or the additional pip issue of "how do you get the packages for pip's dependencies if you don't already have pip?"), they would install and run just fine from a series of wheels that are all pure Python code.
For that matter, one of those dependencies, `pyparsing`, is around 10k lines installed (and the repository is much larger with tests etc.). That's with it not having any dependencies of its own.
There is no real barrier anywhere around the 1k loc mark. The problems large projects run into have more subtle causes.
Sounds like trading one problem for another
idk maybe im old, but most of the high scale serious systems i have seen are in a language like java or c++. big important python systems are mostly from startups that succeeded and have trouble migrating off
Check out Java 23. It has come a long way.
I'm working on an ML platform in Elixir. Orchestrating CPU-intensive Python processes is central to the project, so I created a session based pooler called `snakepit` [1].
The latest revision is building a batteries-included gRPC Python bridge that enables streaming, bi-directional tool use, as well as an innovative variables feature for experimental ML inspired by ideas from DSPy's team.
One of the later project goals: A Python client that can manage the Elixir orchestrator that manages pools of Python, in a distributed environment. Maybe I'll call that submodule `snakepits`. In this embodiment, it will be an effective albeit much more sophisticated replacement for `asyncio` for some use cases.
[1] https://github.com/nshkrdotcom/snakepit
> In the traditional model of concurrent programming using threads, there is no clean way to do cancellation.
The only solution is the one being complained about--for a thread to voluntarily check if it needs to be cancelled.
Java deprecated stop() for good reasons.
"Those who cannot remember the past are condemned to repeat it."
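A minimal sketch of that voluntary-check pattern in Python (names invented), which is the same idea as Java's cooperative `interrupt()` model:

```python
import threading

cancel = threading.Event()

def worker():
    while not cancel.is_set():
        # Do one bounded unit of work, then re-check the flag.
        # Blocking waits should use cancel.wait(timeout) so the
        # thread notices cancellation promptly instead of sleeping blind.
        cancel.wait(0.01)
    # Cleanup runs here, at a point the thread knows is safe.
    print("worker: cleaned up and exiting")

t = threading.Thread(target=worker)
t.start()
cancel.set()        # request cancellation
t.join(timeout=2.0)
print("alive:", t.is_alive())
```

The thread only ever dies at points it chose, which is exactly why its locks and data structures stay consistent - and exactly what `stop()`-style forced termination can't guarantee.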
Agree. If genuine pre-emptive cancellation from the outside is needed, all languages support the primitive which allows that: a process.
Many of the tradeoffs that come from using a process rather than a thread descend directly from that property of support for arbitrary-point cancellation.
You can't always control what the thread is doing, like what code is being executed. Even if you put flags and ensure your code is periodically checking them, your code is not necessarily the only thing being executed by the thread. It may be calling into external code. You need a way to interrupt it.
> You can't always control what the thread is doing, like what code is being executed.
If this is the case then you can't use threads for this workload. If you're calling into some external code that could be doing whatever then one of the things it could be doing is holding locks. If you want a unit of execution you can murder with prejudice then you need the operating system's help which means a process.
On both Linux and Windows the situations where you can safely kill a thread is when you have complete control and knowledge of what's running in it, the very opposite of when people want to be able to kill threads.
No, that's exactly why you must not have "a way to interrupt it". Doing so could leave said external code in an invalid state and there would be no plausible way to do anything about it.
This seems like a microcosm of the general design problems that pervade Python. It is a fundamentally quirky language.
Any recommended alternative?
I’m bracing myself for the response, but.
Rust.
Yes, it clearly has a sharper learning curve than Python. But I think idiomatic Rust feels a lot like a good Python implementation of the same thing would, in ways I can’t quite explain. The borrow checker is certainly something to come to grips with, but it also nudges you to write code the way you should probably be writing it anyway. Don’t pass complex structures around and mutate them everywhere. Don’t return 5 kinds of object from the same function. Don’t go crazy with inheritance vs plain, understandable, testable functions.
Rust just feels right to my Pythonista brain.
Ruby is probably the most-similar popular language - a good choice for small projects/prototyping/exploratory programming. Personally I'm still using Python for this, but I'm basically still writing Python 2.7 (plus f-strings).
For larger projects, Go seems to follow the "Zen of Python" better than Python itself nowadays.
If you have access to f-strings then you have had to change quite a few things that are mutually incompatible between 2.x and 3.x. Perhaps you feel like your Python 3 code (which requires at least 3.6) is "basically still 2.7" because you aren't using things like type annotations (and actual tooling to check them), or the walrus operator, or the match statement. But all of these things are entirely optional, and plenty of people feel that they're actually un-Pythonic.
> you have had to change quite a few things that are mutually incompatible between 2.x and 3.x
I made those changes while still targeting 2.7: around 2018 I was writing code for 2.7 that was forward-compatible with 3.x, as was common at the time. Since then, f-strings have been worth adding to my toolbox - most of Python 3's other new features haven't been.
At this point, there are good solutions to most of the problems. It is just hard and awkward to avoid the standard library bits and the common tools (e.g. use uv instead of pip, requests instead of urllib, pytest instead of unittest, marimo instead of jupyter, trio instead of asyncio...).
Of course having a "batteries included" language where the batteries hurt you is not great.
Nim?
Nim is often recommended as an alternative to Python, but how similar is it? I know it has Python-like significant indentation, but I'd have thought syntax is the least important consideration when choosing a language?
https://github.com/nimpylib/nimpylib shows it can be made nearly identical.
This answers @pansa2's question in a way that leans more on one aspect of Nim's greater syntactic & code transformation flexibility (which is great!), but I suspect pansa2 was more interested in how "idiomatic Nim" differs from Python, for which there are more details here: https://github.com/nim-lang/Nim/wiki/Nim-for-Python-Programm...
Crystal.
[dead]
[dead]
A big issue with asyncio is IME the documentation. The main document is a tutorial-style tour of the features, scarcely mentioning the important details. These are often hidden in comments on seemingly unrelated features or, more likely, reddit and SE.
Most Python documentation does include a tutorial, but it also has a comprehensive reference. That has not been my experience with asyncio.
Oh cool. I wrote this. 100% hit rate on random posts of mine ending up on HN.
My only experience with this thing was working with a Node dev who was trying out Python. What a mess: he didn't quite get how processes or threads worked, just learned Node's event/promise model and brought it to Python.
In all fairness, that's a pretty common experience with any transition between dissimilar platforms.
I'll never forget one Java developer who transitioned to JS frontend work (pre-typescript days). Try/catch everywhere.
There's a semi-accurate mindset that "good developers are good in any language" as if software engineers are fungible resources, but the transition time to a novel paradigm isn't really well appreciated.
I have seen "you can write Java in any language" proven true many times.
The culture and idioms of Java impose greater constraints than its syntax does.
Not meant as a knock on Java or Java developers.
One of the biggest problems with Python is how much Java is written in it.
Including in the standard library (the most egregious examples being `logging` and `unittest`).
And another big problem is how much C is written in it, even when the project also contains actual C.
As somebody who wrote a couple of services with Twisted in the late '00s, I find asyncio a breath of fresh air. That doesn't mean it's perfect, but it's a lot more usable than Twisted. I know it's not possible, but I would like Python to have something similar to Go's goroutines, waitgroups, and channels.
Asyncio is too dull. Twisted for the win! My favourite insane package of wonders.
And the best way to write unmaintainable and undebuggable spaghetti code.
“Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.” – Virding’s first rule of programming
For the (brief) C comparison, I'm really not sure why this talks about `pthread_kill` but not `pthread_cancel` ...
`pthread_cancel` still has most of the issues described.
It's standard, it's persistent, and it's automatically handled by well-formed code. What's missing?
It will still leave resources in an unknown and possibly inconsistent state. Moreover, you need to use `pthread_cleanup_push` to register everything that may need to be done to clean up the thread; for languages with RAII this basically means the destructor of every local variable, which isn't really reasonable, so you end up with a ton of code where it won't work. You can claim it's "standard", but de facto it really isn't, and most code is not "well-formed" from its point of view.
Cancellation is just an exception, so destructors work automatically. Any implementation of a language that uses RAII without integrating cancellation into it is fundamentally broken and should be avoided at all costs. `pthread_cleanup_push` (or `__attribute__((cleanup))`) is only needed if you're implementing it by hand.
Since the set of functions that can be cancelled is documented (and pretty obvious - roughly, any blocking function but notably excluding mutex functions), there's no "unknown" involved. At most there is "I insisted on writing C but refused to take the effort to make my code correct in a threaded environment."
Notably, `malloc` is not a cancellation point so this all is much easier to deal with than signals.
> Cancellation is just an exception
It's not guaranteed to be implemented that way, and even if it were, not all kinds of unwind are compatible.
How does httpx compare?
asyncio is for general concurrency; httpx is an http client. There is no comparison to make.
Sir, this is Wendy's.
I have a feeling that the core issue is that the author is trying to implement complex communication mechanisms with a tool that was designed for much more basic usage. I don't understand why half of the article is about cancellation when cancellation as a concept is wrong. Have you ever tried to cancel peeing mid-pee? It just doesn't work.
Not sure the other half of the article is worth reading if the first half rests on a completely wrong premise. Or does this guy live in some utopian future where it's reasonable to assume that everything can be cancelled at any point?