I guess I am jaded by some branches of "hacker culture" with a proclivity for taking pride in activities and mindsets - breaking in, finding exploits, destruction - that I don't find particularly palatable, seemingly without understanding the social and eventually political backlash that will strip away your freedoms faster than you can say "papers, please".
> It's either A) we The People are completely free and nobody can intervene in any way or B) The Government is a tyrannical overlord that controls every packet that dares to enter the internet.
Do you mean the various efforts to weaken crypto stuff? I don't think I object in principle to law enforcement having access to the information for law enforcement purposes, but we know that any kind of access is subject to scope creep particularly when you lower the threshold for that access. First it's to enforce reasonable laws, then it's to enforce unreasonable laws, then it's because someone bribed a policeman. Not necessarily in that order.
Besides, the main problem with safety on the internet is not that law enforcement has no tools, it's that the crimes cross political borders. You can (in principle) identify the culprit in Russia easily enough when the money is laundered, but how exactly do you plan to bring them to justice?
Where I live, the police have raided people's homes for protesting things Israel did. And when I was a victim of an actual violent crime, they kept saying how they should arrest me - according to demographic profiling, I was the perpetrator (I was there, but I wasn't the one who did it) - all the way up to the courtroom where the actual perpetrator barely avoided prison time. So no, I don't really trust them to access my private anything. Any society with a hope of stability obviously needs some way to enforce laws, but this isn't it.
But even if the police where you live were perfect, handing them the keys to the internet wouldn't resolve crimes committed outside their jurisdiction.
I see why the idea is appealing to politicians, but even they ought to think twice about the risks inherent in third parties accessing their most private communications - given that, whatever side of the political aisle they sit upon, they are likely to be much more interesting targets to better-resourced assailants than us average schmucks.
The analogy is not perfect, but physically the police already have extreme powers, and those can be (and occasionally are) abused; that's the price you pay for protection. If we don't pay it, we accept that bad guys will be running all over us for eternity and that everybody and their mother has to have, at minimum, a couple of AK-47s for basic safety.
I second this. The pompous holier-than-thou I-know-better attitude of some members of the computer security community has always rubbed me the wrong way. This behaviour of complaining is a manifestation of the typical "putting down" and dismissing of someone who isn't part of the tribe.
This is a complete shot in the dark, but wild speculation is fun. If atop had a buffer overflow when reading a process name (changeable at runtime using $0 in Perl, for example), this would be the kind of issue I'd expect.
Similarly, some other value that was expected to be null terminated but wasn't.
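To make the speculation above concrete, here is a hypothetical C sketch of that pattern - not atop's actual code; the struct, field width, and input are invented for illustration: a fixed-size field receives an attacker-controlled process name through an unbounded copy, and whatever sits after the allocation (including malloc's metadata) gets trampled.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NAMELEN 16                 /* made-up field width, for illustration */

    struct procinfo {
        char name[NAMELEN];
        /* ... other per-process fields ... */
    };

    int main(void) {
        /* Pretend this came from /proc/<pid>/cmdline or $0 and is attacker-chosen. */
        const char *cmdline = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";

        struct procinfo *p = malloc(sizeof *p);
        if (!p) return 1;

        strcpy(p->name, cmdline);      /* BUG: unbounded copy overflows p->name */
        /* Correct: snprintf(p->name, sizeof p->name, "%s", cmdline); */

        /* Depending on glibc version and heap state, a later malloc or free
           will typically notice the trashed metadata and abort with a message
           like "malloc(): corrupted top size". */
        printf("%s\n", p->name);
        free(p);
        return 0;
    }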
My guess is that it tries to decode the malloc metadata, which involves chasing pointers, and doesn't do enough sanity checking, so if a process accidentally or deliberately sets up corrupt metadata, atop will dereference an invalid pointer and explode itself.
I would be very surprised if this isn't just an atop bug.
Can it be exploited? Considering the error messages, the possibility is high.
I don't like to see it as a "problem with the heap". As someone who has played lots of CTFs and is quite proficient at exploiting such bugs, I would much rather see those things framed as "problems with the glibc allocator".
If we just didn't use inline metadata, or verified its integrity (something like Scudo, but not broken), all of the security issues with the heap would just be gone.
If you get to work with frameworks/languages that are more flexible when it comes to allocating memory (thinking about Zig and its amazing allocator abstraction here), you quickly realize that malloc and free are an insanely simplistic API for what is one of the most difficult and important problems in programming: memory allocation.
There is also no excuse for the error messages being that bad, because the reality is that most systems programmers will have to debug them at some point.
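For the "allocator abstraction" point above, here is a rough sketch of what a Zig-style explicit allocator interface can look like when expressed in C. The names and types are invented for illustration, not a real library's API.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Callers receive an allocator instead of reaching for a global malloc,
       so an arena, pool, or hardened debug allocator can be swapped in. */
    typedef struct allocator {
        void *(*alloc)(struct allocator *self, size_t size);
        void  (*dealloc)(struct allocator *self, void *ptr);
        void  *ctx;                         /* allocator-specific state */
    } allocator;

    /* Trivial implementation that just forwards to malloc/free. */
    static void *sys_alloc(allocator *a, size_t n) { (void)a; return malloc(n); }
    static void  sys_dealloc(allocator *a, void *p) { (void)a; free(p); }

    /* Library code takes the allocator as a parameter. */
    static char *dup_string(allocator *a, const char *s) {
        size_t n = strlen(s) + 1;
        char *out = a->alloc(a, n);
        if (out) memcpy(out, s, n);
        return out;
    }

    int main(void) {
        allocator sys = { sys_alloc, sys_dealloc, NULL };
        char *copy = dup_string(&sys, "hello");
        if (copy) puts(copy);
        sys.dealloc(&sys, copy);
        return 0;
    }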
I'm going from memory, but I think when posts of hers get traction here she'll get comments that (I think) she refers to as 'The One' [0], and also comments that... you typically wouldn't get for a male poster.
You need to establish the causal link between user1 doing something and atop segfaulting. That link is what determines whether there's potential exploitability to take advantage of. It's easy to think of a scenario where user1 is using almost all the memory on the system and user2 runs atop and segfaults because atop is missing a memory check and overcommit is disabled, or where user1 deletes a file that can cause another app to crash given the wrong permissions on the file.
Yeah, that's a half-assed apology. Yesterday's post might have unintentionally sent a lot of heads spinning, but "this is not the way to do this" is not an apology; it's a double-down.
I understand there is a potential heap overflow with atop, thanks for letting everyone know; but you're also making the people capable of taking advantage of it aware that there is a possibility to do this in the wild. Due process is to let the developers fix it and then tell everyone to upgrade.
Anyway, a C process that runs as root for lifestyle purposes (e.g. not a critical service) is a big no-no. And I say this as someone who likes to write C - I love C. But I wouldn't push my C code onto anyone else's computer, especially requesting root access. I'm not that good.
(You are writing here under the name "Niten" which I am going to guess is not your full name. I am writing under the name "gjm11" which is also not my full name, though as it happens my full name is readily discoverable via my HN profile while yours is not. Obviously neither of us actually believes that there is something wrong with not stating your full name explicitly every time you write something.)
> Obviously neither of us actually believes that there is something wrong with not stating your full name explicitly every time you write something.
Who knows? Maybe "Niten" does believe that and has a massive shame and public-embarrassment kink. There's nothing wrong with that; that sort of thing is totally harmless.
I'd argue that it'd be a beneficial life lesson for the people who are freaking out over a "Hey, maybe stop using 'atop'." comment to learn how to enhance their calm.
Blowing one's stack over every little thing shortens your lifespan! It's best to learn how to take friendly warnings about bad things in stride.
I don't get comments and commenters like this. [0]
It's unquestionably an "improvement" over the original, provided we measure that by the amount of information provided, which is the obvious way to measure it for every normal human being.
Yes, the author is very clearly dancing around something. No, nobody here knows why either. No, they don't have a damn concussion, and their post really isn't a difficult read at all.
Your ideas about NDAs are speculative (according to yourself). Why are you taking them at face value?
[0] But maybe this is just my variation on when people in YouTube comment sections get mad at each other for quoting the zingers from the corresponding videos, so eh...
Both of the error messages we're given indicate that the "top" chunk of the heap was corrupted. The top chunk is a special internal allocation used by glibc malloc to represent any unused capacity left over from the last time malloc decided to grow the heap.
That likely indicates a heap buffer overflow. If a call to malloc() doesn't find a freed chunk, it will split the top chunk in two and return a pointer to the bottom portion. If you then write past the end of the returned allocation, you clobber the metadata of the top chunk, and get errors like the ones in the article.
With the exploit mitigations built into modern Linux and glibc, it's a lot of work to go from here to arbitrary code execution; but it very well may be possible, depending on exactly how much control the attacker has over what atop does. The attacker can probably trigger the heap buffer overflow multiple times by spawning multiple processes, and if the length and contents of the heap buffer overwrite are attacker-controlled, they can probably play some games to overwrite any data stored in the heap. If that's true, the only thing preventing full arbitrary-code-execution is ASLR; there are many clever ways to get around that, but it's often quite difficult and may or may not be possible here.
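As a rough illustration of the first part of that (a deliberately buggy, minimal sketch, not anything taken from atop): overflow a fresh allocation and then call malloc again, and a reasonably recent glibc will abort much like the errors in the article.

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* On a fresh heap this chunk is carved off the bottom of the top
           chunk, so the top chunk's size field sits just past its end. */
        char *buf = malloc(24);
        if (!buf) return 1;

        /* BUG (deliberate): write well past the end of the allocation,
           trampling the top chunk's metadata. */
        memset(buf, 'A', 64);

        /* The next request makes malloc consult the corrupted top chunk;
           recent glibc versions detect this and abort with a message along
           the lines of "malloc(): corrupted top size". */
        char *buf2 = malloc(24);
        (void)buf2;
        return 0;
    }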
This year's LACTF had a challenge with essentially this exact setup. My solution writeup is a good example of what it takes to defeat the exploit mitigations and turn a heap buffer overflow into an RCE: https://jonathankeller.net/ctf/lamp/
I thought I would ask: does this mean that memory-restricted environments can be a better target for attack than ones with a large amount of memory available? In my mind it seems like this would be the case, but I'm not sure if there is anything in place to protect these types of environments, like intentionally breaking up and segmenting memory so it's not possible to read much linearly. I admit that I haven't touched low-level code since the early 2000s, and earlier than that for anything other than a course requirement, so I apologize if you explained it in your linked article and I don't understand.
Yes, with the caveat that virtual memory restrictions matter a lot more than physical memory restrictions.
Heap exploitation is difficult because 1) glibc malloc is hardened to try to defeat many common exploit strategies, and 2) ASLR means that even if you have the ability to corrupt a pointer, you might not be able to control what the pointer points to.
Regarding #1, memory constraints don't really make a difference until you're so constrained on memory that you can't run glibc anymore, so you use an allocator that is optimized for low overhead/code size rather than performance and security.
For #2: ASLR works by placing every section of the process (heap, stack, program binary, and each library) at a randomized virtual address so that attackers can't forge pointers. ASLR is much more effective on a 64-bit system than on a 32-bit system, simply because there's so much more virtual address space available to choose from. If memory addresses are only 32 bits, it's feasible to just brute-force guess the memory address of some data you're interested in; with 64 bits, it's not. And if your system doesn't have virtual memory at all (like a microcontroller), you probably don't have any kind of ASLR.
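A quick way to see that in practice (a minimal sketch, nothing atop-specific): print a few addresses from different sections and run the program twice. With ASLR enabled the values change between runs, and on a 64-bit system the space of possible values is far too large to guess.

    #include <stdio.h>
    #include <stdlib.h>

    int global_in_binary;               /* lives in the program image */

    int main(void) {
        int on_stack;
        void *on_heap = malloc(16);

        printf("stack : %p\n", (void *)&on_stack);
        printf("heap  : %p\n", on_heap);
        printf("binary: %p\n", (void *)&global_in_binary);
        printf("libc  : %p\n", (void *)&printf);   /* libc function (or its PLT stub) */

        free(on_heap);
        return 0;
    }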
> intentionally breaking up and segmenting memory so it's not possible to read much linearly
This is typically done at the level of individual sections (stack, heap, program binary, library binaries) but not within sections; mainly for performance, memory overhead, and cache locality reasons. The entire heap is contiguous, so if you overwrite past the end of one heap allocation you can overwrite adjacent allocations, but you can't overwrite the stack (without more work). Breaking up the heap into smaller chunks wouldn't really help that much; it just means an attacker has to manipulate the heap layout so they can be sure the allocation they're targeting ends up in the chunk they're targeting.
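A small sketch of that contiguity point (illustrative only: whether two allocations land next to each other depends on allocator state, and subtracting pointers from different allocations is technically undefined, so treat this purely as a demo):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Two back-to-back allocations of the same size are usually carved
           out of the contiguous heap right next to each other in glibc. */
        char *a = malloc(24);
        char *b = malloc(24);
        if (!a || !b) return 1;

        strcpy(b, "original");
        printf("b before: %s (a..b distance: %td bytes)\n", b, b - a);

        /* BUG (deliberate): overflow 'a' far enough to reach 'b'. Whoever
           controls this data now controls the neighbouring allocation, and
           the chunk header in between gets trashed along the way. */
        memset(a, 'X', (size_t)(b - a) + 8);
        b[8] = '\0';

        printf("b after : %s\n", b);
        /* Freeing a or b now would likely trigger a glibc abort much like
           the errors quoted in the thread, so we just exit. */
        return 0;
    }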
Exploit hardening in the heap allocator is something of a last-ditch stopgap measure: "we've already lost, but let's see if we can minimize the damage to the user/maximize the difficulty to the attacker". There's certainly much more you could do to harden the heap, but remember that any security measures implemented in glibc are applied to every program, with no way to opt out (except bringing your own libc). So glibc is designed to maximize performance and compatibility; exploit mitigations are only included when they don't compromise these goals.
If you don't mind giving up some performance for the sake of security, you probably would already be using a garbage-collected language instead of C. (And now that Rust is sufficiently mature for most use cases, you probably should be using that if you're in a situation where you need both performance and security).
Thank you for your reply, that makes quite a bit more sense to me!
If I recall correctly, it's a lot easier to use an exploit if you know your chunk comes from the fast bins.
Recent and related:
You might want to stop running atop - https://news.ycombinator.com/item?id=43477057 - March 2025 (131 comments)
Hey guys, so I work on a tool called Bismuth along with my co-founder for finding and fixing bugs, and we think we have this. At the very least we have a bug in atop which mimics what is being described.
We're going to throw this sha down right here: 1d53b12f3bc325dcfaff51a89011f01bffca951db9df363e6b5d6233f23248a5
And now we're going to go responsibly disclose what we have to the maintainers.
We did in fact find the bug:
https://news.ycombinator.com/item?id=43519522
We've reached out to the maintainer over e-mail.
Based on the bug you've found, do you think it's exploitable beyond DoS?
Thank you for doing this instead of just vagueposting and wasting everyone’s time.
I'm inclined to expect that we should put the blame for that on whoever used legal channels to force Rachel to shut up, although obviously the jury is still out until we know more.
This is the only explanation that makes some sense; otherwise it would just be a dick move for someone to hint at the presence of an exploitable bug but then not say what exactly it is.
Not telling you how to hack a whole bunch of computers until the bug is fixed is called responsible disclosure. It's very popular, and depending on how the government is feeling that day, may be illegal not to do.
I'm reading "I can go into why another time." like "I don't have time" personally, not like "I am not allowed to say".
Then you are overlooking two things that provide important context: her previous behavior in similar circumstances of discovering bugs, and the opening sentence:
> My life as a mercenary sysadmin can be interesting.
To me this reads as "I was hired as a consultant for something that required a very restrictive NDA."
It might have just been responsible disclosure.
In that case: report the bug, keep your mouth shut until the bug is fixed, then talk about the bug once it is fixed or your responsible disclosure deadline has passed.
Turned out the bug was a 2.9 CVE nothingburger. It’s not like they found the FSB/Mossad/NSA had hooks in a remotely exploitable root level process.
Hi! Three things:
- There is no commit with a SHA-1 like that in the atop Git history, and what you shared is too long for a SHA-1; it looks more like a SHA-256. Did you share the right checksum? The only other way I can read this is that it's a SHA-256 checksum of one of the past atop release tarballs or artifacts. I have not yet checked those.
- I have tried finding your tool Bismuth but all I find are things related to KDE and cryptocurrencies. Please share a link to the Bismuth that you are working on.
- You technically said that you are working on Bismuth /and/ found something, not that you found the bug /through/ Bismuth. Please clarify if and how that was the case.
Thank you!
- That SHA is just a proof marker so if it turns out we are correct we can prove we had it at that time
- Bismuth did indeed find the bug, our bug scanning feature in particular. Obviously we're going to sit on our hands until the maintainer gives the all clear but we'll write something up after this is all squared away
- https://www.bismuth.sh is our tool, we're still relatively new
pretty sure it's just a hash of some text they can reveal later, to prove that they had something at this point in time. not referring to any release or commit
This is exactly correct
I see, thanks!
Update: I found https://www.bismuth.sh/ at https://news.ycombinator.com/user?id=ianbutler .
I gave her the benefit of the doubt initially, she usually posts good posts, but this is not the way to do things. Vagueposting about a security vulnerability without properly disclosing it to the maintainers: (1) damages their reputation, (2) sends every blackhat on the hunt like a real-life worldwide CTF event, (3) leaves sysadmins in the dark unless they are following this specific random blog, and (4) since the details aren't known, even the sysadmins who do hear about it can't determine whether they are really affected.
Something like this would be justified if the maintainers were unresponsive and it was a remotely exploitable bug. Now it turns out this is probably a minor thing (local privilege escalation if you happen to be running atop as a privileged user).
It seems to me like an irresponsible, egocentric way to handle things.
At least on Debian, installing the `atop` package will automatically install a background service running atop as root. (by default, logging some stats to /var/log/atop/ every ten minutes)
> a minor thing (local privilege escalation if you happen to be running atop as a privileged user)
I seem to be hearing this sentiment a lot lately. How is local privilege escalation a minor thing?
If it's such a minor thing, is the old advice to not run as root considered passé? Should we just run everything as root? Should we discard the entire Unix security model and chmod all files to 0777?
In most scenarios, you are no longer running with multiple users on the same machine. Either this is a server, which has an admin team, or a client machine, which _usually_ has a single user.
That isn't 100% true, and local privilege escalation matters, but it is a far cry from remote code execution or remote privilege escalation.
User privilege separation is a foundation that many container implementations rely on to work, as do the sandboxes used by software like Tor or, however unlikely it is that you're running atop on it, Android, etc.
If someone is running Tor to not end up in prison/dead, their Tor sandbox can be opened for anyone to own, for example.
Root privileges allow for a much wider attack surface for escaping out of a VM. Not using root everywhere still helps with defense in depth.
> Should we discard the entire Unix security model and chmod all files to 0777
It depends, but for most use cases... yes, actually.
All of that etiquette sounds nice if you're being paid to do this work, but I don't think anyone is obligated to "properly" disclose a vulnerability they found on their own time/dime, nor do I think it's a moral imperative for anyone to do so.
Eh, finger pointing does nobody any good, emphatically including this comment. Finger pointing towards someone who actually found a vulnerability is just bleak. I would not willingly associate with anyone who engaged in such behavior.
Maintaining software is hard, but this does not imply a right to be babied. People should simply lower their expectations of security to match reality. Vulnerabilities happen and only extremely rarely do they indicate personal flaws that should be held against the person who introduced it. But it's your job to fix them. Stop complaining.
>Finger pointing towards someone who actually found a vulnerability is just bleak. I would not willingly associate with anyone who engaged in such behavior.
Nobody is "finger pointing" Rachel for the vulnerability. They're calling her out for how she communicated it. I feel that's totally justified. For instance if someone found a critical RCE, but the report was a barely coherent stream of consciousness, it's totally fine to call the latter part out. That's not "finger pointing".
>But it's your job to fix them. Stop complaining.
It's the developers' job to respond to bug reports in the form of vaguely written blog posts?
Yeah, shame on the people irresponsibly publishing the vulnerability, but the people putting them in? Who cares
>but the people putting them in? Who cares
Literally nobody is arguing this.
But everyone is grilling the author for publishing. Maybe they should sell it next time, no negative reaction that way
>But everyone is grilling the author for publishing
What's the alternative? Having no quality bar for vulnerability reports, and giving no pushback for poorly written vulnerability reports, even if they're crayon scribbles on a napkin? I agree that not everyone can write a detailed and thoroughly researched bug report like the ones Project Zero puts out, but I think most can agree that "you might want to stop using [software]" is well below any reasonable quality bar.
>Maybe they should sell it next time, no negative reaction that way
Yeah I'm sure 0day groups are going to be paying top dollar for weird crashes.
How is this not justified? For that matter, how was rachelbythebay's first post not fingerpointing? "You might not want to be associated with nukem222, not even a little bit, if you know what I mean".
Fingerpointing is bad, but we have to have an honest conversation.
One person posted the vague post. They clearly did not expect the reaction it got, though they could have anticipated some of it, as they are aware their blog is widely read. Their reaction is commendable: to quickly post a followup appealing for calm and sharing some details, to quell the problems caused by the intense vagueness.
What people from HN did, because of the vagueness, was assume this is a super-secret-squirrel mega-vulnerability and Rachel is gagged by NDAs or the CIA or whatever... and they've gone off and harassed the developers of atop while trying to find the issue.
Imagine a person of note saying "the people at 29 Acacia Road are suspicious", then a mob breaks down the door and start rifling through all the stuff there, muttering to themselves "hmm, this lamp looks suspicious... this fork looks suspicious"... absolute clowns, all of them.
For example, this asshole who went straight in there with bad-faith assumptions on the first thing they saw: https://github.com/Atoptool/atop/issues/330#issuecomment-275...
No, you dummies, it's not going to be in the latest commit, or easily greppable.
This is exactly why CVEs, coordinated disclosure, and general security reporting practices exist. So every single issue doesn't result in mindless panic and speculation.
There's now even a CVE purely based on the vaguepost, assigned to a reporter who clearly knows fuck all about what the problem is: https://www.cve.org/CVERecord?id=CVE-2025-31160 - versions "0" through "2.11.0" vulnerable, eh? That would be all versions, and the reason the reporter chose that is because they don't know which versions are vulnerable, and they don't know what it's vulnerable to either. But somehow, "don't know", the absence of information, has become a concrete "versions 0 to 2.11.0 inclusive"... just spreading the panic.
I don't know why Rachel is vagueposting, but I can only hope she has reported this correctly, which is to:
1. Contact the security team of the distro you're using, e.g. if you're using atop on Debian, then email security@debian.org with the details.
2. Allow them to help coordinate a response with the packager, the upstream maintainer(s) if appropriate, and other distros, if appropriate. They have done this hundreds of times before. If it's critically important, it can be fixed and published within days, and your worries about people being vulnerable because you know something they don't can be relieved, all the more quickly.
I commend you for writing what you think should be done and not just complaining about what was done. It is more helpful to express the correct procedure than to only label things as the wrong procedure.
I never quite understood why computing is so different from literally all other branches of reality. Systems need to be secure, I get it. But if we have a bunch of folks dedicating their lives to breaking your shit, I don't get how that is in any way acceptable and why the weight of responsibility lies solely with the people responsible for security.
We apparently have a society/world that normalizes breaking everyone's shit. That's not normal - IMO.
If I break into a factory or laboratory of some kind and just walk out again I have not found a "vulnerability" and I certainly won't be remunerated or awarded status or prestige in any way shape or form. I will be prosecuted. Everyone can break into stuff. It's not that stuff is unbreakable, it's that you just don't do that because the consequences are enormous (besides obvious issues with morality). Again, breaking stuff is the easy part.
I am certainly completely ignorant and should be drawn and quartered for it, but for me it is hard to put my finger on where I'm so wrong.
I can see how the immaterial nature of software systems changes the nature of the defense, but I don't see how it immediately follows that breaking stuff that's not allowed to be broken by you is suddenly the norm and nothing can be done against that. We just have to shrug and accept our fate?
Leaving aside the ethics of vulnerability research in server-side software, you're neglecting the fact that atop runs on your own machine.
So it's not like breaking into a factory. It's like noticing that your dishwasher makes the deadbolts in your house stop working (yes...a weird analogy--there are ways software isn't like physical appliances).
Surely you have the right to explore the behavior of your own house's appliances and locks, and the manufacturer does not have the right to complain.
As for server side software, I think the argument is a simple consequentialist one. The system where vulnerability researchers find vulnerabilities and report them quietly (perhaps for a bounty, perhaps not) works better than the one where we leave it up to organized crime to find and exploit those issues. It generates more secure systems, and less harm to businesses and users.
I'm sorry for being the ignoramus and lazy ass that I am, having not read a single sentence of the article.
You are, of course, right. Examining stuff to be brought into your own home is categorically different from meticulously analyzing and publishing the security vulnerabilities of your local power plant.
I can get behind the consequentialist argument. Sometimes we've just gotta go with what works, but I wonder if we give up too easily..
I find your view bizarre.
If I buy a physical product, take it home, and then publish the various issues I find with it, then... nobody has a problem with that.
I'm as sad as the next guy that the safe and trusting internet of academia is long gone, but the generally accepted view nowadays is that it's absolutely full to the gills with opportunistic criminals. Letting people know that their software is insecure so they don't get absolutely screwed by that ravening horde is a public service and should be appreciated as such.
Pen testing third party systems is a grey area. Pen testing publicly available software in your own environment and warning of the issues is not, particularly when the disclosure is done with care.
I agree and conforming to HN rules, guidelines and established practices I did not, in fact, read or engage with the article at all (and I apologize).
Your view is one I agree with completely for a device bought to bring into your own home.
What I find less understandable is how finding (and exploiting) security flaws in publicly facing structures is normalized to the degree that it is. I can easily analyze some public structure and publish detailed records on how you would most efficiently break into my local hardware store. I'm not sure I'm seeing the net win for society.
How is it better to not look into or share such information when we know that a vast army of assholes are doing the same thing for nefarious purposes?
Yes, they might not spot it themselves, but we know that in practice they often do and the results are horrible. If we stop looking then they will definitely be the first to find vulnerabilities - as it is they are only sometimes the first (and the vulnerabilities they find are likely to be the less appalling ones).
Privately sharing the issue with the authors lets them fix it in a timely way, publicly announcing the issue after a reasonable period of time incentivises them to do so - corporate authors often won't bother unless their arms are twisted.
If those black-hat hackers were not really out there then I might agree with you, but they are, and they don't care that we don't like it.
In a way I am definitely seeing your perspective here. Letting "good guys" win this race occasionally is an improvement over never letting them win.
It's just that I think we can do better, because I think the web is a hostile, vitriolic open sewer and must be governed properly before civilized business can be conducted on it. It was perhaps a great innovative place, but it now is a dumpster fire causing endless headaches and beyond redemption. I think it's time to face this reality instead of trying to dress up the turd.
That's an equivalent demand to expecting the world as a whole to be "governed properly" and thus won't be achieved for exactly the same reasons.
Not the world, our patch on it. We more or less succeeded in the physical world depending on who you ask. Don’t see the problem with the digital.
Are you not aware that the internet is an international artefact? Will you institute a Great Firewall to prevent your citizens from seeing outside your borders?
An inconvenient question I often ask about proposed architecture changes is: "How will you get there from here?" - if you can't answer it then it's not going to happen.
Well also in the real world, if you look at history, people DID exploit the neighbouring tribe with impunity if they could not defend themselves ("what idiots don't have a guard during night"), or built stone fortresses with 3 metre stone walls.
When living under those conditions, people probably did put the responsibility to be safe on the victim..
We have been able to remove this waste due to the introduction of the nation state, laws, the "monopoly on violence", police...
It is THOSE things that allow the factory in your analogy to not spend resources on a 3 metre stone wall and armed guards 24/7.
Now on the internet the police, at least relative to the physical world, almost completely lack the ability to either investigate or enforce anything. They may use what tools they can, but it does not give them much in the digital world compared to the physical.
If we want the internet to be like the real world in this respect, we would have to develop ways to let the police see a lot more and enforce a lot more. Like they can in the physical world.
> If we want the internet to be like the real world in this respect, we would have to develop ways to let the police see a lot more and enforce a lot more. Like they can in the physical world.
I agree and it's exactly this that's often so violently opposed by the technical community who are routinely frothing at the mouth at the suggestion that law enforcement needs access to be able to function while that community, and often especially that community with their fancy, expensive lives, enjoys widespread, comfortable physical and legal protection afforded by that very same law enforcement which is only made possible by this agency having far-reaching legal and lethal powers.
It can be abused and it will be abused, but I guess it comes down to this: do we want comfortable lives, or do we want to be free?
IMO it's only a matter of time before some nation-state-level actor unleashes a digital shit-storm of astronomical proportions that necessitates swift political decisions, and my guess is we had better have an open, realistic discussion about it now rather than then.
"If I break into a factory or laboratory of some kind and just walk out" This is a weak analogy. In the situation you describe, right-and-wrong is easily understood by the layman, there is a common legal framework, there is muscle to enforce the legal framework.
In the computing space, if someone breaks the rules, only a handful of us understand what rule was broken, and even then we are likely to argue over the details. The people doing the breaking are often anonymous. There is no shared legal framework, no enforcement, no courts. The consequences of a breach are usually weak. Consider the lack of jail time for anyone involved with Superfish. Many of those people were located in the developed world.
The computing world often resembles the lawlessness of earlier eras - where only locally-run fortifications separated civilian farmers from barbarian horsemen. A breach in this wall leads to catastrophe. It needs to be unbreakable. People who maintain fortifications shoulder a heavy responsibility.
Maybe it's more like analyzing and publishing the security vulnerabilities of said factory or laboratory. It's not trivially right or wrong to do so. It seems acceptable, because you are helping them make it more secure (right?), yet most societies are quite adamant that it is not, in fact, normal - or legal - to do so. You'll get yourself in quite a bit of trouble if you do that.
Just moving to Nigeria and publishing security bulletins on how to break into Walmarts is still a shaky proposition, but perhaps it's safer than I think it is. The international judiciary is opaque to me.
> The computing world often resembles the lawlessness of earlier eras - where only locally-run fortifications separated civilian farmers from barbarian horsemen. A breach in this wall leads to catastrophe. It needs to be unbreakable. People who maintain fortifications shoulder a heavy responsibility.
Sounds about right. I'm not too happy about it, although I guess this particular era has its advantages as well.
Lockpicking is probably a close analogy; and that is a perfectly accepted and legal hobby in all western countries, with thousands of YouTube videos on how to pick common locks.
Computing is actually different. There are laws, for example in Germany (the "Hackerparagraph"), that make it illegal to produce "hacking" tools.
We can lock down the Internet so hard that every IP packet is associated with a physical address, then go and arrest people who allow bad packets to be sent from their address. This is what many governments are persistently trying to do. Is it a good idea?
Not sure, but I don't see why we can't have a civil discussion about it and I'm not seeing much of that.
It's either A) we The People are completely free and nobody can intervene in any way or B) The Government is a tyrannical overlord that controls every packet that dares to enter the internet.
Absolute freedom never was and never will be a good idea. If we don't at least talk about it, then somehow, someday, and maybe quite soon, They will ram it down our throats and we'll end up closer to scenario B than A.
The internet reminds me in many ways of the international road network. There are clear boundaries and there are checks and, yes, they suck. It's not a complete free for all, yet it's workable. I know this analogy breaks down eventually, but I'm wondering if there's some middle ground here.
I guess I am jaded by some branches of "hacker culture" that take pride in activities and mindsets - breaking in, finding exploits, destruction - which I don't find particularly palatable, seemingly without understanding the social and eventually political backlash that will strip away your freedoms faster than you can say "papers, please".
> It's either A) we The People are completely free and nobody can intervene in any way or B) The Government is a tyrannical overlord that controls every packet that dares to enter the internet.
Do you mean the various efforts to weaken crypto stuff? I don't think I object in principle to law enforcement having access to the information for law enforcement purposes, but we know that any kind of access is subject to scope creep particularly when you lower the threshold for that access. First it's to enforce reasonable laws, then it's to enforce unreasonable laws, then it's because someone bribed a policeman. Not necessarily in that order.
Besides, the main problem with safety on the internet is not that law enforcement has no tools, it's that the crimes cross political borders. You can (in principle) identify the culprit in Russia easily enough when the money is laundered, but how exactly do you plan to bring them to justice?
Where I live, the police have raided people's homes for protesting things Israel did. And when I was a victim of an actual violent crime, they kept saying how they should arrest me - according to demographic profiling, I was the perpetrator (I was there; I wasn't the one who did it) - all the way up to the courtroom, where the actual perpetrator barely avoided prison time. So no, I don't really trust them to access my private anything. Any society with a hope of stability obviously needs some way to enforce laws, but this isn't it.
You and I vehemently agree.
But even if the police where you live were perfect, handing them the keys to the internet wouldn't resolve crimes committed outside their jurisdiction.
I see why the idea is appealing to politicians, but even they ought to think twice about the risks inherent in third parties accessing their most private communications - given that, whichever side of the political aisle they sit on, they are likely to be much more interesting targets to better-resourced assailants than us average schmucks.
The analogy is not perfect, but in the physical world the police already have extreme powers, and those can be (and occasionally are) abused - that's the price you pay for protection. If we don't pay it, we accept that bad guys will be running all over us for eternity and that everybody and their mother has to have, at minimum, a couple of AK-47s for basic safety.
I second this. The pompous, holier-than-thou, I-know-better attitude some members of the computer security community have has always rubbed me the wrong way. This complaining is a manifestation of the typical "putting down" and dismissing of anyone who isn't part of the tribe.
As there seems to be some confusion, this is my interpretation:
atop is (for some reason) touching the memory of processes it monitors.
atop is touching this memory in an insecure way: an executable can cause atop to corrupt its own memory.
This has high potential (although it's not guaranteed) to allow RCE within atop via a correctly crafted process that atop monitors.
atop is often run as root, so this otherwise meaningless RCE becomes privilege escalation, which is bad.
Either this is correct, or Cunningham's law will bring out the correct interpretation.
This is a complete shot in the dark, but wild speculation is fun. If atop had a buffer overflow when reading a process name (changeable at runtime using $0 in Perl, for example), this would be the kind of issue I'd expect.
Similarly, it could be some other value that was expected to be null-terminated but wasn't.
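To make that speculation concrete, here is a minimal sketch of the bug class (hypothetical code, not anything from atop; the struct, field sizes, and file handling are invented): a process name read from /proc is copied into a fixed-size heap buffer without a length check, and the monitored process controls how long that name is ($0 in Perl, argv[0] via execve, etc.).

    /* Hypothetical illustration of the speculated bug class -- not atop's
     * actual code. A fixed-size heap field receives a process name whose
     * length the monitored process controls. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct procinfo {
        char name[32];      /* invented fixed-size name field */
        long cpu_ticks;     /* adjacent data that gets clobbered first */
    };

    int main(void) {
        struct procinfo *p = malloc(sizeof *p);
        if (!p)
            return 1;

        /* argv[0] is the first NUL-terminated string in /proc/<pid>/cmdline;
         * here we read our own, but a monitor would read the target's. */
        char line[4096];
        FILE *f = fopen("/proc/self/cmdline", "r");
        if (!f) {
            free(p);
            return 1;
        }
        size_t n = fread(line, 1, sizeof line - 1, f);
        line[n] = '\0';
        fclose(f);

        /* BUG: no bounds check. A name longer than 31 bytes runs past
         * p->name, past the end of the allocation, and into glibc's inline
         * chunk metadata -- the kind of corruption later reported as
         * "corrupted top size" or similar. */
        strcpy(p->name, line);

        printf("stored name: %s\n", p->name);
        free(p);
        return 0;
    }

Run with a short argv[0] it does nothing interesting; renamed to something multi-kilobyte, the strcpy tramples adjacent heap data and glibc typically aborts inside free() with one of the errors from the original post.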
My guess is that it tries to decode the malloc metadata, which involves chasing pointers, and doesn't do enough sanity checking, so if a process accidentally or deliberately sets up corrupt metadata, atop will dereference an invalid pointer and explode itself.
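If that guess were right, the failure would look something like this sketch (entirely speculative; process_vm_readv is a real syscall, but the function names, the table layout, and the idea that atop works this way are all invented): the monitor reads a count out of the monitored process's memory and trusts it when filling a fixed-size local array.

    /* Speculative sketch of the guessed failure mode -- not atop's code.
     * The monitor reads a length field from the monitored process's memory
     * and trusts it without a sanity check. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /* Read len bytes at address addr inside process pid. */
    static int read_remote(pid_t pid, uint64_t addr, void *out, size_t len)
    {
        struct iovec local  = { .iov_base = out, .iov_len = len };
        struct iovec remote = { .iov_base = (void *)(uintptr_t)addr, .iov_len = len };
        return process_vm_readv(pid, &local, 1, &remote, 1, 0) == (ssize_t)len ? 0 : -1;
    }

    static int collect_entries(pid_t pid, uint64_t table_addr)
    {
        uint64_t entries[64];
        uint64_t count;

        if (read_remote(pid, table_addr, &count, sizeof count) != 0)
            return -1;

        /* BUG: count comes from the monitored process, so it is effectively
         * attacker-controlled; a value above 64 makes the loop below write
         * past the end of entries[] in the monitor's own memory. */
        for (uint64_t i = 0; i < count; i++)
            if (read_remote(pid, table_addr + 8 + i * 8, &entries[i], 8) != 0)
                return -1;
        return 0;
    }

    int main(void)
    {
        /* Demo against our own process with a well-formed table, so nothing
         * bad happens here; a hostile target would simply lie about the count. */
        uint64_t table[5] = { 4, 10, 20, 30, 40 };
        return collect_entries(getpid(), (uint64_t)(uintptr_t)table) == 0 ? 0 : 1;
    }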
The same author was apparently having segfault issues with atop back in March 2014: https://rachelbythebay.com/w/2014/03/02/sync/
Rachel's been a reliable source of interesting issues like these for the better part of eternity now. Her blog's well worth reading.
After reading that: atop should've used SQLite.
I would be very surprised if this isn't just an atop bug.
Can it be exploited? Considering the error messages, the possibility is high.
I don't like to see this framed as a "problem with the heap". As someone who has played a lot of CTFs and is quite proficient at exploiting such bugs, I would much rather see these things framed as "problems with the glibc allocator".
If we just didn't use inline metadata, or verified its integrity (something like Scudo, but not broken), all of these heap security issues would just be gone. If you get to work with frameworks/languages that are more flexible about allocating memory (thinking of Zig and its amazing allocator abstraction here), you quickly realize that malloc and free are an insanely simplistic API for what is one of the most difficult and important problems in programming: memory allocation.
There is also no excuse for the error messages being that bad, because the reality is that most systems programmers will have to debug one of them at some point.
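For what it's worth, the inline-metadata complaint is easy to demonstrate. A deliberately buggy sketch (glibc-specific and simplified; chunk adjacency and the 8-byte size word just below the returned pointer are glibc implementation details): the overflow happens in one place, but the allocator only notices later, inside free(), which is exactly why the error messages feel so unhelpful.

    /* Deliberately buggy demo of glibc's inline chunk metadata. The layout
     * assumptions are glibc-specific; expect the program to abort in free(). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *a = malloc(24);
        char *b = malloc(24);   /* typically carved right after a's chunk */
        if (!a || !b)
            return 1;

        /* On glibc, the size/flags word for b's chunk sits just below b. */
        size_t *b_size = (size_t *)(void *)(b - sizeof(size_t));
        printf("b's inline size word before overflow: %#zx\n", *b_size);

        /* BUG on purpose: 40 bytes into a 24-byte allocation, running
         * straight into b's chunk header. */
        memset(a, 'A', 40);
        printf("b's inline size word after overflow:  %#zx\n", *b_size);

        /* glibc only consults the metadata here, so the abort ("free():
         * invalid size", "double free or corruption", or similar) points at
         * this call, far away from the memset that caused the damage. */
        free(b);
        free(a);
        return 0;
    }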
RachelByTheBay underestimated how much we hang on her every word over here, I think. Haha.
I believe she hasn't had the best behaviour from our members, unfortunately.
I expect she's had the best behavior that the members in question are capable of.
Can you expand on this? I'm new here and I have been binge reading her posts recently.
I'm going from memory, but I think when posts of hers get traction here she gets comments of the kind she refers to as 'The One' [0], and also comments that... you typically wouldn't get for a male poster.
[0] https://rachelbythebay.com/w/2018/04/28/meta/
I think it has gotten better of late.
A vulnerability can be high risk; a vague disclosure means that people can only assume the worst.
Maybe don't vaguepost about vuln disclosures
This reads like it's referencing something specific. Am I out of the loop or is this just about heap exploits in general?
There was a post yesterday
https://news.ycombinator.com/item?id=43477057
It got a fairly strong reaction (atop is widespread and the blog author has a bit of a following here).
A post from them yesterday made it to the front of HN.
https://news.ycombinator.com/item?id=43477057
Edited to be the HN post.
I found this hard to follow, like it was in the middle of something.
Responsible disclosure has been a thing for a long time. This is not professional behavior.
You need to establish the causal link between user1 doing something and atop segfaulting. That link is what determines whether there's any potential exploitability to take advantage of. It's easy to think of a scenario where user1 is using almost all the memory on the system and user2 runs atop, which segfaults because atop is missing a memory check and overcommit is disabled; or where user1 deletes a file, which causes another app to crash given the wrong permissions on the file.
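The first scenario doesn't even need memory corruption. A generic sketch (not atop's code) of the "missing memory check" pattern: with overcommit disabled and another user holding most of the RAM, malloc() really does return NULL, and any allocation whose result isn't checked turns that into a plain segfault for whoever runs the tool next - a crash, not necessarily a vulnerability.

    /* Generic sketch of an unchecked allocation -- not atop's code. Under
     * memory pressure (e.g. vm.overcommit_memory=2 and another user holding
     * most of the RAM), malloc() returns NULL and the copy below crashes. */
    #include <stdlib.h>
    #include <string.h>

    struct sample {
        char comm[256];
    };

    static struct sample *record_sample(const char *comm)
    {
        struct sample *s = malloc(sizeof *s);
        /* BUG: no NULL check; on an exhausted system this dereferences a
         * null pointer and the monitoring tool simply segfaults. */
        strncpy(s->comm, comm, sizeof s->comm - 1);
        s->comm[sizeof s->comm - 1] = '\0';
        return s;
    }

    int main(void)
    {
        free(record_sample("example"));
        return 0;
    }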
> Okay, first off, everybody breathe. Everyone is freaking out. This is not the way to do this.
Why are you doing it this way then?
Yeah, that's a half-assed apology. Yesterday's post might have unintentionally sent a lot of heads spinning, but "this is not the way to do this" is not an apology; it's a double-down.
I understand there is a potential heap overflow in atop, and thanks for letting everyone know; but you're also making the people capable of taking advantage of it aware that there is a possibility of doing so in the wild. Due process is to let the developers fix it and then tell everyone to upgrade.
Anyway, a C program that runs as root for lifestyle purposes (i.e. not as a critical service) is a big no-no. And I say this as someone who likes to write C - I love C. But I wouldn't push my C code onto anyone else's computer, especially not code requesting root access. I'm not that good.
I would give her the benefit of the doubt and presume that the responsible parties were informed some time ago.
[flagged]
She does when she writes books (e.g. https://rachelbythebay.gumroad.com/l/bozo-loop-epub). She doesn't have her full name in the name of her blog because, well, whyever should she?
(You are writing here under the name "Niten" which I am going to guess is not your full name. I am writing under the name "gjm11" which is also not my full name, though as it happens my full name is readily discoverable via my HN profile while yours is not. Obviously neither of us actually believes that there is something wrong with not stating your full name explicitly every time you write something.)
> Obviously neither of us actually believes that there is something wrong with not stating your full name explicitly every time you write something.
Who knows? Maybe "Niten" does believe that and has a massive shame and public-embarrassment kink. There's nothing wrong with that; that sort of thing is totally harmless.
> Yeah that's a half-assed apology.
That makes sense. That's because it's not an apology; it's an admonition to those who are blowing their stacks.
> Why are you doing it this way then?
I'd argue that it'd be a beneficial life lesson for the people who are freaking out over a "Hey, maybe stop using 'atop'." comment to learn how to enhance their calm.
Blowing one's stack over every little thing shortens your lifespan! It's best to learn how to take friendly warnings about bad things in stride.
Check out this possibly related GitHub issue: https://github.com/Atoptool/atop/issues/330
There haven't been any releases of atop since Jul 27, 2024, v2.11.0.
Are people building atop themselves?
https://archlinux.org/packages/extra/x86_64/atop/ shows 2.11.0-1, not updated since 2024-07-31 11:02 UTC
https://packages.debian.org/sid/atop & https://packages.ubuntu.com/oracular/atop show 2.10.0-3, not updated since Fri, 31 May 2024 13:42:28 +0200 (https://metadata.ftp-master.debian.org/changelogs//main/a/at..., https://changelogs.ubuntu.com/changelogs/pool/universe/a/ato...)
[flagged]
I don't get comments and commenters like this. [0]
It's unquestionably an "improvement" over the original, provided we measure that by the amount of information provided, which is the obvious way to measure it for every normal human being.
Yes, the author is very clearly dancing around something. No, nobody here knows why either. No, they don't have a damn concussion, and their post really isn't a difficult read at all.
Your ideas about NDAs are speculative (according to yourself). Why are you taking them at face value?
[0] But maybe this is just my variation on when people in YouTube comment sections get mad at each other for quoting the zingers from the corresponding videos, so eh...