Okay, since there’s so much stuff to digest here and apparently there are issues designated as wontfix by GnuPG maintainers, can someone more in the loop tell us whether using gpg signatures on git commits/tags is vulnerable? And is there any better alternative going forward? Like is signing with SSH keys considered more secure now? I certainly want to get rid of gpg from my life if I can, but I also need to make sure commits/tags bearing my name actually come from me.
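For concreteness, the SSH-signing setup I'd apparently be switching to is just a few git config knobs (paths and the email here are placeholders, not a recommendation):

    git config --global gpg.format ssh
    git config --global user.signingkey ~/.ssh/id_ed25519.pub
    git config --global commit.gpgsign true

    # verification side: map identities to keys, then check signatures
    echo "you@example.com $(cat ~/.ssh/id_ed25519.pub)" > ~/.ssh/allowed_signers
    git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
    git log --show-signature -1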
One of those WONTFIXes is on an insane vulnerability: you can bitflip known plaintext in a PGP message to switch it into handling compression, allowing attackers to instruct GnuPG packet processing to look back to arbitrary positions in the message, all while suppressing the authentication failure message. GPG's position was: they print, in those circumstances, an error of some sort, and that's enough. It's an attack that reveals plaintext bytes!
Are you referring to "Encrypted message malleability checks are incorrectly enforced causing plaintext recovery attacks"?
Seems like a legitimate difference of opinion. The researcher wants a message with an invalid format to return an integrity failure message. Presumably the GnuPG project thinks that would be better handled by some sort of bad format error.
The exploit here is a variation on the age-old idea of tricking a PGP user into decrypting an encrypted message and then sending the result to the attacker. The novelty here is the idea of making the encrypted message look like a PGP key (identity) and then asking the victim to decrypt the fake key, sign it, and upload it to a keyserver.
Modifying a PGP message file will break the normal PGP authentication[1] (which was not acknowledged in the attack description). So here is the exploit:
* The victim receives an unauthenticated/anonymous (unsigned or with a broken signature) message from the attacker. The message looks like a public key.
* Somehow (perhaps in another anonymous message) the attacker claims they are someone the victim knows and asks them to decrypt, sign and upload the signed public key to a keyserver.
* They see nothing wrong with any of this and actually do what the attacker wants, ignoring the error message about the bad message format.
So this attack is also quite unlikely. Perhaps that factored into the decision of the GnuPG project to not change behaviour in this case, particularly when such a change could introduce other vulnerabilities.
Added: Wait. How would the victim import the bogus PGP key into GPG so they could sign it? There would normally be a preexisting key for that user, so the bogus key would for sure fail to import. It would probably fail anyway. It will be interesting to see what the GnuPG project says about this in their response.
In the course of this attack, just in terms of what happens in the mechanics of the actual protocol, irrespective of the scenario in which these capabilities are abused, the attacker:
(1) Rewrites the ciphertext of a PGP message
(2) Introducing an entire new PGP packet
(3) That flips GPG into DEFLATE compression handling
(4) And then reroutes the handling of the subsequent real message
(5) Into something parsed as a plaintext comment
This happens without a security message, but rather just (apparently) a zlib error.
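To make the malleability primitive concrete, here's a minimal sketch (mine, not from the talk) of the generic known-plaintext bitflip that unauthenticated CFB-style modes permit; the PGP-specific part is which header byte you target:

    # Flipping a ciphertext byte flips the same plaintext byte in CFB,
    # at the cost of garbling the following block. With a known plaintext
    # byte you can choose exactly what it becomes.
    def bitflip(ciphertext: bytes, offset: int, known: int, desired: int) -> bytes:
        mutated = bytearray(ciphertext)
        mutated[offset] ^= known ^ desired
        return bytes(mutated)

    # e.g. rewrite a known packet header byte into a compressed-data tag
    # (offset and byte values here are illustrative, not the real PGP tags):
    # forged = bitflip(ct, header_offset, known=0xCB, desired=0xC8)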
In the scenario presented at CCC, they used the keyserver example to demonstrate plaintext exfiltration. I kind of don't care. It's what's happening under the hood that's batshit; the "difference of opinion" is that the GnuPG maintainers (and, I guess, you) think this is an acceptable end state for an encryption tool.
Everything is better than PGP (not just GPG --- all PGP implementations).
The problem with PGP is that it's a Swiss Army Knife. It does too many things. The scissors on a Swiss Army Knife are useful in a pinch if you don't have real scissors, but tailors use real scissors.
Whatever it is you're trying to do with encryption, you should use the real tool designed for that task. Different tasks want altogether different cryptosystems with different tradeoffs. There's no one perfect multitasking tool.
When you look at the problem that way, surprisingly few real-world problems ask for "encrypt a file". People need backup, but backup demands backup cryptosystems, which do much more than just encrypt individual files. People need messaging, but messaging is wildly more complicated than file encryption. And of course people want package signatures, PGP's most mainstream usage, which is ironic because it relies on only a tiny fraction of PGP's functionality and still somehow doesn't work.
All that is before you get to the absolutely deranged 1990s design of PGP, which is a complex state machine that switches between different modes of operation based on attacker-controlled records (which are mostly invisible to users). Nothing modern looks like PGP, because PGP's underlying design predates modern cryptography. It survives only because nerds have a parasocial relationship with it.
I do it with FIDO2. It's inconvenient with multiple Yubikeys (I always end up adding the entry manually with ssh-agent), and I have to touch the Yubikey every time it signs. That makes it very annoying when rebasing a few tens of commits, for instance.
Sure, but then it is set to no-touch for every FIDO2 interaction I have. I don't want to touch for signing, but I want to touch when using it as a passkey, for instance.
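A workaround sketch (flags straight from the ssh-keygen man page; the key path is mine): keep separate credentials with separate policies, e.g. a dedicated no-touch signing key alongside a touch-required passkey:

    # FIDO2 SSH key that won't demand a touch per signature; note the
    # verifying side must also be willing to accept no-touch-required.
    ssh-keygen -t ed25519-sk -O resident -O no-touch-required -f ~/.ssh/id_sign_sk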
The thing I can't get past with PGP / GPG is that it tries to work around MITM attacks by encouraging users to place their social network on the public record (via public key attestation).
This is so insane to me. The whole point of using cryptography is to keep private information private. It's hard to think of ways PGP could fail more as a security/privacy tool.
Do you mean keyservers? Keyservers have nothing to do with the identity verification required to prevent MITM attacks. There is only one method available for PGP: comparison of key fingerprints/IDs.
Keyservers are simply a convenient way to get a public key (identity). Most people don't have to use them.
> Use Signal. Or Wire, or WhatsApp, or some other Signal-protocol-based secure messenger.
That's a "great" idea considering the recent legal developments in the EU, which OpenPGP, as bad as it is, doesn't suffer from. It would be great if the author updated his advice into something more future-proof.
There's no future-proof suggestion that's immune to the government declaring it a crime.
If you want a suggestion for secure messaging, it's Signal/WhatsApp. If you want to LARP at security with a handful of other folks, GPG is a fine way to do that.
Nobody decided that it's a crime, and it's unlikely to happen. Question is, what do you do with mandatory snooping of centralized proprietary services that renders them functionally useless aside from "just live with it". I was hoping for actual advice rather than a snarky non-response, yet here we are.
I gave you the answer that exists: I'm not aware of any existing or likely-to-exist secure messaging solution that would be a viable recommendation.
The available open-source options come nowhere close to the messaging security that Signal/Whatsapp provide. So you're left with either "find a way to access Signal after they pull out of whatever region has criminalized them operating with a backdoor on comms" or "pick any option that doesn't actually have strong messaging security".
Not the GP, but most of us want to communicate with other people, which means SMS or WhatsApp. No point having perfect one-time-pad encryption and no one to share pads with.
You're asking for a technical solution to a political problem.
The answer is not to live with it, but become politically active to try to support your principles. No software can save you from an authoritarian government - you can let that fantasy die.
Could you please link the source code for the WhatsApp client, so that we can see the cryptographic keys aren't being stored and later uploaded to Meta's servers, completely defeating the entire point of Signal's E2EE implementation and ratchet protocol?
This may shock you, but plenty of cutting-edge application security analysis doesn't start with source code.
There are many reasons, but one of them is that for the overwhelming majority of humans on the planet, their apps aren't being compiled from source on their device. So since you have to account for the fact that the app in the App Store may not be what's in some git repo, you may as well just start with the compiled/distributed app.
Whether or not other people build from source code has zero relevance to a discussion about the trustworthiness of security promises coming from former PRISM data providers about the closed-source software they distribute. Source availability isn't theater, even when most people never read it, let alone build from it. The existence of surreptitious backdoors and dynamic analysis isn't a knock against source availability.
Signal and WhatsApp do not belong in the same sentence together. One's open source software developed and distributed by a nonprofit foundation with a lengthy history of preserving and advancing accessible, trustworthy, verifiable encrypted calling and messaging going back to TextSecure and RedPhone, the other's a piece of proprietary software developed and distributed by a for-profit corporation whose entire business model is bulk harvesting of user data, with a lengthy history of misleading and manipulating their own users and distributing user data (including message contents) to shady data brokers and intelligence agencies.
To imply these two offer even a semblance of equivalent privacy expectations is misguided, to put it generously.
No, because there is no keyring and you have to supply people's public key each time. It is not suitable for large-scale public key management (with unknown recipients), and it does not support automatic discovery or trust management. age does NOT support signing at all, either.
> you have to supply people's public key each time
Keyrings are awful. I want to supply people’s public keys each time. I have never, in my entire time using cryptography, wanted my tool to guess or infer what key to verify with. (Heck, JOSE has a long history of bugs because it infers the key type, which is also a mistake.)
I have an actual commercial use case that receives messages (which are, awkwardly, files sent over various FTP-like protocols, sigh), decrypts and verifies them, and further processes them. This is fully automated and runs as a service. For horrible legacy reasons, the files are in PGP format. I know the public key with which they are signed (provisioned out of band) and I have the private key for decryption (again, provisioned out of band).
This would be approximately two lines of code using any sane crypto library [0], but there really isn’t an amazing GnuPG alternative that’s compatible enough.
But GnuPG has keyrings, and it really wants to use them and to find them in some home directory. And it wants to identify keys by 32-bit truncated hashes. And it wants to use Web of Trust. And it wants to support a zillion awful formats from the nineties using wildly insecure C code. All of this is actively counterproductive. Even ignoring potential implementation bugs, I have far more code to deal with key rings than actual gpg invocation for useful crypto.
[0] I should really not have to even think about the interaction between decryption and verification. Authenticated decryption should be one operation, or possibly two. But if it’s two, it’s one operation to decapsulate a session key and a second operation to perform authenticated decryption using that key.
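To illustrate, roughly the two lines I mean, with PyNaCl standing in as the sane library (demo keys generated inline; in the real service they'd be provisioned out of band):

    from nacl.public import PrivateKey, Box

    sender, recipient = PrivateKey.generate(), PrivateKey.generate()
    blob = Box(sender, recipient.public_key).encrypt(b"report contents")

    # decrypt() is authenticated decryption in one operation: it raises
    # nacl.exceptions.CryptoError on any tampering, and never returns
    # a guess at the plaintext.
    plaintext = Box(recipient, sender.public_key).decrypt(blob)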
Some years ago I wrote "just a little script" to handle encrypting password-store secrets for multiple recipients. It got quite ugly and much more verbose than planned, switching gpg output parsing to Python for sanity.
I think I used a combination of --keyring <mykeyring> --no-default-keyring.
Never would encourage anyone to do this again.
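If anyone's curious, the shape of it was roughly this (the flags are real gpg options; key IDs and paths are illustrative):

    gpg --no-default-keyring --keyring ./store.kbx \
        --trust-model always \
        --encrypt -r 0xAAAAAAAA -r 0xBBBBBBBB secrets.txt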
>And it wants to identify keys by 32-bit truncated hashes.
That's 64 bits these days.
>I should really not have to even think about the interaction between decryption and verification.
Messaging involves two verifications: one to ensure that you are sending the message to whom you think you are sending it, and the other to ensure that you know whom you received a message from. That is an inherent problem. Yes, you can use a shared key for this, but then you end up doing both verifications manually.
>> And it wants to identify keys by 32-bit truncated hashes.
> That's 64 bits these days.
The fact that it’s short enough that I even need to think about whether it’s a problem is, frankly, pathetic.
> Messaging involves two verifications: one to ensure that you are sending the message to whom you think you are sending it, and the other to ensure that you know whom you received a message from. That is an inherent problem. Yes, you can use a shared key for this, but then you end up doing both verifications manually.
I can’t quite tell what you mean.
One can build protocols that do encrypt-then-sign, encrypt-and-sign, sign-then-encrypt, or something clever that combines encryption and signing. Encrypt-then-sign has a nice security proof, the other two combinations are often somewhat catastrophically wrong, and using a high quality combination can have good performance and nice security proofs.
But all of the above should be the job of the designer of a protocol, not the user of the software. If my peer sends me a message, I should provision keys, and then I should pass those keys to my crypto library along with a message I received (and perhaps whatever session state is needed to detect replays), and my library should either (a) tell me that the message is invalid and not give me a guess as to its contents or (b) tell me it’s valid and give me the contents. I should not need to separately handle decryption and verification, and I should not even be able to do them separately even if I want to.
Would "fetch a short-lived age public key" serve your use case? If so, then an age plugin that build atop the AuxData feature in my Fediverse Public Key Directory spec might be a solution. https://github.com/fedi-e2ee/public-key-directory-specificat...
But either way, you shouldn't have long-lived public keys used for confidentiality. It's a bad design to do that.
We need a keyring at our company, because there's no other medium of communication that reaches both management and technical people at other companies.
And we have massive issues with the ongoing cry of "shut it all off" followed by no improvement and no alternative, because we have to talk with people at other organizations (and every organization runs their own mailserver), and the only really common way of communicating is mail.
And when everyone has a GPG key, you get... what? A keyring.
You could say we do not need gpg because we control the mailserver, but what if a mailserver is compromised and the mails are still sitting in mailboxes?
The public keys are not that public, known only to the correspondents. Still, it's an issue, and we have a keyring.
> you shouldn't have long-lived public keys used for confidentiality.
This statement is generic and misleading. Using long-lived keys for confidentiality is bad in real-time messaging, but for non-ephemeral use cases (file encryption, backups, archives) it is completely fine AND desired.
> Would "fetch a short-lived age public key" serve your use case?
(This is some_furry, I'm currently rate-limited. I thought this warranted a reply, so I switched to this account to break past the limit for a single comment.)
> This statement is generic and misleading.
It may be generic, but it's not misleading.
> Using long-lived keys for confidentiality is bad in real-time messaging, but for non-ephemeral use cases (file encryption, backups, archives) it is completely fine.
What exactly do you mean by "long-lived"?
The "lifetime" of a key being years (for a long-lived backup) is less important than how many encryptions are performed with said key.
The thing you don't want is to encrypt 2^50 messages under the same key. Even if it's cryptographically safe to do that, any post-compromise key rotation will be a fucking nightmare.
The primary reason to use short-lived public keys is to limit the blast radius. Consider these two companies:
Alice Corp. uses the same public key for 30+ years.
Bob Ltd. uses a new public key for each quarter over the same time period.
Both parties might retain the secret key indefinitely, so that if Bob Ltd. needs to retrieve a backup from 22 years ago, they still can.
Now consider what happens if both of them lose their currently-in-use secret key due to a Heartbleed-style attack. Alice has 30 years of disaster recovery to contend with, while Bob only has up to 90 days.
Additionally, file encryption, backups, and archives typically use ephemeral symmetric keys at the bottom of the protocol. Even when a password-based key derivation function is used (and passwords are, for whatever reason, reused), the password hashing function usually has a random salt, thereby guaranteeing uniqueness.
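For clarity, the layering I mean looks like this sketch (names are mine; the key-wrapping step is left abstract since that's where the asymmetric key comes in):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    def encrypt_for_backup(data: bytes, wrap_key):
        # wrap_key: recipient-specific wrapping (e.g. X25519-based KEM)
        file_key = ChaCha20Poly1305.generate_key()  # fresh ephemeral key per file
        nonce = os.urandom(12)
        ciphertext = ChaCha20Poly1305(file_key).encrypt(nonce, data, None)
        # only the wrapped key ties the file back to the recipient's keypair
        return wrap_key(file_key), nonce, ciphertext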
The idea that "backups" magically mean "long-lived" keys are on the table, without nuance, is extremely misleading.
> > Would "fetch a short-lived age public key" serve your use case?
> Sadly no.
*shrug* Then, ultimately, there is no way to securely satisfy your use case.
You introduced "short-lived" vs "long-lived", not me. Long-lived as wall-clock time (months, years) is the default interpretation in this context.
The Alice / Bob comparison is asymmetric in a misleading way. You state Bob Ltd retains all private keys indefinitely. A Heartbleed-style attack on their key storage infrastructure still compromises 30 years of backups, not 90 days. Rotation only helps if only the current operational key is exposed, which is an optimistic threat model you did not specify.
Additionally, your symmetric key point actually supports what I said. If data is encrypted with ephemeral symmetric keys and the asymmetric key only wraps those, the long-lived asymmetric key's exposure does not enable bulk decryption without obtaining each wrapped key individually.
> "There is no way to securely satisfy your use case"
No need to be so dismissive. Personal backup encryption with a long-lived key, passphrase-protected private key, and offline storage is a legitimate threat model. Real-world systems validate this: SSH host keys, KMS master keys, and yes, even PGP, all use long-lived asymmetric keys for confidentiality in non-ephemeral contexts.
And to add to this, incidentally, age (the tool you mentioned) was designed with long-lived recipient keys as the expected use case. There is no built-in key rotation or expiry mechanism because the authors considered it unnecessary for file encryption. If long-lived keys for confidentiality were inherently problematic, age would be a flawed design (so you might want to take it up with them, too).
In any case, yeah, your point about high-fan-out keys with large blast radius is correct. That is different from "long-lived keys are bad for confidentiality" (see above regarding age).
That was just me being goofy in that bit (and only that), but I hope the rest of my message went across. :)
> In fact for file storage why not use an encrypted disk volume so you don't need to use PGP?
Different threat models. Disk encryption (LUKS, VeraCrypt, plain dm-crypt) protects against physical theft. Once mounted, everything is plaintext to any process with access. File-level encryption protects files at rest and in transit: backups to untrusted storage, sharing with specific recipients, storing on systems you do not fully control. You cannot send someone a LUKS volume to decrypt one file, and backups of a mounted encrypted volume are plaintext unless you add another layer.
> The only downside to Sigstore is it hasn’t been widely adopted yet.
Which, from where I stand, means that PGP is the only viable solution because I don't have a choice. I can't replace PGP with Sigstore when publishing to Maven. It's nice to tell me I'm dumb because I use PGP, but really it's not my choice.
> Use SSH Signatures, not PGP signatures.
Here I guess it's just me being dumb on my own. Using SSH signatures with my Yubikeys (FIDO2) is very inconvenient. Using PGP signatures with my Yubikeys literally just works.
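For comparison, the raw SSH-signature flow (namespaces per the ssh-keygen man page; filenames mine), where every -sk operation wants a touch:

    ssh-keygen -Y sign -n file -f ~/.ssh/id_ed25519_sk document.txt
    ssh-keygen -Y verify -n file -f allowed_signers \
        -I you@example.com -s document.txt.sig < document.txt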
> Encrypted Email: Don’t encrypt email.
I like this one, I keep seeing it. Sounds like Apple's developer support: if I need to do something and ask for help, the answer is often: "Don't do it. We suggest you only use the stuff that just works and be happy about it".
Sometimes I have to use emails, and cryptographers say "in that case just send everything in plaintext because eventually some of your emails will be sent in plaintext anyway". Isn't it like saying "no need to use Signal, eventually the phone of one of your contacts will be compromised anyway"?
As a recent dabbling reader of introductory pop-sci content on cryptography, I've been wondering: how are expert roles segmented in the field?
e.g. in Filippo's blog post about age, he clarified that he's not a cryptographer but rather a cryptography engineer. Is that also what your role is? What are the concrete divisions of labor, and what other related but separate positions exist in the overall landscape?
where is the cutoff point of "don't roll your own crypto" in the different levels of expertise?
It's a fundamentally bad idea to have a single key that applications are supposed to look for in a particular place, and then use to sign things.
There is inherent complexity involved in making multi-context key use safe, and it's better to just avoid it architecturally.
Keys (even quantum safe) are small enough that having one per application is not a problem at all.
If an application needs multi-context, they can handle it themselves.
If they do it badly, the damage is contained to that application.
If someone really wants to make an application that just signs keys for other applications to say "this is John Smith's key for git" and "this is John Smith's key for email" then they could do that.
Such an application would not need to concern itself with permissions for other applications calling into it.
The user could just copy and paste public keys, or fingerprints when they want to attest to their identity in a specific application.
The keyring circus (which is how GPG most commonly intrudes into my life) is crazy too.
All these applications insist on connecting to some kind of GPG keyring instead of just writing the secrets to the filesystem in their own local storage.
The disk is fully encrypted, and applications should be isolated from one another.
Nothing is really being accomplished by requiring the complexity of yet another program to "extra encrypt" things before writing them to disk.
I'm sure these bad ideas come from the busy work invented in corporate "security" circles, which invent complexity to keep people employed without any regard for an actual threat model.
> The disk is fully encrypted, and applications should be isolated from one another.
For most apps on non-mobile devices, there isn't filesystem isolation between apps. Disk/device-level encryption solves for a totally different threat model; Apple/Microsoft/Google all ship encrypted storage for secrets (Keychain, Credential Manager, etc), because restricting key material access within the OS has merit.
> I'm sure these bad ideas come from the busy work invented in corporate "security" circles, which invent complexity to keep people employed without any regard for an actual threat model.
Basically everything in PGP/GPG predates the existence of "corporate security circles".
> For most apps on non-mobile devices, there isn't filesystem isolation between apps.
If there isn't there should be. At least my Flatpaks are isolated from each other.
> Apple/Microsoft/Google all ship encrypted storage for secrets (Keychain, Credential Manager, etc), because restricting key material access within the OS has merit.
The Linux equivalents are suspicious and stuck in the past to say the least. Depending on them is extra tedious on top of the tediousness of any PGP keyrings, god forbid a combination of the two.
> Basically everything in PGP/GPG predates the existence of "corporate security circles".
These are not vulnerabilities in the "remote exploit" sense. They should be taken seriously, you should be careful not to run local software on untrusted data, and GPG should probably do more to protect users from shooting themselves in the foot, but the worst thing you could do is panic and throw out a process your partners and colleagues trust. There is nothing here that will disturb your workflow signing commits or apt-get install-ing from your distribution.
If you use cryptographic command line tools to verify data sent to you, be mindful of what you are doing and make sure to understand the attacks presented here. One of the slides is titled "should we even use command line tools", and yes, we should, because the alternative is worse, but we must be diligent in treating all untrusted data as adversarial.
A huge part of GPG’s purported use case is getting a signed/encrypted/both blob from somebody and using GPG to confirm it’s authentic. This is true for packages you download and for commits with signatures.
It is, and other software handling untrusted data should also treat it as adversarial. For example, your package tool should probably not output raw package metadata to the terminal.
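Something as simple as this goes a long way (a sketch, not any particular package manager's code):

    import re

    # strip ANSI escape sequences and stray control bytes before echoing
    # attacker-influenced metadata to a terminal
    ANSI_OR_CTRL = re.compile(r"\x1b\[[0-9;]*[ -/]*[@-~]|[\x00-\x08\x0b-\x1f\x7f]")

    def safe_print(untrusted: str) -> None:
        print(ANSI_OR_CTRL.sub("", untrusted))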
I did the switch this year after getting yet another personal computer. I have 4 in total (work laptop, personal sofa laptop, Mac Mini, Linux tower). I used Yubikeys with gpg and resident ssh keys. All was fine except for the configuration needed to get it to work on all the machines. I also tend to forget the finer details and have to relearn the skills of fetching the public keys into the keychain etc. I got rid of all this by moving to the 1Password ssh agent and git ssh signing. Removes a lot of headaches from my ssh setup. I still have the Yubikey(s) though as a 2nd factor for certain web services. And the gpg agent is still running, but only as a fallback. I will turn this off next year.
I’ve ended up the same place as you. I had previously set up my gpg key on a Yubikey and even used that gpg key to handle ssh authentication. Then at some point it just stopped working, maybe the hardware on my key broke. 2FA still works though.
In any case I figured storing an SSH key in 1Password and using the integrated SSH socket server with my ssh client and git was pretty nice and secure enough. The fact the private key never leaves the 1Password vault unencrypted and is synced between my devices is pretty neat. From a security standpoint it is indeed a step down from having my key on a physical key device, but the hassle of setting up a new Yubikey was not quite worth it.
I’m sure 1Password is not much better than having a passphrase-protected key on disk. But it’s a lot more convenient.
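The whole client-side setup is one ssh_config stanza pointing at the agent socket (macOS path as 1Password documents it; other platforms differ):

    # ~/.ssh/config
    Host *
        IdentityAgent "~/Library/Group Containers/2BUA8C4S2C.com.1password/t/agent.sock"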
> I had previously set up my gpg key on a Yubikey and even used that gpg key to handle ssh authentication. Then at some point it just stopped working, maybe the hardware on my key broke
Did you try SSH in verbose mode to ascertain any errors? Why did you assume the hardware "broke" without any objective qualification of an actual failure condition?
> I figured storing an SSH key in 1Password and using the integrated SSH socket server with my ssh client and git was pretty nice and secure enough
How is trusting a closed-source, for-profit, subscription-based application with your SSH credential "secure enough"?
Choosing convenience over security is certainly not unreasonable, but claiming both are achieved without any compromise borders on ludicrous.
The keys never leave the 1Password store, so you don't have the keys on the local file system. That, plus the fact that these keys are shared over the cloud, was the seller for me. I guess security-wise it's a bit of a downgrade compared to resident keys. But the agent supports agent forwarding etc., which wasn't really working with Yubikey ssh resident keys.
Also worth mentioning that I use 1Password. Bitwarden has a similar feature as far as I know. For the ones who want to self-host etc., it might be the even better solution.
> The keys never leave the 1Password store. So you don’t have the keys on the local file system.
Keychain and 1Password are doing variants of the same thing here: both store an encrypted vault and then give you credentials by decrypting the contents of that vault.
> I certainly want to get rid of gpg from my life if I can
I see this sentiment a lot, but you later hint at the problem. Any "replacement" needs to solve for secure key distribution. Signing isn't hard, you can use a lot of different things other than gpg to sign something with a key securely. If that part of gpg is broken, it's a bug, it can/should be fixed.
The real challenge is distributing the key so someone else can verify the signature, and almost every way to do that is fundamentally flawed, introduces a risk of operational errors or is annoying (web of trust, trust on first use, central authority, in-person, etc). I'm not convinced the right answer here is "invent a new one and the ecosystem around it".
It's not like GPG solves for secure key distribution. GPG keyservers are a mess, and you can't trust their contents anyways unless you have an out of band way to validate the public key. Basically nobody is using web-of-trust for this in the way that GPG envisioned.
This is why basically every modern usage of GPG either doesn't rely on key distribution (because you already know what key you want to trust via a pre-established channel) or devolves to the other party serving up their pubkey over HTTPS on their website.
Yes, not saying that web of trust ever worked. "Pre-established channel" are the other mechanisms I mentioned, like a central authority (https) or TOFU (just trust the first key you get). All of these have some issues, that any alternative must also solve for.
So if we need a pre-established channel anyways, why would people recommending a replacement for GPG workflows need to solve for secure key distribution?
This is a bit like looking at electric cars and saying ~"well you can't claim to be a viable replacement for gas cars until you can solve flight"
A lot of people are using PGP for things that don’t require any kind of key distribution. If you’re just using it to encrypt files (even between pointwise parties), you can probably just switch to age.
(We’re also long past the point where key distribution has been a significant component of the PGP ecosystem. The PGP web of trust and original key servers have been dead and buried for years.)
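For the pointwise case, the whole age workflow is (recipient string abbreviated, filenames mine):

    age-keygen -o key.txt            # prints the age1... public recipient
    age -r age1example... -o notes.txt.age notes.txt
    age -d -i key.txt notes.txt.age > notes.txt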
> As a practical implementation of "six degrees of Kevin Bacon", you could get an organic trust chain to random people.
GPG is terrible at that.
0. Alice's GPG trusts Alice's key tautologically.
1. Alice's GPG can trust Bob's key because it can see Alice's signature.
2. Alice's GPG can trust Carol's key because Alice has Bob's key, and Carol's key is signed by Bob.
After that, things break. GPG has no tools for finding longer paths like Alice -> Bob -> ??? -> signature on some .tar.gz.
I'm in the "strong set", I can find a path to damn near anything, but only with a lot of effort.
The good way used to be using the path finder, some random website maintained by some random guy that disappeared years ago. The bad way is downloading a .tar.gz, checking the signature, fetching the key, then fetching every key that signed it, in the hopes somebody you know signed one of those, and so on.
And GPG is terrible at dealing with that, it hates having tens of thousands of keys in your keyring from such experiments.
GPG never grew into the modern era. It was made for persons who mostly know each other directly. Addressing the problem of finding a way to verify the keys of random free software developers isn't something it ever did well.
What's funny about this is that the whole idea of the "web of trust" was (and, as you demonstrate, is) literally PGP punting on this problem. That's how they talked about it at the time, in the 90s, when the concept was introduced! But now the precise mechanics of that punt have become a critically important PGP feature.
I don't think it punted as much as it never had that as an intended use case.
I vaguely recall the PGP manuals talking about scenarios like a woman secretly communicating with her lover, or Bob introducing Carol to Alice, and people reading fingerprints over the phone. I don't think long trust chains and the use case of finding a trust path to some random software maintainer on the other side of the planet were part of the intended design.
I think to the extent the Web of Trust was supposed to work, it was assumed you'd have some familiarity with everyone along the chain, and work through it step by step. Alice would know Bob, who'd introduce his friend Carol, who'd introduce her friend Dave.
In a signature context, you probably want someone else to know that "you" signed it (I can think of other cases, but that's the usual one). The way to do that requires them to know that the key which signed the data belongs to you. My only point is that this is actually the hard part, which any "replacement" crypto system needs to solve for, and that solving that is hard (none of the methods are particularly good).
> The way to do that requires them to know that the key which signed the data belongs to you.
This is something S/MIME does and I wouldn't say it doesn't do so well. You can start from mailbox validation and that already beats everything PGP has to offer in terms of ownership validation. If you do identity validation or it's a national PKI issuing the certificate (like in some countries) it's a very strong guarantee of ownership. Coughing baby (PGP) vs hydrogen bomb level of difference.
It sounds much more to me like an excuse to use PGP when it doesn't even remotely offer what you want from a replacement.
I’m not sure I completely agree here. For private use, this seems fine. However, this isn’t how email encryption is typically implemented in an enterprise environment. It’s usually handled at the mail gateway rather than on a per-user basis. Enterprises also ensure that the receiving side supports email encryption as well.
There's like one or two use cases where encrypting email could work. The best case I've come across--Bugzilla has the ability to let the user upload a public key to encrypt emails for updates to non-public bugs. It's not a big use case--pretty much the intersection of "must use email" and "can establish identity out of band," which does not describe most communication that uses email. (As tptacek notes in a sibling comment, you pretty much have to limit this to one-and-done stuff too, not anything that's going to be in an ongoing discussion, because leaks via unencrypted replies are basically guaranteed).
Your mail either needs to be encrypted reliably against real adversaries or it doesn't. A private emailing circle doesn't change that. If the idea here is that a private group of friends can just agree never to put anything in their subjects or accidentally send unencrypted replies, I'll just say I ran just such a private circle at Matasano, where we used encrypted mail to communicate about security assessment projects, and unencrypted replies happened.
> Your mail either needs to be encrypted reliably against real adversaries or it doesn't.
It is; GPG takes care of that.
> If the idea here is that a private group of friends can just agree never to put anything in their subjects or accidentally send unencrypted replies
That’s not what I’m talking about. It’s an enterprise - you cannot send non-encrypted emails from your work mail account, the gateway takes care of it. It has many rules, including such based on the sender and recipient.
Surely, someone can print the mail and carry it out of the company’s premises, but at this point it’s intentional and the cat’s already out of the bag.
Even my doctor's office and local government agencies support PGP encrypted emails, and refuse to send personal data via unencrypted email, but tech nerds still claim no one can use it?
I'm yet to finish watching the talk, but it starts with them confirming the demo's fraudulent .iso with Sequoia as well (they call it out by name), so this really makes me think. :)
Sequoia hasn't fixed the attack from the beginning of the talk, the one where they convert between cleartext and full signature formats and inject unsigned bytes into the output because of the confusion.
The latest version of a bad standard is still bad.
This page is a pretty direct indicator that GPG's foundation is fundamentally broken: you're not going to get to a good outcome trying to renovate the 2nd story.
There are people who use GPG for more than that. Those that are fine with just those two features, sure. Heck, you can encrypt with "openssl", no need for age. :D I have a bash function for encryption and decryption!
Why does Fedora / RPM still rely on GPG keys for verifying packages?
This is a staggering ecosystem failure. If GPG has been a known-lost cause for decades, then why haven't alternatives ^W replacements been produced for decades?
This feels pretty unsatisfying: something that’s been “considered harmful” for three decades should be deprecated and then removed in a responsible ecosystem.
(PGP/GPG are of course hamstrung by their own decision to be a Swiss Army knife/only loosely coupled to the secure operation itself. So the even more responsible thing to do is to discard them for purposes that they can’t offer security properties for, which is the vast majority of things they get used for.)
It is, in fact, signed by the author. It's just a PKI, so you intermediate trust in the author through an authority.
This is exactly analogous to the Web PKI, where you trust CAs to identify individual websites, but the websites themselves control their keypairs. The CA's presence intermediates the trust but does not somehow imply that the CA itself does the signing for TLS traffic.
Trusted Publishing doesn’t involve any signing keys (well, there’s an IdP, but the IdP’s signature is over a JWT that the index verifies, not an end signature). You’re thinking of attestations, which do indeed involve a local ephemeral private key.
Again, I must emphasize that this is identical in construction to the Web PKI; that was intentional. There are good criticisms of PKIs on grounds of centrality, etc., but “the end entity doesn’t control the private key” is facially untrue and sounds more like conspiracy than anything else.
On my web server, where the certificate is signed by Let's Encrypt, I do have a file which contains a private key. On PyPI there is no such thing. I don't think the parallel is correct.
With Let’s Encrypt, your private key is (typically) rotated every 90 days. It’s kept on disk because 90 days is too long to reliably keep a private key resident in memory on unknown hardware.
With attestations on PyPI, the issuance window is 15 minutes instead of 90 days. So the private key is kept in memory and discarded as soon as the signing operation is complete, since the next signing flow will create a new one.
At no point does the private key leave your machine. The only salient differences between the two are file versus memory and the validity window, but in both cases PyPI’s implementation of attestations prefers the more ideal thing with respect to reducing the likelihood of local private key disclosure.
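The signing side of that flow is tiny; a sketch (not PyPI's actual code) of an ephemeral key that never touches disk:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def attest(payload: bytes) -> tuple[bytes, bytes]:
        key = Ed25519PrivateKey.generate()  # exists only in memory
        signature = key.sign(payload)
        # the public half is what gets bound into the short-lived certificate
        public = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        return signature, public            # private key is dropped on return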
No? With Let's Encrypt the certificate is rotated, but the private key remains the same, and importantly, Let's Encrypt never gets to see it, and everything is logged.
I mean, it’s an ephemeral VM that you have root on. You don’t own it, but you control it in every useful sense of the word.
But also, that’s an implementation detail. There’s no reason why PyPI couldn’t accept attestations from local machines (using email identities) using this scheme; it’s just more engineering and design work to determine what that would actually communicate.
It might be worthwhile for someone to do this engineering work; e.g., to make attestations work even for folks that use platforms like Codeberg or self-hosted git.
Yeah, completely agreed. I think there's a strong argument to be made for Codeberg as a federated identity provider, which would allow attestations from their runners.
(This would of course require Codeberg to become an IdP + demonstrate the ability to maintain a reasonable amount of uptime and hold their own signing keys. But I think that's the kind of responsibility they're aiming for.)
Can you provide a source for this? To my understanding, the GnuPG project (and by extension PGP as an ecosystem) considers itself very much alive, even though practically speaking it's effectively moribund and irrelevant.
(So I agree that it’s de facto dead, but that’s not the same thing as formal deprecation. The latter is what you do explicitly to responsibly move people away from something that’s not suitable for use anymore.)
I would be very much surprised if GPG has ever really achieved anything other than allowing crypto nerds to proclaim that things were encrypted or signed. Good for them I guess, but not of any practical importance, unlike SSH, TLS, 7Zip encryption, etc.
Maybe the site is overloaded. But as for the "brb, were on it!!!!" - this page had the live stream of the talk when it was happening. Hopefully they'll replace it with the recording when media.ccc.de posts it, which should be within a couple hours.
For anyone relatedly wondering about the "schism", i.e. GnuPG abandoning the OpenPGP standard and doing their own self-governed thing, I found this email particularly insightful on the matter:
https://lists.gnupg.org/pipermail/gnupg-devel/2025-September...
> As others have pointed out, GnuPG is a C codebase with a long history (going on 28 years). On top of that, it's a codebase that is mostly uncovered by tests, and has no automated CI. If GnuPG were my project, I would also be anxious about each change I make. I believe that because of this the LibrePGP draft errs on the side of making minimal changes, with the unspoken goal of limiting risks of breakage in a brittle codebase with practically no tests. (Maybe the new formats in RFC 9580 are indeed "too radical" of an evolutionary step to safely implement in GnuPG. But that's surely not a failing of RFC 9580.)
Nothing has improved and everything has gotten worse since I wrote that. Both factions are sleepwalking into an interoperability disaster. Supporting one faction or the other just means you are part of the problem. The users have to resist being made pawns in this pointless war.
>Maybe the new formats in RFC 9580 are indeed "too radical" of an evolutionary step to safely implement in GnuPG.
Traditionally the OpenPGP process has been based on minimalism and rejected everything without a strong justification. RFC 9580 is basically everything that was rejected by the LibrePGP faction (GnuPG) in the last attempt to come up with a new standard. It contains a lot of poorly justified stuff and some straight up pointless stuff. So just supporting RFC 9580 is not the answer here. It would require significant cleaning up. But again, just supporting LibrePGP is not the answer either. The process has failed yet again and we need to recognize that.
Is anyone else worried that a lot of people coming from the Rust world contribute to free software and mindlessly slap the MIT license on it because it's "the default license"? (Yes, I've had someone say this to me, no joke.)
GnuPG for all its flaws has a copyleft license (GPL3) making it difficult to "embrace extend extinguish". If you replace it with a project that becomes more successful but has a less protective (for users) license, "we the people" might lose control of it.
You are attributing a general trend to a particular language community. I also believe that you are unjustifiably and unfairly interpreting "default license" just because you disagree with what they think the "default license" is. We all know what is meant by this. It just sounds like you think it should be some form of GPL.
No, you're guessing what I'm thinking. I'm telling you that a person I spoke to TOLD ME verbatim "I chose MIT because it's the default license". I'm not guessing that's what they did; that's what they TOLD ME. Do you understand the concept of literally telling someone something?
> Is anyone else worried that a lot of people coming from the Rust world contribute to free software and mindlessly slap on it MIT license
Yeah; I actually used to do that too (use the "default license"), but eventually came to the same realisation and have been moving all my projects to full copyleft.
I'm not worried it might be the case. I'm certain that ubuntu and everyone else replacing gnu stuff with rust MIT stuff is done with the sole purpose of getting rid of copyleft components.
If the new components were GPL licensed there would be less opposition, but we just get called names and our opinions discarded. After all such companies have more effective marketing departments.
I find that this is something reflective of most modern language ecosystems, not just Rust. I actually first started noticing the pervasiveness of MIT on npm.
For me, I am of two minds. On one hand, the fact that billion-dollar empires are built on top of what is essentially unpaid volunteer work does rankle and makes me much more appreciative of copyleft.
On the other hand, most of my hobbyist programming work has continued to be released under some form of permissive license, and this is more a reality of the fact that I work in ecosystems where use of the GPL isn't merely inconvenient but legally impossible, and the pragmatism of permissive licenses wins out.
I do wish that weak copyleft like the Mozilla Public License had caught on as a sort of middle ground, but it seems like those licenses are rare enough that their use would invite as much scrutiny as the GPL, even if it was technically allowed. Perhaps the FSF could have advocated more strongly for weak copyleft in areas where the GPL was legally barred, but I suppose they were too busy not closing the network hole in the GPLv3 to bother.
I love the MPL and I use it wherever I get the opportunity. IMO it has all the advantages of the GPL and lacks the disadvantages (the viral part) that makes the GPL so difficult to use.
The vast majority of open-source software is written by people whose day job is building empires on top of other open-source software, at zero cost and without releasing modifications, which is harder to do with the GPL.
Or maybe the users are just not aware. License flame wars were a thing over 20 years ago; people nowadays may simply not know what can happen to MIT-licensed software.
Hey, this is a completely unacceptable comment on HN. Please read the guidelines and make an effort to observe them if you want to participate here. We have to ban accounts that do this repeatedly. https://news.ycombinator.com/newsguidelines.html
Obviously I am aware that not all user actions represent choices, but the hypothetical being proposed was specifically in the context of good established free software alternatives existing. In that context, users switching to software with more permissive licenses would imply a choice on the users' part. It is reasonable to assume this choice implies the users value something about the other software more than they value what the GPL incumbent has to offer. Of course such a choice could be motivated by many things like newer features, a slick website, or the author's marketing, but whatever the case, if the license was not sufficient enticement to stay, this feels significant.
A company adopts some software with a free but not copyleft license. Adopts means they declare "this is good, we will use it".
Developers help develop the software (free of charge) and the company says thank you very much for the free labour.
Company puts that software into everything it does, and pushes it into the infrastructure of everything it does.
Some machines run that software because an individual developer put it there; other machines run that software because a company put it there, sometimes by exerting some sort of power for it to end up there (for example, economic incentives to vendors, like android).
At some point the company says "you know what, we like this software so much that we're going to fork it, but the fork isn't going to be free or open source. It's going to be just ours, and we're not going to share the improvements we made"
But now that software is already running in a lot of machines.
Then the company says "we're going to tweak the software a bit, so that it's no longer inter-operable with the free version. You have to install our proprietary version, or you're locked out" (out of whatever we're discussing hypothetically. Could be a network, a standard, a protocol, etc).
Developers go "shit, I guess we need to run the proprietary version now. we lost control of it."
This is what happened e.g. with chrome. There's chromium, anyone can build it. But that's not chrome. And chrome is what everybody uses because google has lock-in power. Then google says "oh I'm going to disallow you running the extensions you like, so we can show you more ads". Then they make tweaks to chrome so that websites only get rendered well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
And all of this was initially built by free labour, which google took, by people who thought they were contributing to some commons in a sense.
Copyleft licenses protect against this. Part of the license says: "if you use this license, and you make changes to the software, you have to share the changes as well; you can't keep them for yourself."
> This is what happened e.g. with chrome. There's chromium, anyone can build it. But that's not chrome. And chrome is what everybody uses because google has lock-in power.
Because Google has their attention. You can use chromium, but most people don't and pick the first thing they see. Also, Chrome is a much better name, err, not better but easier to say.
> Then google says "oh I'm going to disallow you running the extensions you like, so we can show you more ads". Then they make tweaks to chrome so that websites only get rendered well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
You and I have a different definition of "forced". But, are you speculating this might happen, or do you have an example of it happening?
> And all of this was initially built by free labour, which google took, by people who thought they were contributing to some commons in a sense.
Do you have an example of a site that works better in chrome, than it does in chromium? I'll even take an example of a site that works worse in the version of chromium before manifest v2 was disabled, compared to whatever version of chrome you choose?
> Copyleft licenses protect against this. Part of the license says: "if you use this license, and you make changes to the software, you have to share the changes as well; you can't keep them for yourself."
Is chromium not still foss? Other than branding, what APIs or features are missing from the FOSS version? You mentioned manifest v3, but I'm using firefox because of it, so I don't find that argument too compelling. I don't think FOSS is worse, I think google is making a bad bet.
Large parts of Chrome are actually GPL AFAIK, which is one reason both Apple and Google made it open source in the first place.
> chrome is what everybody uses because google has lock-in power.
Incorrect. At least on Windows, Chrome is not the default browser, it is the browser that most users explicitly choose to install, despite Microsoft's many suggestions to the contrary.
This is what most pro-antitrust arguments miss. Even when consumers have to go out of their way to pick Google, they still do. To me, this indicates that Google is what people actually want, but that's an inconvenient fact which doesn't fit the prevailing political narrative.
> so that websites only get rendered well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
What is a Chrome API that web developers could possibly implement but that "isn't public?" What would that even mean in this context?
> google says "oh I'm going to disallow you running the extensions you like, so we can show you more ads".
And that could have happened just as well if Chrome was 100% open source and GPL.
Even if you accept the claim that Manifest V3's primary purpose was not increasing user security at face value (and that's a tenuous claim at best), it was perfectly possible for all third-party browsers (notably including Edge, which has 0 dependency on Google's money) to fork Chromium in a way that kept old extensions working. However, open source does not mean that features will magically appear in your software. If Google is the primary maintainer and Google wishes to remove some feature, maintaining that feature in your fork requires upkeep, upkeep that most Chromium forkers were apparently unwilling to provide. This has nothing to do with whether Chrome is open source or not.
>> At some point the company says "you know what, we like this software so much that we're going to fork it, but the fork isn't going to be free or open source. It's going to be just ours, and we're not going to share the improvements we made"
Right. So at that point all those contributing developers are free to fork, and maintain the fork. You have just as much control as you always did.
And of course being MIT or GPL doesn't make a difference, the company is permitted to change the license either way. [1]
So here's the thing, folk are free to use the company product or not. Folk are free to fork or not.
In practice of course the company version tends to win because products need revenue to survive. And OSS has little to zero revenue. (The big revenue comes from, you know, companies who typically sell commercial software.)
Even with the outcome you hypothesize (and clearly that is a common outcome) OSS is still ahead because they have the code up to the fork. And yes, they may have contributed to earn this fork.
But projects are free to change license. That's just built into how licenses work. Assuming that something will be GPL or MIT or whatever [2] forever is on you, not them.
[1] I'm assuming a CLA is in play, because without that your explanation won't work.
[2] yes, I think GPL sends a signal of intention more than MIT, but it's just a social signal, it doesn't mean it can't change. Conversely making it GPL makes it harder for other developers to adopt in the first place since most are working in non-GPL environments.
> Right. So at that point all those contributing developers are free to fork, and maintain the fork. You have just as much control as you always did.
Yep. And we've seen this happen. E.g., MariaDB forked off from MySQL. Illumos forked from Solaris. Etc. It's not a nice thing to have to do, but it's hardly a doomsday situation.
> Is anyone else worried that [...] the Rust world [...] slap on it MIT license because it's [reason you don't like]?
No... I don't think that's how software works. Do you have an example of that happening? Has any foss project lost control of the "best" version of some software?
> Not everything in software is about features.
I mean, I would happily make the argument that giving you (the people) permission to use my work however you want, without having to follow my rules, is a feature. But then, stopping someone from using something in a way you don't like is just another feature of GPL software too, is it not?
I don't know the legals in detail, but I can't imagine that the GPL would do something about how you use it in your home? How is that enforceable?
Again, I don't know the legals, but I think in practical terms this affects companies trying to own a project.
> using something in a way you don't like
You're mischaracterizing what I'm saying. For one thing, you're talking about "someone" when I'm talking about "someone with power". Copyleft isn't about two people, one gaining power over the other. It's about lots of people with no power protecting themselves against one entity with a lot of power to impose themselves.
> Do you have an example of that happening?
Are you new to HN? Every month there's news of projects trying to wrest power from contributors using various shenanigans. Copyleft protects against a class of such attacks.
I don't mind gpg. I still use it a lot especially with the private keys on openpgp smartcards or yubikeys.
It's a pretty great ecosystem; most hardware smartcards are surrounded by a lot of black magic and secret handshakes, and stuff like PKCS#11 and OpenSC/OpenCT is much, much harder to configure.
I use it for many things, but not for email: encrypted backups, password manager, ssh keys. For some of these there are other hardware options like FIDO2, but not for all use cases, and not the same one for each use case. So I expect to be using gpg for a long time to come.
A thru-line of some of the gnarliest vulnerabilities here is PGP's insane packet system, where a PGP message is a practically arbitrary stream of packets, some control and some data, with totally incoherent cryptographic bindings. It's like something in between XMLDSIG (which pulls cryptographic control data out of random places in XML messages according to attacker-controlled tags) and SSL2 (with no coherent authentication of the complete handshake).
The attack on detached signatures (attack #1) happens because GnuPG needs to run a complicated state machine that can put processing into multiple different modes, among them three different styles of message signature. In GPG, that whole state machine apparently collapses down to a binary check of "did we see any data so that we'd need to verify a signature?", and you can selectively flip that predicate back and forth by shoving different packets into the message stream, even if you've already sent data that needs to be verified.
The malleability bug (attack #4) is particularly slick. Again, it's an incoherent state machine issue. GPG can "fail" to process a packet because it's cryptographically invalid. But it can also fail because the message framing itself is corrupted. Those latter non-cryptographic failures are handled by aborting the processing of the message, putting GPG into an unexpected state where it's handling an error and "forgetting" to check the message authenticator. You can CBC-bitflip known headers to force GPG into processing DEFLATE compression, and mangle the message such that handling it prints the plaintext in its output.
The formfeed bug (#3) is downright weird. GnuPG has special handling for `\f`; if it occurs at the end of a line, you can inject arbitrary unsigned data, because of GnuPG's handling of line truncation. Why is this even a feature?
Some of these attacks look situational, but that's deceptive, because PGP is (especially in older jankier systems) used as an encryption backend for applications --- Mallory getting Alice to sign or encrypt something on her behalf is an extremely realistic threat model (it's the same threat model as most cryptographic attacks on secure cookies: the app automatically signs stuff for users).
There is no reason for a message encryption system to have this kind of complexity. It's a deep architectural flaw in PGP. You want extremely simple, orthogonal features in the format, ideally treating everything as clearly length-delimited opaque binary blobs. Instead you get a Weird Machine, and talks like this one.
I'm working on a multi-sig file authentication solution based on minisign. Does anyone know the developer's response to minisign's listed vulnerability? If I'm not mistaken, the authors' responses are not included in the vulnerability descriptions.
AFAICT this is GnuPG-specific and not OpenPGP-related? Since GnuPG has pulled out of standards compliance anyway, there are many better options. Sequoia Chameleon even has drop-in tooling for most workflows.
They presented critical parser flaws in all major PGP implementations, not just GnuPG: also Sequoia, minisign, and age. But gpg made the worst impression on us. Wontfix.
Sequoia is mentioned in only one vulnerability, for supporting lines much longer than gpg does. gpg silently truncates and discards long base64 lines; Sequoia does not. So the "vulnerability" is the ability to feed more data to Sequoia, which doesn't have gpg's silent data loss.
In all other cases they only used Sequoia as a tool to build data for demonstrating gpg vulnerabilities.
The vulnerability that opens the talk, where they walk through verifying a Linux ISO's signature and hash and then boot into a malicious image, impacts both GnuPG and Sequoia.
They're not, but the flaws they found are independent of PGP: mainly invalid handling of strings in C and allowing untrusted ANSI escape codes in terminal output.
I think it would be more accurate (and more helpful) to say that the two factions in the OpenPGP standards schism[1] have pulled away from the idea of consensus. There is a fundamental philosophical difference here. The LibrePGP faction (GnuPG) is following the traditional PGP minimalism when it comes to changes and additions to the standard. The RFC-9580 faction (Sequoia) is following a kind of maximalist approach where any potential issue might result in a change or addition.
Fortunately, it turned out that there wasn't anything particularly wrong with the current standards, so we can just keep using them for now and avoid the standards war entirely. Then we will have interoperability across the various implementations. If some weakness comes up that actually requires a standards change, then I suspect consensus will be much easier to find.
Some of these are suggesting that an attacker might trick the victim into decrypting a message before sending to the attacker. If that is really the best sort of attack you can do against PGP then, yeah, that is the kind of vibe you might get.
The specific bugs are with GPG, but a lot of the reason they can exist to begin with is PGP’s convoluted architecture which, IMO, makes these sorts of issues inevitable. I think they are effectively protocol bugs.
From what I can piece together while the site is down, it seems they've uncovered 14 exploitable vulnerabilities in GnuPG, most of which remain unpatched. Some are apparently met with a refusal to patch by the maintainer. Maybe there are good reasons for this refusal; maybe someone else can chime in on that?
Is this another case of XKCD-2347? Or is there something else going on? Pretty much every Linux distro depends on PGP being pretty secure. Surely IBM & co have a couple of spare developers or spare cash to contribute?
> Surely IBM & co have a couple of spare developers or spare cash to contribute?
A major part of the problem is that GPG’s issues aren’t about cash or developer time. It’s fundamentally a bad design for cryptographic usage. It’s so busy trying to be a generic Swiss Army knife for every possible user or use case that it’s basically made of developer and user footguns.
The way you secure this is by moving to alternative, purpose-built tools. Signal/WhatsApp for messaging, age for file encryption, minisign for signatures, etc.
If by "pretty much every Linux distro depends on PGP being pretty secure" you're referring to its use to sign packages in Linux package managers, it's worth noting that they use PGP in fairly narrowly constrained ways; in particular, the data is often already trusted because it was downloaded over HTTPS from a trusted server (making PGP kind of redundant in some ways). So most PGP vulnerabilities don't affect them.
If there were a PGP vulnerability that actually made it possible to push unauthorized updates to RHEL or Fedora systems, then probably IBM would care, but if they concluded that PGP's security problems were a serious threat then I suspect they'd be more likely to start a migration away from PGP than to start investing in making PGP secure; the former seems more tractable and would have maintainability benefits besides.
The other consideration is that if you get access to one of the mirrors and replace a package, it's the signature that stops you. HTTPS is only relevant for MITM attacks.
> they'd be more likely to start a migration away from PGP
Debian and most Debian derivatives have HTTP-only mirrors. Which I've found absolutely crazy for years. Though nobody seems to care. Maybe it'll change this time around.
Though this kind of knife-juggling is not unique to Linux. AMD and many other hardware vendors ship executables over unencrypted connections for Windows, all just hoping that no similar vulnerability or confusion can be found.
Downloading over HTTPS does not help with that (although it can prevent spies from seeing what files you are downloading) unless you can independently verify the server's keys. The certificate is intended to do this, but the way standard certificate authorities work only verifies the domain name, and it has some other limitations. TLS does have other benefits, but it does a different thing. Using only TLS to verify packages is not very good, especially with the existing public certificate authorities.
If you only need a specific version and you already know which one it is, then a cryptographic hash is a better way to verify packages, although that only applies to one specific version of one specific package. So using an encrypted protocol (HTTPS or any other) alone will not help, although it will help in combination with other things; you will need to do those other things as well to improve security.
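As a concrete version of the hash approach, assuming the expected digest was obtained over a trusted channel (the hex value and file name here are placeholders):

```sh
echo "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b  pkg.tar.gz" \
    | sha256sum -c -
```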
It was a cleartext signature, not a detached signature.
Edit: even better, it was both. There is a signature type confusion attack going on here. I still haven't watched the entire thing, but it seems that, unlike gpg, Sequoia requires you to specify --cleartext explicitly, so there is no confusion in that case.
ffmpeg doesn't have a cargo-cult of self-proclaimed "privacy experts" that tell activists and whistleblowers to use their thing instead of other tools cryptographers actually recommend.
Yeah, instead they have a cargo-cult of self-proclaimed OSS contribution experts who harass anyone that critiques or challenges ffmpeg's twitter account.
If mass use of GPG benefited Microsoft, Amazon, Google and all the other assholes, it would be polished, slick, and part of the 9th grade curriculum. They call it “Face ID”; that’s the Orwellian shit that makes money, so that’s what we get instead. These things take resources; don’t blame the projects.
Don't you think it's time to update it, given you start by saying that "If someone, while trying to sell you some high security mechanical system, told you that the system had remained unbreached for the last 20 years you would take that as a compelling argument"?
Because you're clearly presenting it as a defense of PGP, on a thread about a presentation clearly delineating breaks in it, using exactly the kind of complexity that the article you're responding to predicts would cause it to break.
Nope. Not yet enabled. It was submitted to HN right after the talk where they promised to make it public "really soon" after the talk. We all saw the talk live or on the stream
Is there a better alternative to GPG?
Everything is better than PGP (not just GPG --- all PGP implementations).
The problem with PGP is that it's a Swiss Army Knife. It does too many things. The scissors on a Swiss Army Knife are useful in a pinch if you don't have real scissors, but tailors use real scissors.
Whatever it is you're trying to do with encryption, you should use the real tool designed for that task. Different tasks want altogether different cryptosystems with different tradeoffs. There's no one perfect multitasking tool.
When you look at the problem that way, surprisingly few real-world problems ask for "encrypt a file". People need backup, but backup demands backup cryptosystems, which do much more than just encrypt individual files. People need messaging, but messaging is wildly more complicated than file encryption. And of course people want package signatures, ironically PGP's most mainstream usage: ironic because it relies on only a tiny fraction of PGP's functionality and still somehow doesn't work.
All that is before you get to the absolutely deranged 1990s design of PGP, which is a complex state machine that switches between different modes of operation based on attacker-controlled records (which are mostly invisible to users). Nothing modern looks like PGP, because PGP's underlying design predates modern cryptography. It survives only because nerds have a parasocial relationship with it.
> It survives only because nerds have a parasocial relationship with it.
I really would like to replace PGP with the "better" tool, but:
* Using my Yubikey for signing (e.g. for git) has a better UX with PGP than with SSH
* I have to use PGP to sign packages I send to Maven
Maybe I am a nerd emotionally attached to PGP, but after a year signing with SSH, I went back to PGP and it was so much better...
> better UX with PGP instead of SSH
This might be true of comparing GPG to SSH-via-PIV, but there's a better way with far superior UX: derive an SSH key from a FIDO2 slot on the YubiKey.
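For reference, a minimal sketch of that setup (file paths are just examples); the -O flags and the git options are standard OpenSSH and git features:

```sh
# Generate an SSH key whose private half lives in a FIDO2 slot on the token;
# -O resident stores it on the YubiKey, -O verify-required demands the PIN.
ssh-keygen -t ed25519-sk -O resident -O verify-required -f ~/.ssh/id_ed25519_sk

# Point git at it for commit/tag signing (git >= 2.34).
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519_sk.pub
git config --global commit.gpgsign true
```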
I do it with FIDO2. It's inconvenient when you have multiple Yubikeys (I always end up adding the entry manually with ssh-agent), and I have to touch the Yubikey every time it signs. That makes it very annoying when rebasing a few tens of commits, for instance.
With GPG it just works.
For what it's worth: You can set no-touch-required on a key (it's a generation-time option though).
Sure, but then it is set to no-touch for every FIDO2 interaction I have. I don't want to touch for signing, but I want to touch when using it as a passkey, for instance.
The thing I can't get past with PGP / GPG is that it tries to work around MITM attacks by encouraging users to place their social network on the public record (via public key attestation).
This is so insane to me. The whole point of using cryptography is to keep private information private. Its hard to think of ways PGP could fail more as a security / privacy tool.
Do you mean keyservers? Keyservers have nothing to do with the identity verification required to prevent MITM attacks. There is only one method available for PGP: comparison of key fingerprints/IDs.
Keyservers are simply a convenient way to get a public key (identity). Most people don't have to use them.
Now can you give us a list of all the features of PGP and a tool that does one specific thing really well?
https://www.latacora.com/blog/2019/07/16/the-pgp-problem/#th...
> Use Signal. Or Wire, or WhatsApp, or some other Signal-protocol-based secure messenger.
That's a "great" idea considering the recent legal developments in the EU, which OpenPGP, as bad as it is, doesn't suffer from. It would be great if the author updated his advice into something more future-proof.
There's no future-proof suggestion that's immune to the government declaring it a crime.
If you want a suggestion for secure messaging, it's Signal/WhatsApp. If you want to LARP at security with a handful of other folks, GPG is a fine way to do that.
Nobody decided that it's a crime, and it's unlikely to happen. Question is, what do you do with mandatory snooping of centralized proprietary services that renders them functionally useless aside from "just live with it". I was hoping for actual advice rather than a snarky non-response, yet here we are.
> Nobody decided that it's a crime, and it's unlikely to happen.
Which jurisdiction are you on about? [1] Pick your poison.
For example, the UK has a law forcing suspects to cooperate. This law has been used to convict suspects who weren't cooperating.
NL does not, but the police can use force to have a suspect unlock a device with a finger or face.
[1] https://en.wikipedia.org/wiki/Key_disclosure_law
I gave you the answer that exists: I'm not aware of any existing or likely-to-exist secure messaging solution that would be a viable recommendation.
The available open-source options come nowhere close to the messaging security that Signal/Whatsapp provide. So you're left with either "find a way to access Signal after they pull out of whatever region has criminalized them operating with a backdoor on comms" or "pick any option that doesn't actually have strong messaging security".
> messaging security
> WhatsApp
Eh?
There are alternatives, try Ricochet (Refresh) or Cwtch.
I stand by what I said.
I mean... why?
Not the GP, but most of us want to communicate with other people, which means SMS or WhatsApp. No point having perfect one-time-pad encryption and no one to share pads with.
You're asking for a technical solution to a political problem.
The answer is not to live with it, but to become politically active and try to support your principles. No software can save you from an authoritarian government; you can let that fantasy die.
Could you please link the source code for the WhatsApp client, so that we can see the cryptographic keys aren't being stored and later uploaded to Meta's servers, completely defeating the entire point of Signal's E2EE implementation and ratchet protocol?
This may shock you, but plenty of cutting-edge application security analysis doesn't start with source code.
There are many reasons, but one of them is that for the overwhelming majority of humans on the planet, their apps aren't being compiled from source on their device. So since you have to account for the fact that the app in the App Store may not be what's in some git repo, you may as well just start with the compiled/distributed app.
Whether or not other people build from source code has zero relevance to a discussion about the trustworthiness of security promises coming from former PRISM data providers about the closed-source software they distribute. Source availability isn't theater, even when most people never read it, let alone build from it. The existence of surreptitious backdoors and dynamic analysis isn't a knock against source availability.
Signal and WhatsApp do not belong in the same sentence together. One's open source software developed and distributed by a nonprofit foundation with a lengthy history of preserving and advancing accessible, trustworthy, verifiable encrypted calling and messaging going back to TextSecure and RedPhone, the other's a piece of proprietary software developed and distributed by a for-profit corporation whose entire business model is bulk harvesting of user data, with a lengthy history of misleading and manipulating their own users and distributing user data (including message contents) to shady data brokers and intelligence agencies.
To imply these two offer even a semblance of equivalent privacy expectations is misguided, to put it generously.
Saw it, not impressed; GnuPG has a lot more features than signing and file encryption.
And there are lots of tools for file encryption anyway. I have a bash function using openssh; sometimes I use croc (which also uses a PAKE), etc.
I need an alternative to "gpg --encrypt --armor --recipient <foo>". :)
I guess we'll have to live with you being unimpressed.
> I need an alternative to "gpg --encrypt --armor --recipient <foo>"
That's literally age.
https://github.com/FiloSottile/age
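For anyone comparing, a rough sketch of the age equivalent (file names and the recipient string are placeholders):

```sh
age-keygen -o key.txt                                  # prints the public key ("age1...")
age -a -r age1examplerecipient -o doc.txt.age doc.txt  # armored encrypt to a recipient
age -d -i key.txt -o doc.txt doc.txt.age               # decrypt with the identity file
```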
No, because there is no keyring and you have to supply people's public keys each time. It is not suitable for large-scale public key management (with unknown recipients), and it does not support automatic discovery or trust management. Age does NOT support signing at all, either.
> you have to supply people's public key each time
Keyrings are awful. I want to supply people’s public keys each time. I have never, in my entire time using cryptography, wanted my tool to guess or infer what key to verify with. (Heck, JOSE has a long history of bugs because it infers the key type, which is also a mistake.)
I have an actual commercial use case that receives messages (which are, awkwardly, files sent over various FTP-like protocols, sigh), decrypts and verifies them, and further processes them. This is fully automated and runs as a service. For horrible legacy reasons, the files are in PGP format. I know the public key with which they are signed (provisioned out of band) and I have the private key for decryption (again, provisioned out of band).
This would be approximately two lines of code using any sane crypto library [0], but there really isn’t an amazing GnuPG alternative that’s compatible enough.
But GnuPG has keyrings, and it really wants to use them and to find them in some home directory. And it wants to identify keys by 32-bit truncated hashes. And it wants to use Web of Trust. And it wants to support a zillion awful formats from the nineties using wildly insecure C code. All of this is actively counterproductive. Even ignoring potential implementation bugs, I have far more code to deal with key rings than actual gpg invocation for useful crypto.
[0] I should really not have to even think about the interaction between decryption and verification. Authenticated decryption should be one operation, or possibly two. But if it’s two, it’s one operation to decapsulate a session key and a second operation to perform authenticated decryption using that key.
Some years ago I wrote "just a little script" to handle encrypting password-store secrets for multiple recipients. It got quite ugly and much more verbose than planned; I ended up switching the gpg output parsing to Python for sanity. I think I used a combination of --keyring <mykeyring> and --no-default-keyring. I would never encourage anyone to do this again.
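For the record, the kind of invocation being described looks roughly like this (keyring path and recipients are hypothetical):

```sh
# Encrypt one secret to several recipients from a project-local keyring,
# bypassing the default home-directory keyring and the web of trust.
gpg --no-default-keyring --keyring ./team.kbx --trust-model always \
    --encrypt --armor \
    --recipient alice@example.org --recipient bob@example.org \
    --output secret.asc secret.txt
```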
>And it wants to identify keys by 32-bit truncated hashes.
That's 64 bits these days.
>I should really not have to even think about the interaction between decryption and verification.
Messaging involves two verifications: one to ensure that you are sending the message to who you think you are sending it to, the other to ensure that you know who you received a message from. That is an inherent problem. Yes, you can use a shared key for this, but then you end up doing both verifications manually.
>> And it wants to identify keys by 32-bit truncated hashes.
> That's 64 bits these days.
The fact that it’s short enough that I even need to think about whether it’s a problem is, frankly, pathetic.
> Messaging involves two verifications: one to ensure that you are sending the message to who you think you are sending it to, the other to ensure that you know who you received a message from. That is an inherent problem. Yes, you can use a shared key for this, but then you end up doing both verifications manually.
I can’t quite tell what you mean.
One can build protocols that do encrypt-then-sign, encrypt-and-sign, sign-then-encrypt, or something clever that combines encryption and signing. Encrypt-then-sign has a nice security proof, the other two combinations are often somewhat catastrophically wrong, and using a high quality combination can have good performance and nice security proofs.
But all of the above should be the job of the designer of a protocol, not the user of the software. If my peer sends me a message, I should provision keys, and then I should pass those keys to my crypto library along with a message I received (and perhaps whatever session state is needed to detect replays), and my library should either (a) tell me that the message is invalid and not give me a guess as to its contents or (b) tell me it’s valid and give me the contents. I should not need to separately handle decryption and verification, and I should not even be able to do them separately even if I want to.
Why is a keyring important to you?
Would "fetch a short-lived age public key" serve your use case? If so, then an age plugin that build atop the AuxData feature in my Fediverse Public Key Directory spec might be a solution. https://github.com/fedi-e2ee/public-key-directory-specificat...
But either way, you shouldn't have long-lived public keys used for confidentiality. It's a bad design to do that.
We need a keyring at our company, because there's no other medium of communication that reaches both management and technical people at other companies.
And we have massive issues due to the ongoing cry of "shut everything off" followed by no improvement and no alternative, because we have to talk with people from other organizations (and every organization runs their own mailserver), and the only really common way of communicating is mail.
And when everyone has a GPG key, you get... what? A keyring.
You could say we do not need gpg because we control the mailserver, but what if a mailserver is compromised and the mails are still sitting in mailboxes?
The public keys are not that public, only known to the parties involved. Still, it's an issue, and we have a keyring.
> you shouldn't have long-lived public keys used for confidentiality.
This statement is generic and misleading. Using long-lived keys for confidentiality is bad in real-time messaging, but for non-ephemeral use cases (file encryption, backups, archives) it is completely fine AND desired.
> Would "fetch a short-lived age public key" serve your use case?
Sadly no.
(This is some_furry, I'm currently rate-limited. I thought this warranted a reply, so I switched to this account to break past the limit for a single comment.)
> This statement is generic and misleading.
It may be generic, but it's not misleading.
> Using long-lived keys for confidentiality is bad in real-time messaging, but for non-ephemeral use cases (file encryption, backups, archives) it is completely fine.
What exactly do you mean by "long-lived"?
The "lifetime" of a key being years (for a long-lived backup) is less important than how many encryptions are performed with said key.
The thing you don't want is to encrypt 2^50 messages under the same key. Even if it's cryptographically safe to do that, any post-compromise key rotation will be a fucking nightmare.
The primary reason to use short-lived public keys is to limit the blast radius. Consider these two companies:
Alice Corp. uses the same public key for 30+ years.
Bob Ltd. uses a new public key for each quarter over the same time period.
Both parties might retain the secret key indefinitely, so that if Bob Ltd. needs to retrieve a backup from 22 years ago, they still can.
Now consider what happens if both of them lose their currently-in-use secret key due to a Heartbleed-style attack. Alice has 30 years of disaster recovery to contend with, while Bob only has up to 90 days.
Additionally, file encryption, backups, and archives typically use ephemeral symmetric keys at the bottom of the protocol. Even when a password-based key derivation function is used (and passwords are, for whatever reason, reused), the password hashing function usually has a random salt, thereby guaranteeing uniqueness.
The idea that "backups" magically mean "long-lived" keys are on the table, without nuance, is extremely misleading.
> > Would "fetch a short-lived age public key" serve your use case?
> Sadly no.
shrug Then, ultimately, there is no way to securely satisfy your use case.
You introduced "short-lived" vs "long-lived", not me. Long-lived as wall-clock time (months, years) is the default interpretation in this context.
The Alice / Bob comparison is asymmetric in a misleading way. You state Bob Ltd retains all private keys indefinitely. A Heartbleed-style attack on their key storage infrastructure still compromises 30 years of backups, not 90 days. Rotation only helps if only the current operational key is exposed, which is an optimistic threat model you did not specify.
Additionally, your symmetric key point actually supports what I said. If data is encrypted with ephemeral symmetric keys and the asymmetric key only wraps those, the long-lived asymmetric key's exposure does not enable bulk decryption without obtaining each wrapped key individually.
> "There is no way to securely satisfy your use case"
No need to be so dismissive. Personal backup encryption with a long-lived key, passphrase-protected private key, and offline storage is a legitimate threat model. Real-world systems validate this: SSH host keys, KMS master keys, and yes, even PGP, all use long-lived asymmetric keys for confidentiality in non-ephemeral contexts.
And to add to this, incidentally, age (the tool you mentioned) was designed with long-lived recipient keys as the expected use case. There is no built-in key rotation or expiry mechanism because the authors considered it unnecessary for file encryption. If long-lived keys for confidentiality were inherently problematic, age would be a flawed design (so you might want to take it up with them, too).
In any case, yeah, your point about high-fan-out keys with a large blast radius is correct. That is different from "long-lived keys are bad for confidentiality" (see above with regard to age).
An intended use case for FOKS (https://foks.pub) is to allow long-lived durable shared secrets between users and teams with key rotation when needed.
>Personal backup encryption with a long-lived key, passphrase-protected private key, and offline storage is a legitimate threat model
... If you're going to use a passphrase anyway why not just use a symmetric cipher?
In fact for file storage why not use an encrypted disk volume so you don't need to use PGP?
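For what it's worth, the passphrase-only route is a one-liner in either tool (file names are examples):

```sh
age -p -o backup.tar.age backup.tar                          # age, passphrase mode
gpg --symmetric --armor --output backup.tar.asc backup.tar   # gpg equivalent
```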
That was just me being goofy in that bit (and only that), but I hope the rest of my message went across. :)
> In fact for file storage why not use an encrypted disk volume so you don't need to use PGP?
Different threat models. Disk encryption (LUKS, VeraCrypt, plain dm-crypt) protects against physical theft. Once mounted, everything is plaintext to any process with access. File-level encryption protects files at rest and in transit: backups to untrusted storage, sharing with specific recipients, storing on systems you do not fully control. You cannot send someone a LUKS volume to decrypt one file, and backups of a mounted encrypted volume are plaintext unless you add another layer.
sq (sequoia) should be able to sort that.
I know, I have been using it recently.
This is exactly that, in more detail than you could possibly ever ask for:
https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/
https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/
I wrote this to answer this exact question last year.
> The only downside to Sigstore is it hasn’t been widely adopted yet.
Which, from where I stand, means that PGP is the only viable solution because I don't have a choice. I can't replace PGP with Sigstore when publishing to Maven. It's nice to tell me I'm dumb because I use PGP, but really it's not my choice.
> Use SSH Signatures, not PGP signatures.
Here I guess it's just me being dumb on my own. Using SSH signatures with my Yubikeys (FIDO2) is very inconvenient. Using PGP signatures with my Yubikeys literally just works.
> Encrypted Email: Don’t encrypt email.
I like this one, I keep seeing it. Sounds like Apple's developer support: if I need to do something and ask for help, the answer is often: "Don't do it. We suggest you only use the stuff that just works and be happy about it".
Sometimes I have to use email, and cryptographers say "in that case just send everything in plaintext, because eventually some of your emails will be sent in plaintext anyway". Isn't that like saying "no need to use Signal; eventually the phone of one of your contacts will be compromised anyway"?
offtopic question:
as a recent dabbling reader of introductory popsci content on cryptography, I've been wondering: what are the different segmentations of expert roles in the field?
e.g. in Filippo's blog post about age he clarified that he's not a cryptographer but rather a cryptography engineer. is that also what your role is? what are the concrete divisions of labor, and what other related but separate positions exist in the overall landscape?
where is the cutoff point of "don't roll your own crypto" at the different levels of expertise?
You did not ask me, but you should do your due diligence because there are way too many armchair cryptographers around here.
Depending on what you are after, an alternative could be using SSH keys for signatures and age[1] for encryption targeting SSH keys.
[1] <https://github.com/FiloSottile/age>
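A minimal sketch of that combination, assuming an existing ed25519 SSH key (names and paths are placeholders):

```sh
# Sign a file with the SSH key and verify it against an allowed-signers list.
ssh-keygen -Y sign -f ~/.ssh/id_ed25519 -n file doc.txt   # writes doc.txt.sig
echo "alice@example.org $(cat ~/.ssh/id_ed25519.pub)" > allowed_signers
ssh-keygen -Y verify -f allowed_signers -I alice@example.org \
    -n file -s doc.txt.sig < doc.txt

# Encrypt to the same SSH public key with age; decrypt with the private key.
age -R ~/.ssh/id_ed25519.pub -o doc.txt.age doc.txt
age -d -i ~/.ssh/id_ed25519 -o doc.txt.out doc.txt.age
```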
sq (sequoia) is compatible and is available in your favorite distro. It's the recommended replacement.
https://book.sequoia-pgp.org/about_sequoia.html
This is the right answer.
The problem mostly concerns the oldest parts of PGP (the protocol), which gpg (the implementation) doesn't want to or cannot get rid of.
age
It's a fundamentally bad idea to have a single key that applications are supposed to look for in a particular place, and then use to sign things. There is inherent complexity involved in making multi-context key use safe, and it's better to just avoid it architecturally.
Keys (even quantum safe) are small enough that having one per application is not a problem at all. If an application needs multi-context, they can handle it themselves. If they do it badly, the damage is contained to that application. If someone really wants to make an application that just signs keys for other applications to say "this is John Smith's key for git" and "this is John Smith's key for email" then they could do that. Such an application would not need to concern itself with permissions for other applications calling into it. The user could just copy and paste public keys, or fingerprints when they want to attest to their identity in a specific application.
The keyring circus (which is how GPG most commonly intrudes into my life) is crazy too. All these applications insist on connecting to some kind of GPG keyring instead of just writing the secrets to the filesystem in their own local storage. The disk is fully encrypted, and applications should be isolated from one another. Nothing is really being accomplished by requiring the complexity of yet another program to "extra encrypt" things before writing them to disk.
I'm sure these bad ideas come from the busy work invented in corporate "security" circles, which invent complexity to keep people employed without any regard for an actual threat model.
> The disk is fully encrypted, and applications should be isolated from one another.
For most apps on non-mobile devices, there isn't filesystem isolation between apps. Disk/device-level encryption solves for a totally different threat model; Apple/Microsoft/Google all ship encrypted storage for secrets (Keychain, Credential Manager, etc), because restricting key material access within the OS has merit.
> I'm sure these bad ideas come from the busy work invented in corporate "security" circles, which invent complexity to keep people employed without any regard for an actual threat model.
Basically everything in PGP/GPG predates the existence of "corporate security circles".
> For most apps on non-mobile devices, there isn't filesystem isolation between apps.
If there isn't there should be. At least my Flatpaks are isolated from each other.
> Apple/Microsoft/Google all ship encrypted storage for secrets (Keychain, Credential Manager, etc), because restricting key material access within the OS has merit.
The Linux equivalents are suspicious and stuck in the past to say the least. Depending on them is extra tedious on top of the tediousness of any PGP keyrings, god forbid a combination of the two.
> Basically everything in PGP/GPG predates the existence of "corporate security circles".
Then we know where this stuff came from.
> Then we know where this stuff came from.
I can’t figure out what you mean by this.
and now certain people in corporate security only trust gpg, because they grew up with it :D
These are not vulnerabilities in the "remote exploit" sense. They should be taken seriously, you should be careful not to run local software on untrusted data, and GPG should probably do more to protect users from shooting themselves in the foot, but the worst thing you could do is panic and throw out a process your partners and colleagues trust. There is nothing here that will disturb your workflow signing commits or apt-get install-ing from your distribution.
If you use cryptographic command line tools to verify data sent to you, be mindful of what you are doing and make sure to understand the attacks presented here. One of the slides is titled "should we even use command line tools", and yes, we should, because the alternative is worse; but we must be diligent in treating all untrusted data as adversarial.
A huge part of GPG’s purported use case is getting a signed/encrypted/both blob from somebody and using GPG to confirm it’s authentic. This is true for packages you download and for commits with signatures.
Handling untrusted input is core to that.
It is, and other software handling untrusted data should also treat it as adversarial. For example, your package tool should probably not output raw package metadata to the terminal.
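As a sketch of what "don't output raw metadata" means in practice, untrusted text can be reduced to printable ASCII plus ordinary whitespace before it hits the terminal (the function name is hypothetical, and this drops legitimate non-ASCII too, so it's a blunt instrument):

```sh
# Keep only tab, LF, CR, and printable ASCII; this removes ESC and the
# other control bytes that terminal escape sequences depend on.
print_untrusted() { printf '%s' "$1" | LC_ALL=C tr -cd '\11\12\15\40-\176'; }
```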
I think you’re missing the forest for the trees.
It reads to me like attempting to verify a malicious ascii-armoured signature is a potential RCE.
I did the switch this year after getting yet another personal computer. I have 4 in total (work laptop, personal sofa laptop, Mac Mini, Linux tower). I used Yubikeys with gpg and resident ssh keys. All was fine except for the configuration needed to get it to work on all the machines. I also tend to forget the finer details and have to relearn the skills of fetching the public keys into the keychain etc. I got rid of all this by moving to the 1Password ssh agent and git ssh signing. It removes a lot of headaches from my ssh setup. I still have the Yubikey(s), though, as a second factor for certain web services. And the gpg agent is still running, but only as a fallback. I will turn it off next year.
I’ve ended up the same place as you. I had previously set up my gpg key on a Yubikey and even used that gpg key to handle ssh authentication. Then at some point it just stopped working, maybe the hardware on my key broke. 2FA still works though.
In any case I figured storing an SSH key in 1Password and using the integrated SSH socket server with my ssh client and git was pretty nice and secure enough. The fact the private key never leaves the 1Password vault unencrypted and is synced between my devices is pretty neat. From a security standpoint it is indeed a step down from having my key on a physical key device, but the hassle of setting up a new Yubikey was not quite worth it.
I’m sure 1Password is not much better than having a passphrase-protected key on disk. But it’s a lot more convenient.
> I had previously set up my gpg key on a Yubikey and even used that gpg key to handle ssh authentication. Then at some point it just stopped working, maybe the hardware on my key broke
Did you try to SSH in verbose mode to ascertain any errors? Why did you assume the hardware "broke" without any objective qualification of an actual failure condition?
> I figured storing an SSH key in 1Password and using the integrated SSH socket server with my ssh client and git was pretty nice and secure enough
How is trusting a closed-source, for-profit, subscription-based application with your SSH credential "secure enough"?
Choosing convenience over security is certainly not unreasonable, but claiming both are achieved without any compromise borders on ludicrous.
> 1Password ssh agent and git ssh signing
I’m still working through how to use this, but I have it basically set up and it’s great!
How is 1password safer than the local keychain?
The keys never leave the 1Password store. So you don’t have the keys on the local file system. That, and the fact that these keys are shared over the cloud, was the selling point for me. I guess security-wise it’s a bit of a downgrade compared to resident keys. But the agent supports agent forwarding etc., which wasn’t really working with YubiKey resident ssh keys. Also worth mentioning that I use 1Password; Bitwarden has a similar feature as far as I know. For those who want to self-host etc., that might be the even better solution.
> The keys never leave the 1Password store. So you don’t have the keys on the local file system.
Keychain and 1Password are doing variants of the same thing here: both store an encrypted vault and then give you credentials by decrypting the contents of that vault.
> I certainly want to get rid of gpg from my life if I can
I see this sentiment a lot, but you later hint at the problem. Any "replacement" needs to solve secure key distribution. Signing isn't hard; you can use a lot of different things other than gpg to sign something with a key securely. If that part of gpg is broken, it's a bug; it can and should be fixed.
The real challenge is distributing the key so someone else can verify the signature, and almost every way to do that is fundamentally flawed, introduces a risk of operational errors or is annoying (web of trust, trust on first use, central authority, in-person, etc). I'm not convinced the right answer here is "invent a new one and the ecosystem around it".
It's not like GPG solves secure key distribution. GPG keyservers are a mess, and you can't trust their contents anyway unless you have an out-of-band way to validate the public key. Basically nobody is using the web of trust for this in the way that GPG envisioned.
This is why basically every modern usage of GPG either doesn't rely on key distribution (because you already know what key you want to trust via a pre-established channel) or devolves to the other party serving up their pubkey over HTTPS on their website.
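In shell terms, the "pubkey over HTTPS" pattern is just this (URL hypothetical), with the fingerprint still needing an out-of-band check:

```sh
curl -sf https://example.org/alice.asc | gpg --import
gpg --fingerprint alice@example.org   # compare against a fingerprint obtained elsewhere
```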
Yes, and I'm not saying that the web of trust ever worked. "Pre-established channels" are the other mechanisms I mentioned, like a central authority (HTTPS) or TOFU (just trust the first key you get). All of these have some issues that any alternative must also solve for.
So if we need a pre-established channel anyways, why would people recommending a replacement for GPG workflows need to solve for secure key distribution?
This is a bit like looking at electric cars and saying ~"well you can't claim to be a viable replacement for gas cars until you can solve flight"
A lot of people are using PGP for things that don’t require any kind of key distribution. If you’re just using it to encrypt files (even between pointwise parties), you can probably just switch to age.
(We’re also long past the point where key distribution has been a significant component of the PGP ecosystem. The PGP web of trust and original key servers have been dead and buried for years.)
This is not the first time I see "secure key distribution" mentioned in HN+(GPG alternatives) context and I'm a bit puzzled.
What do you mean? Web of Trust? Keyservers? A combination of both? Under what use case?
I'm assuming they mean the old way of signing each other's keys.
As a practical implementation of "six degrees of Kevin Bacon", you could get an organic trust chain to random people.
Or at least, more realistically, to a few nerds. I think I signed 3-4 people's keys.
The process had - as they say - a low WAF.
> As a practical implementation of "six degrees of Kevin Bacon", you could get an organic trust chain to random people.
GPG is terrible at that.
0. Alice's GPG trusts Alice's key tautologically.
1. Alice's GPG can trust Bob's key because it can see Alice's signature on it.
2. Alice's GPG can trust Carol's key because Alice has Bob's key, and Carol's key is signed by Bob.
After that, things break. GPG has no tools for finding longer paths like Alice -> Bob -> ??? -> signature on some .tar.gz.
I'm in the "strong set", I can find a path to damn near anything, but only with a lot of effort.
The good way used to be the path finder, some random website maintained by some random guy, which disappeared years ago. The bad way is downloading a .tar.gz, checking the signature, fetching the key, then fetching every key that signed it, in the hope that somebody you know signed one of those, and so on.
And GPG is terrible at dealing with that; it hates having tens of thousands of keys in your keyring from such experiments.
GPG never grew into the modern era. It was made for persons who mostly know each other directly. Addressing the problem of finding a way to verify the keys of random free software developers isn't something it ever did well.
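To make the "bad way" described above concrete, the manual walk looks something like this (key IDs are placeholders):

```sh
gpg --verify release.tar.gz.sig release.tar.gz   # fails: public key not found
gpg --recv-keys 0xDEADBEEF12345678               # fetch the signer's key
gpg --list-sigs 0xDEADBEEF12345678               # see who signed that key
gpg --recv-keys 0xFEEDFACE87654321               # fetch those keys too...
# ...and recurse, hoping to reach a key you've already verified.
```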
What's funny about this is that the whole idea of the "web of trust" was (and, as you demonstrate, is) literally PGP punting on this problem. That's how they talked about it at the time, in the 90s, when the concept was introduced! But now the precise mechanics of that punt have become a critically important PGP feature.
I don't think it punted so much as it never had that as an intended use case.
I vaguely recall the PGP manuals talking about scenarios like a woman secretly communicating with her lover, or Bob introducing Carol to Alice, and people reading fingerprints over the phone. I don't think long trust chains and the use case of finding a trust path to some random software maintainer on the other side of the planet were part of the intended design.
I think to the extent the Web of Trust was supposed to work, it was assumed you'd have some familiarity with everyone along the chain, and work through it step by step. Alice would know Bob, who'd introduce his friend Carol, who'd introduce her friend Dave.
In a signature context, you probably want someone else to know that "you" signed it (I can think of other cases, but that's the usual one). The way to do that requires them to know that the key which signed the data belongs to you. My only point is that this is actually the hard part, which any "replacement" crypto system needs to solve for, and that solving that is hard (none of the methods are particularly good).
> The way to do that requires them to know that the key which signed the data belongs to you.
This is something S/MIME does, and I wouldn't say it does it badly. You can start from mailbox validation, and that already beats everything PGP has to offer in terms of ownership validation. If you do identity validation, or it's a national PKI issuing the certificate (as in some countries), it's a very strong guarantee of ownership. Coughing baby (PGP) vs. hydrogen bomb level of difference.
It sounds much more like an excuse to use PGP when it doesn't even remotely offer what you'd want from a replacement.
I think it should be mostly ad-hoc methods:
* If you have a website, put your keys on a dedicated page and direct people there
* If you are in an org, there can be whatever kind of centralised repo
* Add the hashes to your email signature and/or profile bios
There might be a nice uniform solution using DNS and derived keys, like certificate chains? I'm not sure, but I think it might not be necessary.
Zero-days from the CCC talk https://fahrplan.events.ccc.de/congress/2025/fahrplan/event/...
But trust in Werner Koch is gone. Wontfix??
I am curious what you mean by "trust in Werner Koch is gone". Can you elaborate?
OP is complaining about GPG team rejecting issues with "wontfix" statuses.
To be frank, at this point, GPG has been a lost cause for basically decades.
People who are serious about security use newer, better tools that replace GPG. But keep in mind, there’s no “one ring to rule them all”.
What are those better tools? I've been broadly looking into this space, but never ventured too deep.
https://www.latacora.com/blog/2019/07/16/the-pgp-problem/#th... lists a bunch of them.
> Encrypting email
> Don't.
https://www.latacora.com/blog/2019/07/16/the-pgp-problem/#en...
I’m not sure I completely agree here. For private use, this seems fine. However, this isn’t how email encryption is typically implemented in an enterprise environment. It’s usually handled at the mail gateway rather than on a per-user basis. Enterprises also ensure that the receiving side supports email encryption as well.
edit: formatting
There's like one or two use cases where encrypting email could work. The best case I've come across--Bugzilla has the ability to let the user upload a public key to encrypt emails for updates to non-public bugs. It's not a big use case--pretty much the intersection of "must use email" and "can establish identity out of band," which does not describe most communication that uses email. (As tptacek notes in a sibling comment, you pretty much have to limit this to one-and-done stuff too, not anything that's going to be in an ongoing discussion, because leaks via unencrypted replies are basically guaranteed).
Your mail either needs to be encrypted reliably against real adversaries or it doesn't. A private emailing circle doesn't change that. If the idea here is, a private group of friends can just agree never to put anything in their subjects, or to accidentally send unencrypted replies, I'll just say I ran just such a private circle at Matasano, where we used encrypted mail to communicate about security assessment projects, and unencrypted replies happened.
> Your mail either needs to be encrypted reliably against real adversaries or it doesn't.
It is; GPG takes care of that.
> If the idea here is, a private group of friends can just agree never to put anything in their subjects, or to accidentally send unencrypted replies
That’s not what I’m talking about. It’s an enterprise: you cannot send non-encrypted emails from your work mail account; the gateway takes care of it. It has many rules, including ones based on the sender and recipient.
Surely, someone can print the mail and carry it out of the company’s premises, but at this point it’s intentional and the cat’s already out of the bag.
Even my doctor's office and local government agencies support PGP encrypted emails, and refuse to send personal data via unencrypted email, but tech nerds still claim no one can use it?
In general the userbase here is startuppers, they hate distributed solutions and love centralisation.
s/tech nerds/Arm-chair self-proclaimed cryptographers here on HN/
Sequoia, for example, has been doing a great job and implements the latest version of the standard, which brings a lot of the cryptography up to date.
I'm yet to finish watching the talk, but it starts with them confirming the demo fraudulent .iso with Sequoia as well (they call it out by name), so this really makes me think. :)
Sequoia hasn't fixed the attack from the beginning of the talk, the one where they convert between cleartext and full signature formats and inject unsigned bytes into the output because of the confusion.
The latest version of a bad standard is still bad.
This page is a pretty direct indicator that GPG's foundation is fundamentally broken: you're not going to get to a good outcome trying to renovate the 2nd story.
That's just not true. Nothing on this page is a problem with the current standard; everything on this page stems from the outdated parts of the old standard.
So then why do a bunch of these affect Sequoia as well?
ssh or minisign for signing, age for file encryption
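A sketch of what the signing half of that toolbox looks like with minisign (file names are examples):

```sh
minisign -G                                   # one-time keypair generation
minisign -Sm release.tar.gz                   # sign; writes release.tar.gz.minisig
minisign -Vm release.tar.gz -p minisign.pub   # verify with the public key
```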
There are people who use GPG for more than that. For those who are fine with just those two features, sure. Heck, you can encrypt with "openssh"; no need for age. :D I have a bash function for encryption and decryption!
Those people should perhaps ponder if it’s a reasonable thing to insist on using this broken standard/tool in 2025.
Yeah, well, I wish I could convince people to use 2-4 different tools when one does it "just fine".
I thought the whole unix philosophy was to have a bunch of tools that each do one thing well, and to compose them into the workflow you want.
And I thought most projects would be GPL-licensed by now, but alas.
The gpg.fail page mentions minisign vulns too.
> To be frank, at this point, GPG has been a lost cause for basically decades.
Why do high-profile projects, such as Linux and QEMU, still use GPG for signing pull requests / tags?
https://docs.kernel.org/process/maintainer-pgp-guide.html
https://www.qemu.org/docs/master/devel/submitting-a-pull-req...
Why does Fedora / RPM still rely on GPG keys for verifying packages?
This is a staggering ecosystem failure. If GPG has been a known-lost cause for decades, then why haven't alternatives ^W replacements been produced for decades?
Let's not conflate GPG and PGP-in-general. RPM doesn't use GPG, it uses Sequoia PGP.
GPG is what GP is referring to as a lost cause. Now, it can be debated whether PGP-in-general is a lost cause too, but that's not what GP is claiming.
> it can be debated whether PGP-in-general is a lost cause too, but that's not what GP is claiming
It is though what both the fine article, and tptacek in these comments, are claiming!
Werner Koch from GnuPG recently (2025-12-26) posted this on their blog: https://www.gnupg.org/blog/20251226-cleartext-signatures.htm...
Archive link: https://web.archive.org/web/20251227174414/https://www.gnupg...
This feels pretty unsatisfying: something that’s been “considered harmful” for three decades should be deprecated and then removed in a responsible ecosystem.
(PGP/GPG are of course hamstrung by their own decision to be a Swiss Army knife/only loosely coupled to the secure operation itself. So the even more responsible thing to do is to discard them for purposes that they can’t offer security properties for, which is the vast majority of things they get used for.)
Well python discarded signing entirely so that's one way to solve it :)
Both CPython and distributions on PyPI are more effectively signed than they were before.
(I think you already know this, but want to relitigate something that’s not meaningfully controversial in Python.)
Being signed by some entity which is not the author is hardly more effective.
(I think you already know this as well)
It is, in fact, signed by the author. It's just a PKI, so you intermediate trust in the author through an authority.
This is exactly analogous to the Web PKI, where you trust CAs to identify individual websites, but the websites themselves control their keypairs. The CA's presence intermediates the trust but does not somehow imply that the CA itself does the signing for TLS traffic.
Not really: uploading via trusted publishers, I don't own any private key, as you probably know, having implemented it yourself I presume.
Trusted Publishing doesn’t involve any signing keys (well, there’s an IdP, but the IdP’s signature is over a JWT that the index verifies, not an end signature). You’re thinking of attestations, which do indeed involve a local ephemeral private key.
Again, I must emphasize that this is identical in construction to the Web PKI; that was intentional. There are good criticisms of PKIs on grounds of centrality, etc., but “the end entity doesn’t control the private key” is facially untrue and sounds more like conspiracy than anything else.
Conspiracy in what way? Can you explain?
On my web server where the certificate is signed by letsencrypt I do have a file which contains a private key. On pypi there is no such thing. I don't think the parallel is correct.
With Let’s Encrypt, your private key is (typically) rotated every 90 days. It’s kept on disk because 90 days is too long to reliably keep a private key resident in memory on unknown hardware.
With attestations on PyPI, the issuance window is 15 minutes instead of 90 days. So the private key is kept in memory and discarded as soon as the signing operation is complete, since the next signing flow will create a new one.
At no point does the private key leave your machine. The only salient differences between the two are file versus memory and the validity window, but in both cases PyPI’s implementation of attestations prefers the more ideal thing with respect to reducing the likelihood of local private key disclosure.
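In code, the attestation flow looks roughly like this (a sketch of the general ephemeral-key pattern using the Python `cryptography` package; the names and path are illustrative, and this is not PyPI's actual implementation):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    artifact = open("dist/pkg-1.0.tar.gz", "rb").read()  # hypothetical artifact

    key = Ed25519PrivateKey.generate()  # ephemeral; never touches disk
    signature = key.sign(artifact)
    public_key = key.public_key()       # bound to your OIDC identity by the CA
                                        # in a short-lived certificate
    del key                             # discarded; the next upload makes a new one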
No? With Let's Encrypt the certificate is rotated, but the private key remains the same; importantly, Let's Encrypt never gets to see it, and everything is logged.
I think you are conflating a CI runner I don't really control with my machine?
I mean, it’s an ephemeral VM that you have root on. You don’t own it, but you control it in every useful sense of the word.
But also, that’s an implementation detail. There’s no reason why PyPI couldn’t accept attestations from local machines (using email identities) using this scheme; it’s just more engineering and design work to determine what that would actually communicate.
It might be worthwhile for someone to do this engineering work; e.g., to make attestations work even for folks that use platforms like Codeberg or self-hosted git.
Yeah, completely agreed. I think there's a strong argument to be made for Codeberg as a federated identity provider, which would allow attestations from their runners.
(This would of course require Codeberg to become an IdP + demonstrate the ability to maintain a reasonable amount of uptime and hold their own signing keys. But I think that's the kind of responsibility they're aiming for.)
GPG is indeed deprecated.
Most people have never heard of it and never used it.
Can you provide a source for this? To my understanding, the GnuPG project (and by extension PGP as an ecosystem) considers itself very much alive, even though practically speaking it’s effectively moribund and irrelevant.
(So I agree that it’s de facto dead, but that’s not the same thing as formal deprecation. The latter is what you do explicitly to responsibly move people away from something that’s not suitable for use anymore.)
Ah. I meant in the de facto sense.
I would be very much surprised if GPG has ever really achieved anything other than allowing crypto nerds to proclaim that things were encrypted or signed. Good for them I guess, but not of any practical importance, unlike SSH, TLS, 7Zip encryption, etc.
They allow some kind of nerd to claim that, but nobody who nerds out on cryptography defends PGP. Cryptographers hate PGP.
This doesn't explain why he decided to WONTFIX what is obviously a parser bug that allows injection of data into output through the headers.
But Werner at this point has a history of irresponsible decisions like this, so it's sadly par for the course by now.
Another particularly egregious example: https://dev.gnupg.org/T4493
Seems to be down? Here's a thread with a summary of exploits presented in the talk: https://bsky.app/profile/filippo.abyssdomain.expert/post/3ma...
Maybe the site is overloaded. But as for the "brb, were on it!!!!" - this page had the live stream of the talk when it was happening. Hopefully they'll replace it with the recording when media.ccc.de posts it, which should be within a couple hours.
> this page had the live stream of the talk when it was happening
As they said, they were on it...
Took me a second but I got your joke
it's online now
Also expect the content referenced in the slides (every "chapter" of the presentation pointed to a URL such as https://gpg.fail/clearsig or https://gpg.fail/minisig and so on)
For anyone relatedly wondering about the "schism", i.e. GnuPG abandoning the OpenPGP standard and doing their own self-governed thing, I found this email particularly insightful on the matter: https://lists.gnupg.org/pipermail/gnupg-devel/2025-September...
> As others have pointed out, GnuPG is a C codebase with a long history (going on 28 years). On top of that, it's a codebase that is mostly uncovered by tests, and has no automated CI. If GnuPG were my project, I would also be anxious about each change I make. I believe that because of this the LibrePGP draft errs on the side of making minimal changes, with the unspoken goal of limiting risks of breakage in a brittle codebase with practically no tests. (Maybe the new formats in RFC 9580 are indeed "too radical" of an evolutionary step to safely implement in GnuPG. But that's surely not a failing of RFC 9580.)
Here is my take on the OpenPGP standards schism:
* https://articles.59.ca/doku.php?id=pgpfan:schism
Nothing has improved and everything has gotten worse since I wrote that. Both factions are sleepwalking into an interoperability disaster. Supporting one faction or the other just means you are part of the problem. The users have to resist being made pawns in this pointless war.
>Maybe the new formats in RFC 9580 are indeed "too radical" of an evolutionary step to safely implement in GnuPG.
Traditionally the OpenPGP process has been based on minimalism and rejected everything without a strong justification. RFC-9580 is basically everything that was rejected by the LibrePGP faction (GnuPG) in the last attempt to come up with a new standard. It contains a lot of poorly justified stuff and some straight up pointless stuff. So just supporting RFC-9580 is not the answer here. It would require significant cleaning up. But again, just supporting LibrePGP is not the answer either. The process has failed yet again and we need to recognize that.
Is anyone else worried that a lot of people coming from the Rust world contribute to free software and mindlessly slap the MIT license on it because it's "the default license"? (Yes, I've had someone say this to me, no joke)
GnuPG, for all its flaws, has a copyleft license (GPLv3), making it difficult to "embrace, extend, extinguish". If you replace it with a project that becomes more successful but has a less protective (for users) license, "we the people" might lose control of it.
Not everything in software is about features.
You are attributing a general trend to a particular language community. I also believe that you are unfairly interpreting "default license" just because you disagree with what they think the default license is. We all know what is meant by this. It just sounds like you think it should be the GPL.
No, you're guessing what I'm thinking. I'm telling you that a person I spoke to TOLD ME verbatim "I chose MIT because it's the default license". I'm not guessing that's what they did; that's what they TOLD ME. Do you understand the concept of literally telling someone something?
> Is anyone else worried that a lot of people coming from the Rust world contribute to free software and mindlessly slap the MIT license on it
Yeah; I actually used to do that too (use the "default license"), but eventually came to the same realisation and have been moving all my projects to full copyleft.
Thank you.
I'm not worried it might be the case. I'm certain that Ubuntu and everyone else replacing GNU stuff with Rust MIT stuff is doing it with the sole purpose of getting rid of copyleft components.
If the new components were GPL licensed there would be less opposition, but we just get called names and our opinions discarded. After all such companies have more effective marketing departments.
I find that this is something reflective of most modern language ecosystems, not just Rust. I actually first started noticing the pervasiveness of MIT on npm.
For me, I am of two minds. On one hand, the fact that billion-dollar empires are built on top of what is essentially unpaid volunteer work does rankle and makes me much more appreciative of copyleft.
On the other hand, most of my hobbyist programming work has continued to be released under some form of permissive license, and this is more of a reality of the fact that I work in ecosystems where use of the GPL isn't merely inconvenient, but legally impossible, and the pragmatism of permissive licenses win out.
I do wish that weak copyleft like the Mozilla Public License had caught on as a sort of middle ground, but it seems like those licenses are rare enough that their use would invite as much scrutiny as the GPL, even if it was technically allowed. Perhaps the FSF could have advocated more strongly for weak copyleft in areas where the GPL was legally barred, but I suppose they were too busy not closing the network hole in the GPLv3 to bother.
I love the MPL and I use it wherever I get the opportunity. IMO it has all the advantages of the GPL and lacks the disadvantages (the viral part) that makes the GPL so difficult to use.
The vast majority of open-source software is written by people whose day job is building empires on top of other open-source software, at zero cost and without releasing modifications, which is harder to do with the GPL.
Which is why I use copyleft licenses when I'm not getting paid
Well then the software needs to have its bugs fixed if it wants to have a chance at longer term survival.
I think that's a feature not a bug for upstream projects encouraging these rewrites.
It's harmful if the license of the rewrite is less protective of users, and the rewrite then ends up being very popular.
Seems like the users are voting with their feet, right? Maybe respect the users wishes and stop preaching what users should be wanting?
Users aren't voting. A few people who work at some huge corporations are making these decisions.
Or maybe the users are just not aware. License flame wars were a thing over 20 years ago; people nowadays may simply not know what can happen to MIT-licensed software.
This, thank you.
[flagged]
Hey, this is a completely unacceptable comment on HN. Please read the guidelines and make an effort to observe them if you want to participate here. We have to ban accounts that do this repeatedly. https://news.ycombinator.com/newsguidelines.html
Obviously I am aware that not all user actions represent choices, but the hypothetical being proposed was specifically in the context of good established free software alternatives existing. In that context, users switching to software with more permissive licenses would imply a choice on the users' part. It is reasonable to assume this choice implies the users value something about the other software more than they value what the GPL incumbent has to offer. Of course such a choice could be motivated by many things, like newer features, a slick website, or the author's marketing, but whatever the case, if the license was not sufficient enticement to stay, this feels significant.
No. You can always take the MIT-licensed source. And GnuPG got used through a CLI “API” anyway.
GnuPG should be extended (incrementally rewritten into something much better and turned into a library) and the original GnuPG should be extinguished.
How would MIT make anyone lose control of it?
The way it works is:
A company adopts some software with a free but not copyleft license. Adopts means they declare "this is good, we will use it".
Developers help develop the software (free of charge) and the company says thank you very much for the free labour.
Company puts that software into everything it does, and pushes it into the infrastructure of everything it does.
Some machines run that software because an individual developer put it there, other machines run that software because a company put it there, sometimes by exerting some sort of power for it to end up there (for example, economic incentives to vendors, like Android).
At some point the company says "you know what, we like this software so much that we're going to fork it, but the fork isn't going to be free or open source. It's going to be just ours, and we're not going to share the improvements we made"
But now that software is already running in a lot of machines.
Then the company says "we're going to tweak the software a bit, so that it's no longer inter-operable with the free version. You have to install our proprietary version, or you're locked out" (out of whatever we're discussing hypothetically. Could be a network, a standard, a protocol, etc).
Developers go "shit, I guess we need to run the proprietary version now. we lost control of it."
This is what happened e.g. with chrome. There's chromium, anyone can build it. But that's not chrome. And chrome is what everybody uses because google has lock-in power. Then google says "oh I'm going to disallow you running the extensions you like, so we can show you more ads". Then they make tweaks to chrome so that websites only get rendered well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
And all of this was initially built by free labour, which google took, from people who thought they were contributing to some commons in a sense.
Copyleft licenses protect against this. Part of the license says: if you use these licenses and you make changes to the software, you have to share the changes as well; you can't keep them for yourself.
> This is what happened e.g. with chrome. There's chromium, anyone can build it. But that's not chrome. And chrome is what everybody uses because google has lock-in power.
Because Google has their attention. You can use chromium, but most people don't and pick the first thing they see. Also, Chrome is a much better name, err, not better but easier to say.
> Then google says "oh I'm going to disallow you running the extensions you like, so we can show you more ads". Then they make tweaks to chrome so that websites only get rendered well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
You and I have a different definition of "forced". But, are you speculating this might happen, or do you have an example of it happening?
> And all of this was initially built by free labour, which google took, from people who thought they were contributing to some commons in a sense.
Do you have an example of a site that works better in chrome, than it does in chromium? I'll even take an example of a site that works worse in the version of chromium before manifest v2 was disabled, compared to whatever version of chrome you choose?
> Copyleft licenses protect against this. Part of the license says: if you use these licenses and you make changes to the software, you have to share the changes as well; you can't keep them for yourself.
Is chromium not still foss? Other than branding, what APIs or features are missing from the FOSS version? You mentioned manifest v3, but I'm using firefox because of it, so I don't find that argument too compelling. I don't think FOSS is worse, I think google is making a bad bet.
Large parts of Chrome are actually GPL AFAIK, which is one reason both Apple and Google made it open source in the first place.
> chrome is what everybody uses because google has lock-in power.
Incorrect. At least on Windows, Chrome is not the default browser, it is the browser that most users explicitly choose to install, despite Microsoft's many suggestions to the contrary.
This is what most pro-antitrust arguments miss. Even when consumers have to go out of their way to pick Google, they still do. To me, this indicates that Google is what people actually want, but that's an inconvenient fact which doesn't fit the prevailing political narrative.
> so that websites only get rendered well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
What is a Chrome API that web developers could possibly implement but that "isn't public?" What would that even mean in this context?
> google says "oh I'm going to disallow you running the extensions you like, so we can show you more ads".
And that could have happened just as well if Chrome was 100% open source and GPL.
Even if you accept the claim that Manifest V3's primary purpose was not increasing user security at face value (and that's a tenuous claim at best), it was perfectly possible for all third-party browsers (notably including Edge, which has 0 dependency on Google's money) to fork Chromium in a way that kept old extensions working. However, open source does not mean that features will magically appear in your software. If Google is the primary maintainer and Google wishes to remove some feature, maintaining that feature in your fork requires upkeep, upkeep that most Chromium forkers were apparently unwilling to provide. This has nothing to do with whether Chrome is open source or not.
>> A some point the company says "you know what, we like this software so much that we're going to fork it, but the fork isn't going to be free or open source. It's going to be just ours, and we're not going to share the improvements we made"
Right. So at that point all those contributing developers are free to fork, and maintain the fork. You have just as much control as you always did.
And of course being MIT or GPL doesn't make a difference; the company is permitted to change the license either way. [1]
So here's the thing, folk are free to use the company product or not. Folk are free to fork or not.
In practice of course the company version tends to win because products need revenue to survive. And OSS has little to zero revenue. (The big revenue comes from, you know, companies who typically sell commercial software.)
Even with the outcome you hypothesize (and clearly that is a common outcome) OSS is still ahead because they have the code up to the fork. And yes, they may have contributed to earn this fork.
But projects are free to change license. That's just built into how licenses work. Assuming that something will be GPL or MIT or whatever [2] forever is on you, not them.
[1] I'm assuming a CLA is in play, because without that your explanation won't work.
[2] yes, I think GPL sends a signal of intention more than MIT, but it's just a social signal; it doesn't mean it can't change. Conversely, making it GPL makes it harder for other developers to adopt it in the first place, since most are working in non-GPL environments.
> Right. So at that point all those contributing developers are free to fork, and maintain the fork. You have just as much control as you always did.
Yep. And we've seen this happen. E.g., MariaDB forked off from MySQL. Illumos forked from Solaris. Etc. It's not a nice thing to have to do, but it's hardly a doomsday situation.
> Is anyone else worried that [...] the Rust world [...] slap on it MIT license because it's [reason you don't like]?
No... I don't think that's how software works. Do you have an example of that happening? Has any foss project lost control of the "best" version of some software?
> Not everything in software is about features.
I mean, I would happily make the argument that the ability for you (the people) to use my work however you want, without needing my permission or having to follow my rules, is a feature. But then, stopping someone from using something in a way you don't like is just another feature of GPL software too, is it not?
I don't know the legals in detail, but I can't imagine that the GPL would do something about how you use it in your home? How is that enforceable?
Again, I don't know the legals, but I think in practical terms this affects companies trying to own a project.
> using something in a way you don't like
You're mischaracterizing what I'm saying. For one thing you're talking about "someone" when I'm talking about "someone with power". Copyleft isn't about two people, one gaining power over the other. It's about lots of people with no power protecting themselves against one entity with a lot of power to impose itself.
> Do you have an example of that happening?
Are you new to HN? Every month there's news of projects trying to wrest power from contributors using various shenanigans. Copyleft protects against a class of such attacks.
E.g. Oracle and OpenOffice, Red Hat and CentOS.
Edit: this is literally on HN right now. Is this your first day here or something? https://old.reddit.com/r/linux/comments/1puojsr/the_device_t...
Not really, gpg isn't something worth losing.
I don't mind gpg. I still use it a lot especially with the private keys on openpgp smartcards or yubikeys.
It's a pretty great ecosystem, most hardware smartcards are surrounded by a lot of black magic and secret handshakes and stuff like pkcs#11 and opensc/openct are much much harder to configure.
I use it for many things but not for email. Encrypted backups, password manager, ssh keys. For some there are other hardware options like fido2 but not for all usecases and not the same one for each usecase. So I expect to be using gpg for a long time to come.
Lots of issues follow the pattern "ANSI escape code inside untrusted text". It feels like XSS, but for the terminal.
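The fix is also XSS-shaped: sanitize untrusted text before it reaches the terminal. A minimal sketch (the regex covers common CSI/OSC sequences, not every escape that exists):

    import re

    # CSI (ESC [ ... final byte), OSC (ESC ] ... BEL/ST), and lone escapes.
    ANSI_ESCAPE = re.compile(
        r"\x1b\[[0-9;?]*[ -/]*[@-~]"    # CSI sequences
        r"|\x1b\].*?(?:\x07|\x1b\\)"    # OSC sequences
        r"|\x1b.",                      # any other two-byte escape
        re.DOTALL,
    )

    def sanitize(untrusted: str) -> str:
        return ANSI_ESCAPE.sub("", untrusted)

    print(sanitize("hello \x1b[31mworld\x1b[0m"))  # prints "hello world"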
https://media.ccc.de/v/39c3-to-sign-or-not-to-sign-practical...
A thru-line of some of the gnarliest vulnerabilities here is PGP's insane packet system, where a PGP message is a practically arbitrary stream of packets, some control and some data, with totally incoherent cryptographic bindings. It's like something in between XMLDSIG (which pulls cryptographic control data out of random places in XML messages according to attacker-controlled tags) and SSL2 (with no coherent authentication of the complete handshake).
The attack on detached signatures (attack #1) happens because GnuPG needs to run a complicated state machine that can put processing into multiple different modes, among them three different styles of message signature. In GPG, that whole state machine apparently collapses down to a binary check of "did we see any data so that we'd need to verify a signature?", and you can selectively flip that predicate back and forth by shoving different packets into the message stream, even if you've already sent data that needs to be verified.
The malleability bug (attack #4) is particularly slick. Again, it's an incoherent state machine issue. GPG can "fail" to process a packet because it's cryptographically invalid. But it can also fail because the message framing itself is corrupted. Those latter non-cryptographic failures are handled by aborting the processing of the message, putting GPG into an unexpected state where it's handling an error and "forgetting" to check the message authenticator. You can CBC-bitflip known headers to force GPG into processing DEFLATE compression, and mangle the message such that handling the message prints the plaintext in its output.
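To make the bitflipping concrete, here's a toy sketch of CBC malleability on its own (using the Python `cryptography` package; this is plain unauthenticated CBC, not GnuPG's actual packet encryption). XORing a byte of one ciphertext block garbles that block's plaintext but makes a precise, targeted change in the next block:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(32), os.urandom(16)
    pt = b"A" * 32  # two 16-byte blocks of known plaintext

    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = bytearray(enc.update(pt) + enc.finalize())

    ct[0] ^= 0x01  # flip one bit in ciphertext block 1

    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    tampered = dec.update(bytes(ct)) + dec.finalize()

    # Block 1 decrypts to garbage, but byte 0 of block 2 is exactly
    # b"A" with its low bit flipped, i.e. b"@": a controlled edit.
    assert tampered[16:17] == b"@"

An authenticated design would reject the tampered message outright; the failure described above is that GPG's error path forgets the authenticator check and still processes the attacker-steered plaintext.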
The formfeed bug (#3) is downright weird. GnuPG has special handling for `\f`; if it occurs at the end of a line, you can inject arbitrary unsigned data, because of GnuPG's handling of line truncation. Why is this even a feature?
Some of these attacks look situational, but that's deceptive, because PGP is (especially in older jankier systems) used as an encryption backend for applications --- Mallory getting Alice to sign or encrypt something on her behalf is an extremely realistic threat model (it's the same threat model as most cryptographic attacks on secure cookies: the app automatically signs stuff for users).
There is no reason for a message encryption system to have this kind of complexity. It's a deep architectural flaw in PGP. You want extremely simple, orthogonal features in the format, ideally treating everything as clearly length-delimited opaque binary blobs. Instead you get a Weird Machine, and talks like this one.
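For contrast, here's what "length-delimited opaque binary blobs" means in practice: a toy sketch (illustrative only, not any real format) where every item is a 4-byte length plus raw bytes, so the parser has no modes to confuse:

    import struct

    def write_blobs(blobs):
        # Each blob: 4-byte big-endian length, then the bytes themselves.
        return b"".join(struct.pack(">I", len(b)) + b for b in blobs)

    def read_blobs(data):
        blobs, off = [], 0
        while off < len(data):
            (n,) = struct.unpack_from(">I", data, off)
            off += 4
            if off + n > len(data):
                raise ValueError("truncated blob")  # fail closed; no mode switch
            blobs.append(data[off:off + n])
            off += n
        return blobs

    assert read_blobs(write_blobs([b"header", b"payload"])) == [b"header", b"payload"]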
Amazing work.
Thank you for this excellent explanation!
I'm working on a multi-sig file authentication solution based on minisign. Does anyone know the developer's response regarding minisign's listed vulnerability? If I'm not mistaken, the authors' responses are not included in the vulnerability descriptions.
AFAICT this is GnuPG specific and not OpenPGP related? Since GnuPG has pulled out of standards compliance anyway there are many better options. Sequoia chameleon even has drop in tooling for most workflows.
They presented critical parser flaws in all major PGP implementations, not just GnuPG, but also Sequoia, minisign and age. But gpg made the worst impression on us. wontfix
Sequoia is mentioned in only one vulnerability, for supporting much longer lines than gpg does. gpg silently truncates and discards long base64 lines; Sequoia does not. So the "vulnerability" is the ability to feed more data to Sequoia, which doesn't have gpg's silent data loss.
In all other cases they only used Sequoia as a tool to build data for demonstrating gpg vulnerabilities.
The vulnerability that opens the talk, where they walk through verifying a Linux ISO's signature and hash and then boot into a malicious image, impacts both GnuPG and Sequoia.
Since when are age or minisign PGP implementations?
They're not, but the flaws they found are independent of PGP. Mainly invalid handling of strings in C and allowing untrusted ANSI codes in terminal output.
The talk title includes "& Friends", for what it's worth.
I think it would be more accurate (and more helpful) to say that the two factions in the OpenPGP standards schism[1] have pulled away from the idea of consensus. There is a fundamental philosophical difference here. The LibrePGP faction (GnuPG) is following the traditional PGP minimalism when it comes to changes and additions to the standard. The RFC-9580 faction (Sequoia) is following a kind of maximalist approach where any potential issue might result in a change/addition.
Fortunately, it turned out that there wasn't anything particularly wrong with the current standards, so we can just stick with them for now and avoid the standards war entirely. Then we will have interoperability across the various implementations. If some weakness comes up that actually requires a standards change then I suspect that consensus will be much easier to find.
[1] https://articles.59.ca/doku.php?id=pgpfan:schism
I'm sure getting a "nothing's particularly wrong with the current standards" vibe from this talk.
Some of these are suggesting that an attacker might trick the victim into decrypting a message before sending to the attacker. If that is really the best sort of attack you can do against PGP then, yeah, that is the kind of vibe you might get.
The talk doesn't even cover anything from the current standard, afaict
I believe that's incorrect but we may be referring to different things as "current".
no, some clearsig issues are a problem in the openpgp standard itself
The specific bugs are with GPG, but a lot of the reason they can exist to begin with is PGP’s convoluted architecture which, IMO, makes these sorts of issues inevitable. I think they are effectively protocol bugs.
This is depressing.
From what I can piece together while the site is down, it seems like they've uncovered 14 exploitable vulnerabilities in GnuPG, most of which remain unpatched. Some of those are apparently met with a refusal to patch by the maintainer. Maybe there are good reasons for this refusal; maybe someone else can chime in on that?
Is this another case of XKCD-2347? Or is there something else going on? Pretty much every Linux distro depends on PGP being pretty secure. Surely IBM & co have a couple of spare developers or spare cash to contribute?
> Surely IBM & co have a couple of spare developers or spare cash to contribute?
A major part of the problem is that GPG’s issues aren’t about cash or developer time. It’s fundamentally a bad design for cryptographic usage. It’s so busy trying to be a generic Swiss Army knife for every possible user or use case that it’s basically made of developer and user footguns.
The way you secure this is by moving to alternative, purpose-built tools. Signal/WhatsApp for messaging, age for file encryption, minisign for signatures, etc.
If by "pretty much every Linux distro depends on PGP being pretty secure" you're referring to its use to sign packages in Linux package managers, it's worth noting that they use PGP in fairly narrowly constrained ways; in particular, the data is often already trusted because it was downloaded over HTTPS from a trusted server (making PGP kind of redundant in some ways). So most PGP vulnerabilities don't affect them.
If there were a PGP vulnerability that actually made it possible to push unauthorized updates to RHEL or Fedora systems, then probably IBM would care, but if they concluded that PGP's security problems were a serious threat then I suspect they'd be more likely to start a migration away from PGP than to start investing in making PGP secure; the former seems more tractable and would have maintainability benefits besides.
> already trusted because it was downloaded over HTTPS from a trusted server (making PGP kind of redundant in some ways)
That's mostly incorrect on both counts. One is that lots of mirrors are still http-only or http by default: https://launchpad.net/ubuntu/+archivemirrors
The other is that if you get access to one of the mirrors and replace a package, it's the signature that stops you. Https is only relevant for mitm attacks.
> they'd be more likely to start a migration away from PGP
The discussions started ages ago:
Debian https://wiki.debian.org/Teams/Apt/Spec/AptSign
Fedora https://lists.fedoraproject.org/archives/list/packaging@list...
Debian and most Debian derivatives have HTTP-only mirrors. Which I've found absolutely crazy for years. Though nobody seems to care. Maybe it'll change this time around.
Though this type of juggling knives is not unique to Linux. AMD and many other hardware vendors ship executables over unencrypted connections for Windows. All just hoping that not a single similar vulnerability or confusion can be found.
Downloading over HTTPS does not help with that (although it can prevent spies from seeing what files you are downloading) unless you can independently verify the server's keys. The certificate is intended to do this, but standard certificate authorities only verify the domain name, and have some other limitations. TLS does have other benefits, but it does a different thing. Using only TLS to verify the packages is not very good, especially with the existing public certificate authorities.
If you only need a specific version and you already know which one it is, then using a cryptographic hash is a better way to verify packages, although that only applies to one specific version of one specific package. So, using an encrypted protocol (HTTPS or any other) alone will not help; it will help in combination with other things, which you will need to do as well to improve the security.
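For the hash route, a minimal sketch (the filename and expected digest here are hypothetical; in practice the digest would come from a checksum file you already trust), streaming the file so large packages don't have to fit in memory:

    import hashlib

    # Hypothetical values: the digest would come from a trusted checksum file.
    EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("package-1.0.tar.gz") != EXPECTED:
        raise SystemExit("hash mismatch: refusing to install")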
Haven't read it since it is down, but based on other comments, it seems to be an issue with cleartext signatures.
I haven't seen those outside of old mailing list archives. Everyone uses detached signatures nowadays, e.g. PGP/MIME for emails.
If I understood their first demo correctly, they verified a fedora iso with a detached signature. The booted iso then printed "hello 39c3". https://streaming.media.ccc.de/39c3/relive/1854
It was a cleartext signature, not a detached signature.
Edit: even better, it was both. There is a signature type confusion attack going on here. I still haven't watched the entire thing, but it seems that, unlike gpg, they do have to specify --cleartext explicitly for Sequoia, so there is no confusion going on in that case.
the writeup is now available and the recording lives at https://media.ccc.de/v/39c3-to-sign-or-not-to-sign-practical...
I don't understand the disappointment expressed here in the maintainers deciding to WONTFIX these security bugs.
Isn't this what ffmpeg did recently? They seemed to get a ton of community support in their decision not to fix a vulnerability
ffmpeg doesn't have a cargo-cult of self-proclaimed "privacy experts" that tell activists and whistleblowers to use their thing instead of other tools cryptographers actually recommend.
Yeah, instead they have a cargo-cult of self-proclaimed OSS contribution experts who harass anyone that critiques or challenges ffmpeg's twitter account.
If mass use of GPG benefited Microsoft, Amazon, Google and all the other assholes, it would be polished, slick, and part of the 9th grade curriculum. They call it "Face ID"; that's the Orwellian shit that makes money, so that's what we get instead. These things take resources; don't blame the projects.
Another related writeup https://www.latacora.com/blog/2019/07/16/the-pgp-problem/
There is some misleading stuff in that article. To save time I made an article to provide my commentary:
* https://articles.59.ca/doku.php?id=pgpfan:tpp
Don't you think it's time to update it, given you start by saying that "If someone, while trying to sell you some high security mechanical system, told you that the system had remained unbreached for the last 20 years you would take that as a compelling argument"?
Because you're clearly presenting it as a defense of PGP on a thread from a presentation clearly delineating breaks in it using exactly the kind of complexity that the article you're responding to predicts would cause it to break.
writeups are online :))
> brb, were on it!!!!
it's back up!
hug of death?
Nope. Not yet enabled. It was submitted to HN right after the talk, where they promised to make the page public "really soon". We all saw the talk live or on the stream.
gpg.fail fail: "brb, we're on it!"