I don’t understand the obsession with systemd managing everything. I do not want it to manage my logs, NTP, DNS resolution, and I sure as hell don’t want it to manage /home.
I don't want to think about it: just log in and everything is there, where and as I left it. I had this in college in the early 1990s with yp and NFS. However, setting that up is hard even on a dedicated network, let alone getting it to work with a laptop which might not even be connected to a network (as happened to me last night on Amtrak in the middle of nowhere, North Dakota).
As mentioned, back in the day, you’d connect to a terminal server which would connect you to a random host. You’d login to that host (using a shared credential managed by yellow pages, maybe — this was pre-LDAP). Once logged in, something like mountd would mount your home directory from the NFS server and off you go.
Not a lot of these kinds of systems out there today. Curious how a modern one would be managed and secured.
I've been digging into this for years, and it seems the consensus is simply remote desktop of some flavor: mainly VNC derivatives and NX. NoMachine ticks all the boxes; it can manage spawning shells and passing audio and files. But I dunno, I just don't like it that much. It's not at all the same kind of magic that X forwarding is.
We had those terminal servers, but also full workstations that you logged into directly. X forwarding is magic when you can share/use an expensive, powerful computer, but most of the time the local computer is fast and cheap (these days a modern desktop runs circles around a supercomputer of that era).
Well, we had this at university. I think it's just running some protocol to sync user database (yp/nis/ldap/something) and amd to automount home dirs. You want to mount all home dirs on first access, not just who's logged in, unless you want to prevent people sharing files by Unix file permissions. Or mount the whole of /home, but then it has to go through one server.
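That classic setup can still be sketched today with autofs; assuming a hypothetical NFS server `fileserver` exporting `/export/home` (both names invented), a wildcard map mounts any user's home on first access, which also preserves the sharing-via-Unix-permissions behavior mentioned above:

```text
# /etc/auto.master: mount entries under /home on demand
/home  /etc/auto.home  --timeout=300

# /etc/auto.home: wildcard map, any username resolves to its server path
*  -fstype=nfs4,rw  fileserver:/export/home/&
```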
Okay, but systemd can't teleport data. This could let you carry your home directory on a thumbdrive or such, but it's not a synchronization daemon. It kinda sounds like you just want syncthing or the like?
Roaming home directories are a goal of this project. It can work via home on a USB stick. I'm not sure how/if it works with network shares and USB both - something I want, as I have several computers on my desk and laptops I use for travel.
It would also work over NFS, which could have value.
My interest is actually in the opposite direction; I want a single machine with all home directories on its own internal hard disk, but where each user encrypts their home directory separately. That's doable other ways, but homed could be a nice way to do it that works out of the box.
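For that use case, the homectl interface would look roughly like the following. Treat this as an untested command sketch (it needs root and a running systemd-homed, and the user name is invented):

```shell
# Create a user whose home directory is a per-user LUKS-encrypted
# loopback volume under /home.
homectl create alice --storage=luks --real-name="Alice Example"

# Show the user's JSON record, including where the encrypted image lives.
homectl inspect alice
```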
It's easy to understand: RH is IBM, which in turn has the same business owners as MS and Apple, so they are preparing to destroy Linux, making it not a Windows or mainframe killer. Positioning... Holds even if systemd started before the RH buyout.
Works because RH is, or was, the main developing force in open source. Ubuntu or SUSE or Debian or the EU are just a joke in software development. And RH [management] goes rogue... Think: back to the pre-POSIX UNIX wars...
I'm proud of my former distant coworker Russ Allbery, who resigned from Debian over systemd. It was shoved down most distros' throats, often with illiberal force, over the objections of wiser elders and more experienced SAs.
Meanwhile, I want to be able to mount my home directory on an external drive, and have it shared between systems without UID/GID hell.
And,
Have an encrypted home directory, boot the system, and be able to enter my password during boot with my keyboard, which is connected to a thunderbolt dock. Something which has been possible on Mac and Windows for a decade or two.
Systemd-homed is the ONLY way to achieve these (and many others) in Linux.
Criticizing systemd just because “it doesn’t smell like Unix” is all nice and fine, but it ignores real quality-of-life and security features it provides. If you don’t have these use cases, you’re welcome to continue to ignore systemd, but some of us actually want these features.
If you come from any other platform, the idea of needing to look up which version of which flavor of Linux you have to find what specific commands you need to use to do basic things looks insane.
systemd has done leaps and bounds for making Linux platforms look reasonably manageable and standardized.
Wasn't one of the points of multiple distributions to allow experimentation and different philosophies of doing things? If you're going to homogenize things, what's the point of having multiple distributions in the first place?
> systemd has done leaps and bounds for making Linux platforms look reasonably manageable and standardized.
So I've gone from service foo start/stop (which also works on BSD) to systemctl start/stop foo. Yay! (Of course some distros use "ssh" and others "sshd", or "apache2" versus "httpd".)
No it hasn't. For example, going from Raspberry Pi OS to stock Debian I have to be mindful of where NetworkManager is used in place of systemd-networkd. I have to be mindful of what version of systemd is being used. Same hassle as before, but now with less POSIX and more binary blobs.
> If you come from any other platform, the idea of needing to look up which version of which flavor of Linux you have to find what specific commands you need to use to do basic things looks insane
Right, which is why Windows home edition is managed via GPO and iOS exposes the same APIs as macOS. /s
Different operating systems are different, even if they share a kernel.
The free software world needed an API for managing system resources. Poettering came along and provided one. It's not perfect, but it solved problems. The resistance to systemd isn't proposing alternative ways of solving these problems. It's instead insisting these problems remain unfixed. Is it any wonder that the anti-systemd camp has become irrelevant?
A lot of the resistance to systemd isn't resistance to the problems being solved, it's resistance to solving the problems with a big ball of interdependent components. As we saw in the xz attack, that's a huge attack surface to consider and the project's general hostility to producing small, focused libraries means that people often depend on it where they shouldn't.
I wouldn't mind the API where it must be different, but all too often he had not-invented-here syndrome and reinvented things that already worked well while fixing what was broken. He also suffers from all-the-world-is-Linux syndrome, so BSD needs to figure out how to solve the problem from scratch (and mostly has not).
Worked great already? Like 3,000-line shell scripts parsing dependency information out of comments? That's what I mean about insisting problems not be solved.
When systemd came out people said you were a conspiracy theorist if you said it would be anything more than an init system. Now in the year of our Lord 2025 we're discussing "systemd-homed" as if that should ever be a real thing.
It seems that a lot of what systemd is doing (over and above being 'just' an init system) is focused on standalone systems.
And that's fine and all for some folks, but for those of us sysadmin-ing servers/VMs, it's all sorts of annoying that these subsystems exist for dynamic environments (laptops using networkd/resolved/etc. to handle moving around) when I just want my system to be static and not have (e.g.) resolv.conf futzed around with (I've taken to doing a chattr +i on the file quite often).
> (I've taken to doing a chattr +i on the file quite often)
That requires ext4 AFAIK, whereas many systems use XFS, BTRFS, or ZFS. I've done this as well on several files I don't want mucked with, when I can't simply disable the systemd daemons. For me, ext4 works best.
Hm. So then don't use (systemd-)resolved? Alternatively, I've accepted that it's built to work with a decades-old ecosystem and that resolv.conf is effectively a generated, read-only-except-resolved file. And in turn, resolved's configuration is perfectly static and equally immutable. /shrug
My* only problem is that it's pretty good at what it does, and can be... more helpful than you might like at providing consistent global DNS resolution. For example, its use of dbus makes processes in `netns`es susceptible to leaking DNS requests. Though arguably I should've been going more full-container than just a netns, maybe, given my expectations.
The number of ways and things that twiddle with /etc/resolv.conf nowadays is quite unreasonable.
Changing the IP address also used to be as simple as editing a file, but now there's networkd sometimes, and NetworkManager other times, and netplan too, and perhaps make sure your YAML file is indented with the right number of spaces in the right place…
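For illustration, here's roughly what that YAML looks like; a hypothetical netplan static config (interface name and addresses invented), where the indentation is exactly the part that bites:

```yaml
# Hypothetical /etc/netplan/01-static.yaml
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.168.1.50/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```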
> The number of ways and things that twiddle with /etc/resolv.conf nowadays is quite unreasonable.
In many years of daily-driving unix-likes and being an amateur and professional sysadmin, I think resolv.conf is the only time I've ever actually used `chattr +i`.
The big thing appears to be moving the user metadata into the home directory itself rather than it being around the system, and enabling home folder encryption, which has been like... a single button press feature on Windows since like Windows XP. Sounds like a step forward.
I'm slightly confused. I understand the appeal to putting user configuration inside the home directory, and I definitely approve of encrypting each home directory individually, but doesn't doing both of them together mean that you can't read the user data until it's been decrypted?
The encrypted volume has an encrypted copy of the `~/.identity` file in its metadata fields.
The same key which encrypts the volume decrypts the metadata, but they use different IVs.
You could assume that on most systems the key would be secured with the TPM, so this won't be much of a big deal to the user; otherwise, when they try to log in, it would prompt for this password first.
> * I understand the appeal to putting user configuration inside the home directory* […]
I'm not sure I understand the appeal. What does "putting user configuration inside the home directory" mean in this context? Is there a file with the UID, GIDs (primary, secondaries), GECOS, etc?
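Judging by the USER_RECORD spec, the answer seems to be yes: a JSON record carrying the classic passwd fields (UID, GID, a GECOS-style real name) plus homed-specific ones. A minimal illustrative record, with field names taken from the spec and invented values:

```json
{
  "userName": "alice",
  "realName": "Alice Example",
  "uid": 60201,
  "gid": 60201,
  "memberOf": ["wheel"],
  "homeDirectory": "/home/alice",
  "shell": "/bin/bash",
  "storage": "luks"
}
```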
You can't use d-bus for this because d-bus isn't available early enough, relies on user accounts, and can't enumerate through large sets of objects with optional filtering, so they had to create and invoke the completely separate "Varlink". Which is _closer_ to the traditional Unix/Plan9 service model without actually achieving it meaningfully.
The infamous part of d-bus, that it helps inject arbitrary binary payloads into existing text protocols, is now reversed in varlink: it takes what should be arbitrary binary payloads (user records, certificates, etc.) and instead forces you to manage them as JSON objects. Signing and conveying signatures for these objects is predictably painful.
"The signature section contains one or more cryptographic signatures of a reduced version of the user record. This is used to ensure that only user records defined by a specific source are accepted on a system, by validating the signature against the set of locally accepted signature public keys. The signature is calculated from the JSON user record with all sections removed, except for regular, privileged, perMachine. Specifically, binding, status, signature itself and secret are removed first and thus not covered by the signature. This section is optional, and is only used when cryptographic validation of user records is required (as it is by systemd-homed.service for example)."
This all seems very brittle and I don't see the kinds of testing that would project confidence in this system. Good luck to all who use this and trust it.
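For what it's worth, the reduction rule quoted above amounts to key-stripping on the JSON object; a simplified sketch (not the reference implementation, and the field values are invented):

```javascript
// Reduce a user record to the fields covered by the signature:
// the regular fields plus "privileged" and "perMachine" are kept;
// "binding", "status", "signature" and "secret" are dropped first.
function reduceForSignature(record) {
  const { binding, status, signature, secret, ...covered } = record;
  return covered;
}

// Field names follow systemd.io/USER_RECORD; values are placeholders.
const record = {
  userName: "alice",
  uid: 60201,
  privileged: {},   // covered by the signature
  perMachine: [],   // covered by the signature
  binding: {},      // excluded: machine-specific data
  status: {},       // excluded: runtime state
  signature: [],    // excluded: the signature itself
  secret: {}        // excluded: never signed
};

console.log(Object.keys(reduceForSignature(record)).join(","));
// → userName,uid,privileged,perMachine
```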
Ultimately IPC, service discovery, and security all need to be co-designed to work together. Systemd is unfortunately trying to work in an ecosystem where it does not have the luxury of a clean first-principles approach. Generally I would argue moving off of dbus and onto varlink is a step in the right direction. I'm not sure what you think is brittle about the approach of using IPC and a schema for the data sent over it. If they had gone in the other direction and mandated gRPC a la HTTP instead, would that have been "less brittle"?
That IMO does not, in any respect, excuse the signature design. This JSON+blobs design is totally new other than needing to support a handful of preexisting fields. And it’s very much the case that a lot of the record is trusted in the sense that loading malicious data could compromise the integrity or availability of the machine.
So structure it like that! Have a whole file that is signed or otherwise integrity-checked in its entirety. Have another file with fields that are per-(user, machine) and integrity-check that. "Integrity-check" means that you validate the binary contents of the file before you even attempt to parse it, and then you parse the literal bytes that you checked.
It’s not the nineties anymore, and architects should know better.
> Please note that this specification assumes that JSON numbers may cover the full integer range of -2^63 … 2^64-1 without loss of precision (i.e. INT64_MIN … UINT64_MAX). Please read, write and process user records as defined by this specification only with JSON implementations that provide this number range.
It's not the default, but JS is capable of this. (JavaScript has a big integer type nowadays, and the JSON.parse function's "reviver" parameter can yield BigInts, but you'd need to specify such a reviver.)
Something like this, I think (note that the reviver's third `context` argument, which exposes the raw source text, is a fairly recent addition, so it needs a current engine):

JSON.parse(
  /* just a test input JSON */
  `{"a": 1.1, "b": 22222222222222222222222222222222, "c": {"d": 999999999999999999999999}}`,
  /* a reviver that returns a BigInt for any integer-valued number;
     the -? also admits the negative integers the spec's range needs */
  (key, value, context) => {
    if (typeof value === "number" && /^-?[0-9]+$/.test(context.source)) {
      return BigInt(context.source);
    }
    return value;
  }
);
And from KDE as well, through Qt's Qt Declarative libraries that use QML.
Judging by the Qt source, if the internal JS runtime's JSON parser is used then it will not support the full range of 64-bit integers, since the double floating-point type is used, which loses precision for any integer x where abs(x) > 2^53.
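That cutoff is a property of IEEE 754 doubles rather than of Qt specifically, and is easy to demonstrate:

```javascript
// A double has a 53-bit significand, so integers are exact only up to
// 2^53; beyond that, adjacent integers collapse onto the same value.
console.log(Number.MAX_SAFE_INTEGER);   // 9007199254740991, i.e. 2^53 - 1
console.log(2 ** 53 === 2 ** 53 + 1);   // true: 2^53 + 1 rounds to 2^53
```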
For those curious about systemd-homed, LWN had a writeup about a discussion in Fedora about it, which provides a good summary of the pros and cons of systemd-homed.
Their docs don't even mention homes mounted over NFS, or LDAP-managed users. This is the same sort of pathetically marginal garbage that damns Snaps, which somehow assumes that large environments put all user directories in /home - even though that is NOT a standard and doesn't scale worth a damn.
Systemd is a curse, the TRON MCP that doesn't even seem to have a system for alternate solutions to compete. Before systemd we saw a more lively environment of alternatives for each service area, but systemd strangles this with a collection of mediocrities and a lack of foresight.
Looking through the doc at https://systemd.io/HOME_DIRECTORY/ shows an entire webpage built of ideas many would rightfully reject: some defy standards, some defy common sense and best practices, some fail to scale, add arbitrary constraints, or have other problems.
I've been a sysadmin at large sites before. systemd-homed looks a lot like unusable trash.
I want something to manage home though. I shouldn't be unable to access my files just because I'm on a different computer from last time.
I'm not sure if that is what it does, but I think that is a goal.
Manage it do what now? Copy files between computers? Like rsync?
Or… like iCloud? No on that last one, having Linux require some server seems to defeat the point. Why not a Mac then?
Yea I’ve been curious about this.
I remember the systemd folks talking about this thumb drive portability of your homedir. Seems very niche. Is that the only advantage of homed?
I see the appeal. Imagine all the points of failure that are spread out all over the place in classical Unix.
Systemd kind of combines these all into one place so there's a single point of failure. Now there's just one of them, so it's DRY.
I begrudgingly accept systemd for service control, but that’s about it.
The other “points of failure” you mention are all incredibly well-tested and battle-hardened.
> Systemd kind of combs these all into one place so there's a single point of failure. Now there's just one of them, so it's DRY.
A single point of failure is… good?
(Or is your statement an example of Poe's law?)
Yeah big nope on this. Needs to be separate, if it’s useful at all, not systemd “separate”.
I don’t run systemd at all, to be safe.
Stay safe!
> Systemd-homed is the ONLY way to achieve these (and many others) in Linux.
It absolutely is not. [Full] disk encryption has been fine for... at least 15 years, probably more. Sharing a home directory requires consistent UID/GID, but that's not hard even fully manually (which is fine if you're just one person).
Maybe I wasn’t clear.
Yes, Linux has had FDE for a long time, but with traditional FDE using LUKS etc., you cannot use accessories such as a thunderbolt keyboard during the boot process to enter the password to unlock the disk.
Which is a problem if you’re like me and want to just connect your Linux laptop to a thunderbolt dock and use the keyboard attached to it.
A problem which systemd-homed solves.
You could have been a little clearer, but the main communication problem is that I didn't know that was a problem that could exist so it was easy to not follow. After a few minutes of confused web searching, I found https://fedoramagazine.org/thunderbolt-how-to-use-keyboard-d... which appears to describe the problem and some solutions (that don't involve homed, not least because this article predates it). AIUI, the problem is that you have to log in so you can approve the keyboard; that might be a pain point with an encrypted root (albeit it looks solvable), but if it works with homed I struggle to believe that it wouldn't work with any other encrypted home setup. Mostly because that part of homed is mostly a thin wrapper of code and convention over existing stuff.
You're comfortable sharing secret keys between systems?
The keys would definitely be secreted hehehe
> If you come from any other platform, the idea of needing to look up which version of which flavor of Linux you have to find what specific commands you need to use to do basic things looks insane.
As opposed to jumping between IRIX and AIX and Solaris? See Rosetta Stone for Unix:
* https://bhami.com/rosetta.html
* https://bhami.com/unix-rosetta.pdf
That dependency problem needed to be fixed. However, many other things come along with it, and I object to some of them.
> That requires ext4 AFAIK, whereas many systems use XFS, BTRFS, or ZFS.
All three of these support immutable files:
* https://man.archlinux.org/man/xfs.5.en#FILE_ATTRIBUTES
* https://man.archlinux.org/man/btrfs.5.en#Attributes
* 2016 OpenZFS bug where it was broken and fixed: https://github.com/openzfs/zfs/pull/5486
TIL. Thanks!
> With the advent of systemd-homed it might be desirable to convert an existing, traditional user account to a systemd-homed managed one.
As someone unfamiliar with systemd-homed, I have a very basic question: why would someone want (or not want) to do this?
Based on... a web search: https://wiki.archlinux.org/title/Systemd-homed
What is put inside the homedir?
A lot of JSON in a big file.
* https://systemd.io/USER_RECORD/
You'll enjoy the bit about the umask. Yes, this is short on details of where all of the privileged and secret stuff lives.
Your home dir, including your password, is on a USB drive and so can move from machine to machine with all your files.
The continued pathology of systemd.
You can't use D-Bus for this, because D-Bus isn't available early enough, relies on user accounts, and can't enumerate through large sets of objects with optional filtering, so they had to create and invoke the completely separate "Varlink," which is _closer_ to the traditional Unix/Plan 9 service model without meaningfully achieving it.
The infamous part of D-Bus, that it helps inject arbitrary binary payloads into existing text protocols, is now reversed in Varlink: it takes what should be arbitrary binary payloads (user records, certificates, etc.) and instead forces you to manage them as JSON objects. Signing these objects and conveying the signatures is predictably painful.
> The signature section contains one or more cryptographic signatures of a reduced version of the user record. This is used to ensure that only user records defined by a specific source are accepted on a system, by validating the signature against the set of locally accepted signature public keys. The signature is calculated from the JSON user record with all sections removed, except for regular, privileged, perMachine. Specifically, binding, status, signature itself and secret are removed first and thus not covered by the signature. This section is optional, and is only used when cryptographic validation of user records is required (as it is by systemd-homed.service for example).
This all seems very brittle and I don't see the kinds of testing that would project confidence in this system. Good luck to all who use this and trust it.
Ultimately IPC, service discovery, and security all need to be co-designed to work together. Systemd is unfortunately trying to work in an ecosystem where it does not have the luxury of a clean first-principles approach. Generally I would argue moving off of D-Bus and onto Varlink is a step in the right direction. I'm not sure what you think is brittle about the approach of using IPC and a schema for the data sent over it. If they had gone in the other direction and mandated gRPC (à la HTTP) instead, would that have been "less brittle"?
That IMO does not, in any respect, excuse the signature design. This JSON+blobs design is totally new other than needing to support a handful of preexisting fields. And it’s very much the case that a lot of the record is trusted in the sense that loading malicious data could compromise the integrity or availability of the machine.
So structure it like that! Have a whole file that is signed or otherwise integrity-checked in its entirety. Have another file with fields that are per-(user, machine) and integrity-check that. "Integrity-check" means that you validate the binary contents of the file before you even attempt to parse it, and then you parse the literal bytes that you checked.
It’s not the nineties anymore, and architects should know better.
> Please note that this specification assumes that JSON numbers may cover the full integer range of -2^63 … 2^64-1 without loss of precision (i.e. INT64_MIN … UINT64_MAX). Please read, write and process user records as defined by this specification only with JSON implementations that provide this number range.
Wait, so... not JavaScript?
It's not the default, but JS is capable of this. (JavaScript has a big-integer type nowadays, and JSON.parse's "reviver" parameter should, I think, be capable of producing BigInts, but you'd need to supply such a reviver.)
Something like this, I think:
Interesting catch. Don't many desktop Linux utilities from the GNOME project use JavaScript?
And from KDE as well, through Qt's Qt Declarative libraries that use QML.
Judging by the Qt source, if the internal JS runtime's JSON parser is used then it will not support the full range of 64-bit integers, since larger integers are stored in the double floating-point type, which can only represent integers exactly up to 2^53.
Most (all?) systems running systemd are going to have a javascript interpreter as a polkit dependency anyway.
For those curious about systemd-homed, lwn had a writeup about a discussion in Fedora about it which provides a good summary of the pros and cons of systemd-homed.
https://lwn.net/Articles/995915/
<tirade style="justified">
F*k systemd, and systemd-homed along with it.
Their docs don't even mention homes mounted over NFS, or LDAP-managed users. This is the same sort of pathetically marginal garbage that damns Snaps, which somehow assume that large environments put all user directories in /home, even though that is NOT a standard and doesn't scale worth a damn.
Systemd is a curse, the TRON MCP that doesn't even seem to have a system for alternate solutions to compete. Before systemd we saw a livelier environment of alternatives for each service area, but systemd strangles this with a collection of mediocrities and a lack of foresight.
Looking through the doc at https://systemd.io/HOME_DIRECTORY/ shows an entire webpage built on ideas many would rightfully reject: some defy standards, some defy common sense and best practices, some fail to scale, some add arbitrary constraints, and others have other problems.
I've been a sysadmin at large sites before. systemd-homed looks a lot like unusable trash.
</tirade>