Hmm. I have just spent a week or two getting my mind around systemd, so I will add a few comments….
** The degree of parallelism achieved by systemd does very good things to start up times. (Yes, that is a critical parameter, especially in the embedded world)
** Socket activation is very nifty / useful.
** There is a lot of learning that has gone into things like dbus: https://lwn.net/Articles/641277/ While there are things I really don’t like about dbus (cough, xml, cough)…. I respect the hard-earned experience encoded into it.
** Systemd’s use of cgroups is actually a very very nifty feature in creating rock solid systems, systems that don’t go sluggish because a subsystem is rogue or leaky. (But I think we are all just learning to use it properly)
** The thought and effort around “playing nice” with distro packaging systems via “drop in” directories is valuable. Yup, it adds complication, but packaging is real and you need a solution.
** The thought and complication around generators to aid the transition from sysv to systemd is also vital. Nobody can upgrade tens of thousands of packages in one go.
TL;DR: Systemd actually gives us a lot of very very useful and important stuff. Any competing system with the faintest hope of wide adoption has a pretty high bar to meet.
The biggest sort of “WAT!?” moment for me around systemd is that it creates its own entirely new language… one that is remarkably weaker even than shell. And occasionally you find yourself explicitly invoking, yuck, shell, to get stuff done.
Personally I would have preferred it to be something like guile with some addons / helper macros.
I actually agree with most of what you’ve said here, Systemd is definitely trying to solve some real problems and I fully acknowledge that. The main problem I have with Systemd is the way it just subsumes so much and it’s pretty much all-or-nothing; combined with that, people do experience real problems with it and I personally believe its design is too complicated, especially for such an essential part of the system. I’ll talk about it a bit more in my blog (along with lots of other things) at some stage, but in general the features you list are good features and I hope to have Dinit support eg socket activation and cgroups (though as an optional rather than mandatory feature). On the other hand I am dead-set that there will never be a dbus-connection in the PID 1 process nor any XML-based protocol, and I’m already thinking about separating the PID 1 process from the service manager, etc.
Please don’t. It is a lot easier to turn machine-readable / binary logs to human-readable than the other way around, and machines will be processing and reading logs a lot more than humans.
Human-readable doesn’t mean freeform. It can be machine-readable too. At my last company, we logged everything as date, KV pairs, and only then freeform text. It had a natural mapping to JSON and protocol buffers after that.
https://github.com/uber-go/zap This isn’t what we used, but the general idea.
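To make that shape concrete, a minimal made-up example (field names invented; jq used only for pretty-printing):
# emitted line: timestamp first, then key=value pairs, then free text
# 2017-10-06T09:15:02Z level=info service=payments user_id=42 charge accepted
# the same record maps naturally onto JSON:
echo '{"ts":"2017-10-06T09:15:02Z","level":"info","service":"payments","user_id":42,"msg":"charge accepted"}' | jq .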
Yeah, you can do that. But then it becomes quite a bit harder to sign, encrypt, or index logs. I still maintain that going binary->human readable is more efficient, and practical, as long as computers do more processing on the logs than humans do.
Mind you, I’m talking about storage. The logs should be reasonably easy for a human to process when emitted, and a mapping to a human-readable format is desirable. When stored, human-readability is, in my opinion, a mistake.
You make good points. It’s funny, because I advocated hard for binary logs (and indeed stored many logs as protocol buffers on Kafka; only on the filesystem was it text) from systems at $dayjob-1, but when it comes to my own Linux system it’s a little harder for me to swallow. I suppose I’m looking at it from the perspective of an interactive user and not a fleet of Linux machines; on my own computer I like to be able to open my logs as standard text without needing to pipe it through a utility.
I’ll concede the point though: binary logs do make a lot more sense as building blocks if they’re done right and have sufficient metadata to be better than the machine-readable text format. If it’s a binary log of just date + facility + level + text description, it may as well have been a formatted text log.
So long as they accumulate the same amount of useful info…. and are machine parsable, sure.
journalctl spits out human readable or json or whatever.
I suspect achieving near the same information density / speed as journalctl with plain old ascii will be a hard ask.
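For reference, both renderings come from the same journal using standard journalctl options (the unit name here is just an example):
journalctl -u ssh.service -n 3                 # human-readable text
journalctl -u ssh.service -n 3 -o json-pretty  # the same entries as structured JSON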
In my view I want both. Human and machine readable… how that is done is an implementation detail.
I’m sort of curious about which “subsume everything” bits are hurting you in particular.
For example, subsuming the business of mounting is fairly necessary since these days the order in which things get mounted relative to the order in which various services are run is pretty inexorable.
I have doubts about how much of the networkd / resolved should be part of systemd…. except something that collaborates with the startup infrastructure is required. ie. I suspect your choices in dinit will be slightly harsh…. modding dinit to play nice with existing network managers or modding existing network managers to play nice with dinit or subsuming the function of network management or leaving fairly vital chunks of functionality undone and undoable.
Especially in the world of hot plug devices and mobile data….. things get really really hairy.
I am dead-set that there will never be a dbus-connection in the PID 1
You still need a secure way of communicating with pid 1….
That said, systemd process itself could perhaps be decomposed into more processes than it currently is.
However as I hinted…. there are things that dbus gives you, like bounded trust between untrusted and untrusting and untrustworthy programs, that is hard to achieve without reimplementing large chunks of dbus….
…and then going through the long and painful process of learning from your mistakes that dbus has already gone through.
Yes, I truly hate xml in there…. but you still need some security sensitive serialization mechanism in there.
ie. Whatever framework you choose will still need to enforce the syntactic contract of the interface so that an untrusted and untrustworthy program cannot achieve a denial of service or escalation of privilege through abuse of a serialized interface.
There are other things out there that do that (eg. protobuffers, cap’n’proto, …), but then you’re still in a world where desktops and bluetooth and network managers and …….. need to be rewritten to use the new mechanism.
For example, subsuming the business of mounting is fairly necessary since these days the order in which things get mounted relative to the order in which various services are run is pretty inexorable.
systemd’s handling of mounting is beyond broken. It’s impossible to get bind mounts to work successfully on boot, nfs mounts don’t work on boot unless you make systemd handle it with autofs and sacrifice a goat, and last week I had a broken mount that couldn’t be fixed. umount said there were open files, lsof said none were open. Had to reboot because killing systemd would kill the box anyway.
It doesn’t even start MySQL reliably on boot either. Systemd is broken. Stop defending it.
For example, subsuming the business of mounting is fairly necessary since these days the order in which things get mounted relative to the order in which various services are run is pretty inexorable.
There are a growing number of virtual filesystems that Linux systems expect or need to be mounted for full operation - /proc, /dev, /sys and cgroups all have their own - but these can all be mounted in the traditional way: by running ‘/bin/mount’ from a service. And because it’s a service, dependencies on it can be expressed. What Systemd does is understand the natural ordering imposed by mount paths as implicit dependencies between mount units, which is all well and good but which could also be expressed explicitly in service descriptions, either manually (how often do you really change your mount hierarchies…) or via an external tool. It doesn’t need to be part of the init system directly.
(Is it bad that systemd can do this? Not really; it is a feature. On the other hand, systemd’s complexity has I feel already gotten out of hand. Also, is this particular feature really giving that much real-world benefit? I’m not convinced).
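To make the “mount as a service” idea concrete, here is a minimal sketch of such a service script (hypothetical, not Dinit’s or any distro’s actual file; the cgroup2 line assumes a reasonably recent kernel). Other services would simply declare a dependency on it:
#!/bin/sh
# mount-virtual-fs: mount the virtual filesystems the rest of the system expects
set -e
mountpoint -q /proc          || mount -t proc     proc     /proc
mountpoint -q /sys           || mount -t sysfs    sysfs    /sys
mountpoint -q /dev           || mount -t devtmpfs devtmpfs /dev
mountpoint -q /sys/fs/cgroup || mount -t cgroup2  cgroup2  /sys/fs/cgroup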
I suspect your choices in dinit will be slightly harsh…. modding dinit to play nice with existing network managers or modding existing network managers to play nice with dinit
At this stage I want to believe there is another option: delegating Systemd API implementation to another daemon (which communicates with Dinit if and as it needs to). Of course such a daemon could be considered as part of Dinit anyway, so it’s a fine distinction - but I want to keep the lines between the components much clearer (than I feel they are in Systemd).
I believe in many cases the services provided by parts of Systemd don’t actually need to be tied to the init system. Case in point, elogind has extracted the logind functionality from systemd and made it systemd-independent. Similarly there’s eudev, the Gentoo fork of the udev device node management daemon which extracts it from systemd.
You still need a secure way of communicating with pid 1…
Right now, that’s via root-only unix socket, and I’d like to keep it that way. The moment unprivileged processes can talk to a privileged process, you have to worry about protocol flaws a lot more. The current protocol is compact and simple. More complicated behavior could be wrapped in another daemon with a more complex API, if necessary, but again, the boundary lines (is this init? is this service management? or is this something else?) can be kept clearer, I feel.
Putting it another way, a lot of the parts of Systemd that required a user-accessible API just won’t be part of Dinit itself: they’ll be part of an optional package that communicates with Dinit only if it needs to, and only by a simple internal protocol. That way, boundaries between components are more clear, and problems (whether bugs or configuration issues) are easier to localise and resolve.
On the other hand I am dead-set that there will never be a dbus-connection in the PID 1 process nor any XML-based protocol
Comments like this make me wonder what you actually know about D-Bus and what you think it uses XML for.
I suppose you are hinting that I’ve somehow claimed D-Bus is/uses an XML-based protocol? Read the statement again…
Systemd solves (or attempts to) some actually existing problems, yes. It solves them from a purely Dev(Ops) perspective while completely ignoring that we use Linux-based systems in big part for how flexible they are. Systemd is a very big step towards making systems we use less transparent and simple in design. Thus, less flexible.
And if you say that’s the point: systems need to get more uniform and less unique!.. then sure. I very decidedly don’t want to work in an industry that cripples itself like that.
Hmm. I strongly disagree with that.
As a simple example, in sysv your only “targets” were the 7 runlevels. Pretty crude.
Alas the sysv simplicity came at a huge cost. Slow boots since it was hard to parallelize, and Moore’s law has stopped giving us more clock cycles… it only gives us more cores these days.
On my ubuntu xenial box I get…
locate target | grep -E '^/(run|etc|lib)/.*.target$' | grep -v wants | wc
61      61    2249
(Including the 7 runlevels for backwards compatibility)
ie. Much more flexibility.
ie. You have much more flexibility than you ever had in sysv…. and if you need to drop into the full flexibility of shell (or whatever)…. nothing is stopping you.
It’s actually very transparent…. the documentation is actually a darn sight better than sysv init’s ever was and the source code is pretty readable. (Although at the user level I find I can get by mostly by looking at the .service files and guessing; they’re a lot easier to read than a sysv init script.)
So my actual experience of wrangling systemd on a daily basis is it is more transparent and flexible than what we had before…..
A bunch of the complexity is due to the need to transition from sysv/upstart to systemd.
I can see on my box a huge amount of crud that can just be deleted once everything is converted.
All the serious “Huh!? WTF!?” moments in the last few weeks have been around the mishmash of old and new.
Seriously. It is simpler.
That said, could dinit be even simpler?
I don’t know.
As I say, systemd has invented its own quarter-arsed language for its unit files. Maybe if dinit uses a real language…. (I call shell a half-arsed language)
You are comparing systemd to “sysv”. That’s a false dichotomy that was very aggressively pushed into every conversation about systemd. No. Those are not the only two choices.
BTW, sysvinit is a dumb-ish init that can spawn processes and watch over them. We’ve been using it as more or less just a dumb init for the last decade or so. What you’re comparing systemd to is an amorphous, distro-specific blob of scripts, wrappers and helpers that actually did the work. Initscripts != sysvinit. Insserv != sysvinit.
Ok, fair cop.
I was using sysv as a hand waving reference to the various flavours of init /etc/init.d scripts, including upstart that Debian / Ubuntu have been using prior to systemd.
My point is not to say systemd is the greatest and end point of creation… my point is it’s a substantial advance on what went before (in yocto / ubuntu / debian land) (other distros may have something better than that I haven’t experienced.)
And I wasn’t seeing anything in the dinit aims and goals list yet that made me say, at the purely technical level, that the next step is on its way.
Personally I would have preferred it to be something like guile with some addons / helper macros.
So, https://www.gnu.org/software/shepherd/ ?
Ah, no, you probably meant just the language within systemd. But adding systemd-like functionality to The Shepherd would do that. I think running things in containers is in, or will be, but maybe The Shepherd is too tangled up in GuixSD for many people’s use cases.
Just use runit. It’s dead simple, actually documented, actually used in production, BSD licensed, and so on. I use it on my work computer with no problems. It’s no bullshit, no bloat, you don’t have to “learn runit” to use it and get exactly what you want.
I’m aware of runit. It does seem pretty nice, but there are a few things about it that bother me. I don’t want to get into specifics here since it can so easily become a matter of one opinion vs the other, but I’ll try to write about some general issues which Dinit should handle well (and which I don’t think runit does) at some point in the near future.
Well one thing that comes to mind is that runit doesn’t deal well (or at all) with (double-)forking services. Those are unfortunate by themselves — I mean, let the OS do its job please! — but still exist.
I have run into some odd behavior with runit a time or two, somehow managing to get something into a weird wedged state. I could never figure out what the exact problem was (maybe it is fixed by now?). Oddly enough, I never had the same issue with ye olde daemontools.
Aside from that, I do also like runit – as a non pid 1 process supervisor.
We use runit heavily at my job. It’s a massive pain to deal with, and we have to use a lot of automation to deal with the incredibly frequent issues we have with it. I would never recommend it to anyone, honestly.
I’ve mentioned this here: https://lobste.rs/s/2qjf4o/problems_with_systemd_why_i_like_bsd_init#c_8qtwla
Also, since then, we’ve had problems with svlogd losing track of the process that it’s logging for. Also it should be noted that you absolutely don’t get logging for free, and it requires additional management.
Runit does have support for dependencies in a way, you put the service start command in the run file and it starts the other service first, or blocks until it finishes starting. Right?
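For reference, the idiom I mean looks roughly like this hypothetical ./run file (service names made up):
#!/bin/sh
# crude dependency handling: block until the dependency reports up, bail otherwise
sv start postgresql || exit 1
exec myapp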
How does it lose track of its controlled processes? Like do you know what causes it? For example I know runit doesn’t manage double forking daemons.
What kind of scaffolding have you set up to ensure logging? What breakages do you have to guard against? How do you guard against them?
Do you know why svlogd loses the process? As I understand, it’s just hooked to stdout/stderr, so how could it lose track? What specific situations does that happen in? How did you resolve?
I know it’s a lot of questions, but I’m genuinely curious and would love to learn more.
How does it lose track of its controlled processes? Like do you know what causes it? For example I know runit doesn’t manage double forking daemons.
The reason runit, daemontools classic, and most other non-pid-1 supervisors “lose track” of supervised processes comes down to the lack of setsid(2). If you have a multiprocess service, in 99% of cases you should create a new process group for it and use process group signaling rather than single process signaling. If you don’t signal the entire process group when you sv down foo, you’re only killing the parent, and any children will become orphans inherited by pid 1, or an “orphaned process group” that might keep running.
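A minimal illustration of the difference (PID and PGID stand in for the supervised parent’s process and process-group IDs):
kill -TERM "$PID"        # parent only: children are re-parented to pid 1 and keep running
kill -TERM -- -"$PGID"   # whole process group: everything started under the service gets the signal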
A few years back I got a patch into daemontools-encore to do all of this, although we screwed up the default behavior (more on that in a second). You can read more about the hows and whys of multiprocess supervision in that daemontools-encore PR.
If you’re using a pid-1 supervisor like BSD init, upstart, systemd, etc it can do something more intelligent with these orphans, since it’s the one that inherits them. Also, pid-1 supervisors usually run services in a new process group by default.
Now, the screw-up: when we added multiprocess service support to daemontools-encore, I initially made setsid an opt-in feature. So by default, services wouldn’t run in a new process group, which is the “classic” behavior of daemontools, runit, et al. There are a few popular services like nginx that actually rely on this behavior for hot upgrades, or for more control over child processes during shutdown. Unfortunately I let myself get talked out of that, and we made setsid opt-out. That broke some of these existing services for people, and the maintainer did the worst thing possible, and half-backed out multiprocess service support.
At this point bruceg/daemontools-encore is pretty broken wrt multiprocess services, and I wouldn’t recommend using it. I don’t have the heart to go back and argue with the maintainer that we fix it by introducing breaking behavior again. Instead I re-forked it, fixed multiprocess support, and have been happily and quietly managing multiprocess services on production systems for several years now. It all just works. If you’re interested, here’s my fork: https://github.com/acg/daemontools-encore/tree/ubuntu-package-1.13
I guess I’ll end with a request for advice. How should I handle this situation, fellow lobsters? Suck it up and get the maintainer to fix daemontools-encore? Make my fork a real fork? Give up and add proper setsid support to another daemontools derivative like runit?
Thank you for all your answers! Can you comment on the -P flag in runsvdir? Does that not do what you want?
There are several problems with multiprocess services in runit.
As mentioned above, some services should not use setsid, although most properly written services should. But runsvdir -P is global.
If you use runsvdir -P, then sv down foo should use process group signalling instead of parent process signalling, or you can still create orphans. As another example, sv stop foo should send SIGSTOP to all processes in the process group, but since it doesn’t, everyone but the parent process continues to run (ouch!). Unfortunately runit entirely lacks facilities for process group signalling.
In my patched daemontools-encore:
svc -=X foo signals the parent process only
svc -+X foo signals the entire process group
svc -X foo does one or the other depending on whether you’ve marked the service as multiprocess with a ./setsid file
But generally you just use the standard svc -X foo, because it does the right thing.
Besides the things mentioned above, runsvdir -P introduces some fresh havoc in other settings. Try this in a foreground terminal:
mkdir -p ./service/foo
printf '#!/bin/sh\nfind / | wc -l\n' > ./service/foo/run
chmod +x ./service/foo/run
runsvdir -P ./service
^C
ps ax | grep find
ps ax | grep wc
The find / | wc -l is still running, even though you ^C’ed the whole thing! What happened? Well, things like ^C and ^Z result in signals being sent to the terminal’s foreground process group. Your service is running in a new, separate process group, so it gets spun off as an orphan. The only good way to handle this is for the supervisor to trap and relay SIGINT and SIGHUP to the process groups underneath it.
To those wondering who runs a supervisor in a foreground terminal as non-root…me! All the time. The fact that daemontools derivatives let you supervise processes directly, without all that running-as-root action-at-a-distance system machinery, is one of their huge selling points.
Dinit already used setsid; today I made it signal service process groups instead of just the main process. However when run as a foreground process - which btw Dinit fully supports, that’s how I test it usually - you can specify that individual services run “on console” and they won’t get setsid()‘d in this case. I’m curious though as to how running anything in a new session (with setsid) actually causes anything to break? Unless a service somehow tries to do something to the console directly it shouldn’t matter at all, right?
I’m curious though as to how running anything in a new session (with setsid) actually causes anything to break? Unless a service somehow tries to do something to the console directly it shouldn’t matter at all, right?
The problems are outlined above. ^C, ^Z etc get sent to the tty’s foreground process group. If the supervisor is running foreground with services under it in separate process groups, they will continue running on ^C and ^Z. In order to get the behavior users expect – total exit and total stop of everything in the process tree, respectively – you need to catch SIGINT, SIGTSTP, and SIGCONT in the foreground process group and relay them to the service process groups. Here’s what the patch to add that behavior to daemontools-encore looked like.
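A minimal sketch of the relay idea (hypothetical, not the actual daemontools-encore patch; only the ^C case is shown, and it assumes util-linux setsid(1) run from a non-interactive script, so the child really does lead the new group):
#!/bin/sh
# run the service in its own session/process group, then forward ^C to that group
setsid ./run &
pgid=$!
trap 'kill -TERM -- -"$pgid"' INT TERM
wait "$pgid"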
Thanks for the info! I still have a few questions, correlated to your numbered points:
Like nginx yes? How does a pid 1 handle nginx differently / what makes a pid 1 different? If 99% of stuff needs the process group signaled, but nginx works with pid 1 supervisors, do they not signal the process group? How does all that work? And how does all of this tie in to using runit as a pid 1? Would the problems you have with it not exist for people using it as a pid 1? Because the original discussion was about alternate init systems, which is how I use it.
This would only create orphans if the child process ignores sighup right? Obviously that’s still a big problem, but am I correctly understanding that? And when runsvdir gets sighup it then correctly forwards sigterm to its children yes? Not as easy as ^C but still possible. Would any of this behavior be different if you were running as root, but still not as pid 1?
Been using it for more than a year now. It syncs a lot of things, including my pass store. No problems so far.
Do we actually have any context? Or is it just accusations? I’m honestly asking.
Also, of all things, to take offense to this? The hell. It’s a completely sane statement.
So you didn’t consider how Dropbox works when choosing it to solve a problem. And now you know better. Good for you, I guess?
The bigger problem is specifically merging arbitrary data. AFAIK it’s an unsolved one.
I didn’t create todo.txt. And I probably wouldn’t have written this post if I didn’t encounter people who try to use Dropbox that way.
Not the author, me. I didn’t know. And it is good for me to know better rather than learn it through bitter experience, yes.
I’ve already given my 2 cents on the matter in an older post, but I’d also like to add something else:
Why would you expect people not to use downvotes as a convenient shorthand for “I disagree with this”? It’s much easier to click on a button than to form an argument or even just ignore something you don’t like. People are just like that.
And seriously, the graying out is moronic, stop it. I will mention this every time the topic gets brought up.
I know this might be controversial, but I’d be in favour of totally removing downvotes from lobsters. If you don’t think it’s helpful, don’t upvote it. The ‘best’ comments will still rise to the top, but the site won’t punish users with radical opinions.
Thanks for doing this! How would you feel about now adding a “report” button for patently abusive comments?
Anyway, I think we can at least experiment and see what happens for a couple of weeks. If things get out of hand we can always roll back, but hey, at least we tried!
I think it’s easy to find and message a mod in those cases (it has happened before even with downvoting).
I’m not against adding a formal “report” action but if it’s too easy to tap it anytime someone’s jimmies are rustled, we (the moderators) are going to have more stuff to wade through and investigate every time we just want to read the site like everyone else. The nice thing about comment downvoting is that the users did all the work and if the comment was bad enough, it was grayed out and collapsed, effectively doing the same thing a moderator would have to by removing it.
I think it might be a good idea to handle comments like stories: have a “flag” button that is basically a downvote button, but has different connotations. Also show the flag reasons above the flagged comment for visibility (I also think this will promote discussion, I know it does with stories).
This way the community still has a channel to self-moderate without the tendency for abuse that the downvote button had.
This kind of democratic action, transparency, & good faith efforts in community and admins is why I like this site so much. The content, too, obviously. I can’t wait to see what effect it has on the comment quality!
A member of the community wanted a change to the status quo. Many people weighed in. Most convincing arguments were in favor of eliminating downvotes. jcs eliminated downvotes.
Now, compare that to how most forums or admins handle a “change the site” thread. This thread’s results looks more democratic in comparison.
But…. that’s still not democracy.
(I’m not saying it’s bad that lobsters isn’t a democracy, I’m just saying it isn’t.)
As Transmetropolitan points out, democracy can be overrated.
Next we need to consider removing comment upvotes. Otherwise there’s potentially too much positive encouragement for short bite-sized comments that lack substance and don’t really contribute to the discussion but which happen to have “popular appeal” for other reasons.
Anyway it’s nice to watch how this plays out. Downvotes can be quite frustrating.
Because we probably don’t want to see the comment section full of bite sized chunks that lack substance and don’t contribute to a discussion. Yet people love such things. At least sites with mainstream appeal are rampant with such comments. Repeating memes, calling for hillary/trump/putin/obama to be shot, making fun of $bigcorp, bashing people’s software choices, etc. etc.
There’s so much crap (even on-topic) one could post and there’s probably always an audience that agrees and will upvote just because they agree or find it funny or whatever. Downvotes are essentially community moderation that in my view has done a great job keeping such bull to a minimum on HN and here (not so much on reddit, I wonder why?). If downvotes were only used for this purpose, I don’t think anyone would object. The problem is when they are used to suppress, discourage and frustrate perfectly legitimate posters.
Agreed. The front page right now contains at least a few posts that have comments that I’d normally down-vote. Some of those comments have even bubbled to the top, despite the fact that they are silly content free quips that I expect to see at the top of a reddit post, but not here.
maybe in their place you can add some thread coloring to note that certain problematic people are participating in the discussion? I’m sure the list for everyone is different, but I’d love some warning before even looking at a thread where $USER is arguing in bad faith using discredited arguments because they’re a callow and thoughtless young person.
Yeah but eventually callow young people become grumbly old people and your filters will be all wrong.
I like the way it’s handled for posts; there’s an upvote, and there’s a flag button. Flags are essentially downvotes, but because they’re treated differently they don’t “feel” like downvotes. I don’t think people are flagging article submissions that they disagree with.
My suggestion is that downvotes should be accompanied by a reason, and then “downvote” can be “upvoted” to show agreement. I don’t have the knowledge/time/energy to implement this and offer a PR, unfortunately. I do try to offer a reason I’m downvoting a post, unless it’s a troll.
Right now if I click on a downvote arrow, I need to specify the reason. Thus, if the reason is not listed, that means the comment shouldn’t be downvoted.
The current options are:
Some communities solve this problem by eliminating completely the downvote and replacing it with a “report” button, which has different connotations.
If we’re going to stick with up & downvotes, there’s little we can do. Sites like Slashdot have been experimenting with alternative methods to control moderation quality, and I think there is no perfect system.
However, and this is 100% my personal opinion, my favorite method is the upvote + report button. I don’t mind “dumb” or “incorrect” comments sitting with 1 point at the bottom of the page. We all can be mistaken sometimes or write a comment which is just too snarky or misinterpretable, and receiving a downvote for them is infuriating and solves nothing.
Edit: I replied to you instead of OP by mistake, however I think the discussion is still relevant
It’s much easier to click on a button than to form an argument or even just ignore something you don’t like. People are just like that.
Maybe we need a separate “Disagree” button to satisfy that basic human instinct.
I’d love if the disagree button grayed out the post for the local user but did nothing on the backend.
bspwm with a custom lemonbar script.
inb4 complaints that the landing page doesn’t work with javascript and custom fonts turned off.
(Very pretty site though)
It works well enough with everything turned off to at least find the github link. If you just scroll down.
It is with certain people here who do everything in a terminal, or who disable that stuff for security reasons. As you say, to each their own. :)
At least the user who constantly complained about sites not being accessible through Tor isn’t here anymore, though he was banned for other reasons than just that.
Can’t argue with security! But some things are made for browsers, some for Terminal… those in favor of the latter can always head directly to GitHub ;)
For accessibility reasons, overriding fonts specifically is rare but very important to some users. It does break most of the web, so you’re in good company…
A very good point – this has been added into our issue tracker… we aspire to deliver a better (more accessible) experience than most of the web! :)
I’ve seen systems with only upvotes, and they seem to work pretty well. Downvotes often only serve as a landslide “remove this disgusting filth” behaviour, where a few people would downvote a comment and then it is seen as the will of the community, so the comment just keeps being downvoted by people automatically. At least that’s how it happens on reddit.
Here there is also the problem of graying out “unwanted” comments. I don’t particularly care about downvotes per se, but lobste.rs actively makes downvoted comments extremely annoying to read. Why? Let people form their own opinions instead of indulging in pavlovian conditioning.
Another good thing would be to hide comment ratings until you vote on them. Or just for a time, like reddit does it.
Along the lines of “let people form their own opinions” and thinking slightly sideways, what about de-coupling the rating from the sum of up/down votes? That is, keep downvotes, and simply display them along with the total upvote count. Give people an option to sort based on summing them, summing their absolute value, ignoring ups, ignoring downs, etc.
If the argument is that removing downvotes removes a meaningful avenue for communication, I suggest that the information lost by simply summing all votes is similarly stifling, and it would be interesting to see what sort of interactions a more transparent and nuanced rating system would enable.
“Political correctness” means “not being an asshole”, and I think someone being an asshole is pretty much the best justification for downvoting. Mere incorrectness can be fixed with a short, well-cited comment, but asshole behavior needs a fairly large stick attached, or it will take over a community.
“Political correctness” these days (at least) means “not saying things that could upset people even if you are saying what you think is true”. It’s not about not being an asshole, it’s about being nice at the expense of being honest.
Or that’s the excuse given by people who like being raging assholes. “Oh, I’m just being honest. Sorry I’m not ‘PC’ enough for you.” Either way, perhaps not a useful term.
Why can’t it be both? Yes, there are assholes who use that as an excuse. There are also people who shut others up using political correctness as a pretense. Both exist, and, in my experience, people are perfectly capable of detecting assholes but completely freeze up when perfectly reasonable things they say result in accusations of racism and other -isms. My experience is not representative, of course.
But the point is it’s not black and white. You should be fine saying things you think are true. And as long as that is ok, there will be trolls using it as an excuse. I’m all for accepting the existence of trolls if the price is free speech.
Did they give this reason in a comment, or do you assume this is the reason? There’s no such option underneath the down arrow (lol), so while I’m not saying that you’re wrong, I would still like to see some examples.
Also, was it related to suckless, or is the hat irrelevant here?
So basically, you already run code you personally never reviewed or tested, HTTPS is enough, our script is good, we will continue to recommend this install method.
Gotcha.
compared to maintaining (and testing) half a dozen package formats for different distros.
Again with that bullshit. Let. Packagers. Do. Their. Jobs.
compared to maintaining (and testing) half a dozen package formats for different distros.
Again with that bullshit. Let. Packagers. Do. Their. Jobs.
I’m torn on this. On the one hand, yes. Having 3rd party packagers is great for pretty much everyone and ideally f/oss software organizations should maintain and license their software so that repackaging is possible.
The problem with this view is that user maintained packages lag upstream, often by quite a lot, and as an engineering org having lagging user packages means having to deal with innocent users who are stuck with fixed bugs. See a classic rant on this from jwz.
So yes. By all means I want package managers (or /opt monodir installs) so I can uninstall your software eventually and update it sanely along with everything else, but really that means that the developing org does have to take on packaging responsibilities or else pay the costs associated with not taking them on. For all that I dislike curl | bash, it definitely seems to be a local optimum.
Disclaimer: this is my view as a user, I’ve never participated in the community as a (re)packager or distributing an original package except in publishing Maven artifacts which don’t require repackaging and thus don’t count.
Again with that bullshit. Let. Packagers. Do. Their. Jobs.
I certainly agree with letting packagers do their jobs. However, it seems many users see this as the project’s responsibility, rather than the distribution’s. I read this part of the post as being about that sentiment.
From the perspective of a young project trying to gain traction, taking on this responsibility can noticeably help with uptake. Unfortunately, individual projects - even very large ones, like languages - are not in a great position to make things work smoothly at the scale of an entire distribution. (I’d count Linux distributions and also things like Homebrew as basically the same sort of thing, here.)
I think the ideal is for projects to keep offering these “fast start” setups, but recognize that they are not going to be the long-term solution, nor the solution for everybody.
I wish they would at least encourage users to even think about the security implications, but focusing on that aspect isn’t the heart of why this keeps happening.
As an (occasional, part time) packager, the ideal would be to get some frozen version of the upstream release (or a reference to same) that I can transform into the package. A magic button I can press whenever I like to get the latest version at that time does not meet that need.
If a young project wants to get into distros (I don’t know whether that’s the kind of traction you’re thinking of, if not then obviously disregard) I’d suggest that’s what they should be thinking about doing, and the curl|sh should be a wrapper over that.
I was mostly thinking in terms of mindshare - users and contributors. I think a lot of developers don’t necessarily think of distros as part of their plan at all. That’s exactly what I wish were different. :)
It makes a lot of sense, now that you say it, that getting a frozen version is the biggest need there.
This has come up before - https://lobste.rs/s/ejptah/curl_sh
I still haven’t seen a strong argument against this mode of installation, but I still hear a lot of “you’re doing it wrong” anger. I’d be very interested in a cogent argument (comment or link pointer) about why this is bad, as it feels to me like the culture has been shifting in the direction of curl | sh.
Shell-based installations for people who want to run the “arbitrary” code from the developers doesn’t prevent you from using packages and dependency management. What’s the problem here?
Often times these scripts are targeted at given OSs (often the big 3, which excludes things like NixOS, OpenBSD, FreeBSD… etc), or more commonly, send flags to commands that aren’t available on all systems (sed on OpenBSD, for example, was missing -i up until recently).
These missing flags can be disastrous. No one ever writes scripts to handle errors that pop up from missing flags. The result can be clobbered files, removed /’s.. or really anything..
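A small illustration of the flag-portability problem (behaviour varies between sed implementations; versions are approximate):
sed -i 's/foo/bar/' config      # GNU sed: in-place edit, no backup suffix needed
sed -i '' 's/foo/bar/' config   # macOS/FreeBSD sed: -i requires a suffix argument,
                                # so the GNU form gets parsed as suffix + script and fails
# Older OpenBSD sed simply had no -i at all, so either form just errors out there.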
If the connection fails during the middle of the script download, whatever has been downloaded will already have been executed. Imagine if you’re on a system that doesn’t utilize the --no-preserve-root option to rm, and the script called for rm -rf /some/directory/here, but the connection terminated at rm -rf /, your system would be hosed.
There’s no way to audit that what was performed during one curl | bash instance will be the same thing performed in another instance, even if done only seconds later. There’s no way to audit what exactly was done without taking a forensic image of the system beforehand.
Simply relying on HTTPS for security doesn’t protect against all threat actors. Certain actors can, and have in the past, performed man-in-the-middle attacks against SSL/TLS encrypted connections. There are companies like Blue Coat, who provide firewall and IPS appliances, and who also are CAs and can perform man-in-the-middle attacks of every SSL/TLS connection for those sitting behind its appliances. This can also be done in any enterprise setting where the client has a corporate CA cert installed and the corporate firewall can do SSL/TLS inspection. Often times, the private key material for the corporate CA certificates are stored insecurely.
The same holds true, and is especially grievous, when the installation instructions say to use curl | sudo bash.
No, thank you. I’ll stick to my package manager that keeps track of not only every single file that is placed on the filesystem, but the corresponding hashes.
edit[0]: Fix typo
connection fails
TFA addresses this.
audit
Download the script and look at it. If you have reason to believe that the upstream is gonna serve you something malicious on subsequent installs, then you should audit the entire source you are installing, not just the installer.
HTTPS
If you don’t already have the package in your distro’s repositories, then you will need to use HTTPS or a similar mechanism to download it. There is no way to verify it against a hash either, because you will need to use HTTPS or a similar mechanism to download the hash. I’m sure there are more reliable (and exotic) ways of bootstrapping trust but in practice nobody will use them.
This also has nothing to do with curl | bash in particular; this attack applies to, say, downloading a tarball of the source and ./configure && make && make install.
This is what I love about FreeBSD’s ports tree: it solves all of what you just brought up. Each port entry (like www/chromium) already contains a list of hashes for all files it needs to download. Additionally, when using the binary package repo, the repo is cryptographically signed and the public key material is already on my system. No need for HTTPS in the slightest.
I don’t disagree with you here, using packages with your distro is preferable to curl | sh when the option is available. I see curl | sh as a convenient way of performing an installation when that option is not available. There is a lot of paranoia over curl | sh though that would lead one to believe that is more insecure than other forms of non-package installation, and I think having an article that counters these misconceptions is valuable.
If the connection fails during the middle of the script download, whatever has been downloaded will already have been executed. Imagine if you’re on a system that doesn’t utilize the –no-preserve-root option to rm, and the script called for rm -rf /some/directory/here, but the connection terminated at rm -rf /, your system would be hosed.
The sandstorm script is specifically designed to avoid that failure case:
# We wrap the entire script in a big function which we only call at the very end, in order to
# protect against the possibility of the connection dying mid-script. This protects us against
# the problem described in this blog post:
# http://blog.existentialize.com/dont-pipe-to-your-shell.html
_() {
set -euo pipefail
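The quoted header is cut off above; the overall shape of the pattern (a hypothetical reconstruction, not Sandstorm’s actual script) is:
_() {
set -euo pipefail
# ... all of the installation work happens inside this function ...
}
# The call is the very last line of the file, so a download that dies part-way
# leaves an unclosed, never-executed function definition instead of a half-run script.
_ "$@"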
I take issue with recommending and even defending it as good practice. If you want to use it, no one can stop you.
Oh wow, let’s see:
Basically it’s the same reasons why you shouldn’t just blindly do make install from source, only there are no DESTDIR and PREFIX.
I see where you’re coming from now.
I think one of the drivers for people not considering those reasons (with server-side software, anyway) is that while package management tries to solve the issues you’ve identified, it hasn’t been particularly successful or reliable. A common solution which does work is to use installers and shell scripts to build an image and replace the whole system when you need to upgrade/downgrade/cleanup/uninstall – this is perfectly compatible with sandstorm’s position.
In this case the end user is responsible for their system and its administration (which is exactly what every license says anyway). The idea that people should only install things provided by their distro feels a bit paternalistic/outmoded.
You seem to think that my position is “install from packages or else”. No. There is a plethora of valid approaches to administering your system. My problem is that Sandstorm are recommending end users do something that can pretty easily shoot their foot off and then defend it with hand-waving and arguing with paranoid strawmen.
I’d really have no problem if they’d even mention at some point in their install instructions something like “or look in your distro repos, it might be packaged already, yay!”, but no. They specifically ignore distro repositories altogether.
That and they recommend building from HEAD despite even having regular tagged releases, which is also a small red flag for me.
UPD: To be clear, this is very relevant:
In this case the end user is responsible for their system and its administration
They deliberately recommend an installation method that requires the user to really know what they are doing. That can be valid, just not as the default option. Recommending such a volatile approach as the default install method for end users is at the very least irresponsible.
Got it, you take issue with the very concept of install scripts, not the practice of piping one to sh from curl.
The only real argument I read against curl|sh is the one regarding network issues. Let’s say the script you’re curling includes a line such as:
CONFDIR=".coolsoft"
rm -fr $HOME/$CONFDIR/tmp
And if curl gets a connection reset right after the first ‘/’, you’ll lose your whole $HOME.
I do agree that there are ways to prevent this kind of thing from happening, by using “guidelines”, “best practices”, “defensive programming” or whatever, but if you don’t read the script you’re piping to sh, this is something that can happen to you.
Another gripe against this method is that sometimes the devs ask you to run
curl | sudo sh
But that’s a different topic.
I’d be very interested in a cogent argument (comment or link pointer) about why this is bad
This reddit link and discussion cover some of the evil possibilities.
Again with that bullshit. Let. Packagers. Do. Their. Jobs.
“Software should only be distributed through official distro channels” (the only consistent interpretation I can find of your statement) is far from universally held idea, so I’d expect an opinion stated this forcefully to come with some reasoning.
Look. If someone wants to build packages for every distro ever — I can’t and don’t want to stop them. But don’t use that argument like someone’s forcing you to build those packages. It was your own damn choice.
Their “own damn choice” was to sidestep the distro packages thing and create a shell script installer. You appear to have an issue with that, since you called it “bullshit.” I doubt Sandstorm cares if others package their software differently (packagers can Do. Their. Jobs!), since it is Apache licensed. What exactly is your objection?
Wait what. It’s a simple case of a false dichotomy. Look. They are saying that they only have two choices, and two choices only: an installer script that you pipe into your shell or building and testing half a dozen packages. Like someone is forcing them to. It’s pretty obvious hand-waving.
Those are not the only two possible choices for the project. They know it, you know it, I know it. Don’t get caught on a simple fallacy.
Sure, no one stops packagers from packaging the thing. But they are telling users to bypass the repos. That’s not helping.
Where in the article are they telling people to bypass the repos? In fact they even say
However, once Sandstorm reaches a stable release, we fully intend to offer more traditional packaging choices as well.
Also, I am confused to what any of this has to do with whether curl | bash is secure or not.
Also, I am confused to what any of this has to do with whether curl | bash is secure or not.
That’s the hand-waving part. They’ve mentioned the packaging issue for no reason other than to confuse you even more, which is why I initially said it was bullshit. No one is forcing them to build packages for all distros, or even for all the major ones, but they go out of their way to use it as an argument for… what exactly?
What irks me about this discussion is that if sandstorm.io had provided a URL for a Debian/RPM repository no one would bat an eye. And yet those packages would be just as “arbitrary” as this shell script, and certainly harder to read for those inclined to do so.
I’m all for using distro-provided packages - but let’s acknowledge that installing third-party software depends on some combination of trust and technical know-how, and curl doesn’t measurably change the quantities.
And yet those packages would be just as “arbitrary” as this shell script, and certainly harder to read for those inclined to do so.
I actually agree. A package built by the distro for the distro is at the very least tested and signed by people with at least some track record. I’m almost as much against devs providing their own packages as the main way of installing their shit because my distro’s maintainer would almost always do a better job of making sure the package works and is consistent with the distro’s guidelines. In short: a package from the repos has a much smaller chance of surprising me.
It’s just that a script is even worse: packages are built for specific distros, and even if you don’t know the distro as well, you will probably at least superficially test the package on the target distro. A script is supposed to be run by whoever on whatever system is out there, which has so many ways to fail.
Unfortunately I would imagine if most packagers were just doing their jobs (as in employment that pays their bills) they would have no time to update packages.
Yeah my company has paid for a substantial number of hours I’ve spent on AUR packages over the years :)
Did they? I don’t remember hot-swapping being a significant issue 20 years ago, before USB and Bluetooth. Outputs like monitors used to be unidirectional dependencies, now they’re a complex bidirectional negotiation. Systems are bigger, with many more services - keeping your tweaks to a dozen init scripts on a few servers updated with the new OS every 18 months or so was tractable; now there’s a few hundred and Arch pushes a few updates per week. These three concerns feed positively into one another. It wasn’t too bad to add a sleep 2 for that one slow hard drive to come up before mounting /usr, but getting drivers loaded in the right order while the user adds and removes bargain-basement peripherals that require or notify daemons interfacing with this week’s self-updated browser or Steam client is really hard to make Just Work, to say nothing of the problems that appear when the server room you used to rack boxen in is now a set of virtual machines that live and die by the auto-scaler.
Slow hard drives are a problem other operating systems deal with too. The usual solution is for the kernel not to announce the presence of the device until the platters are spinning and the data is readable. It doesn’t require a multi daemon message bus in user land to determine when mount can run.
I know that’s just a made up example, but it’s a fine exemplar of the big picture. Solving a problem at the wrong layer means you have another problem which needs a solution at a different wrong layer and now you need another layer to fix that.
Systems are bigger, with many more services
In what sense do you mean this? The “Cloud Revolution” has made systems (that most consumers interact with) smaller. Many companies run 1 thing per “server”. Compare this to the era that SMF was created in, where there was one big-honkin machine that had to do everything. And even in the SMF world, SMF wasn’t an amorphous blob that consumed everything in its way, it had clear boundaries and a clear definition of done. I’m not saying SMF is the solution we should have or that it was designed for desktops (it wasn’t) but rather questioning your claim of servers becoming bigger.
Tandems, high-end Suns, Motorola Unix, Apollo, and anything with EIDE or SCSI involved hot swap more than a decade before Bluetooth.
I will grant that the average linux contributor is more likely to paste ‘sleep 2’ in an init shell script than to try to understand shell, or init, but that’s a problem of the linux contributors, not of technology, or shell, or init, in my opinion.
They worked, albeit not well. It’s not like systemd was the first sysvinit replacement, nor was Linux even the first UNIX-like to replace initscripts. I think that honor actually goes to Solaris 10, which introduced SMF 10 years ago (not the case, see ChadSki’s post below mine). SMF solves most of the same problems listed in this reddit post, and predates systemd by quite some time.
init/rc scripts don’t handle dependencies well. Debian had some hacks that do that, which relied on special comments in initscripts. OpenBSD and FreeBSD order the base system’s startup for you, and rely on you to deal with ordering anything from packages or ports. SMF (and later, systemd) keep track of service dependencies, and ensure that those are satisfied before starting services.
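For reference, the “special comments” are the LSB init headers that insserv parses to compute an ordering; a schematic example (the service name is made up):
### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $network $remote_fs $syslog
# Required-Stop:     $network $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start the mydaemon service
### END INIT INFO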
init/rc scripts are fairly slow, and that can’t be avoided. Honestly, for traditional uses of UNIXes on traditional servers, this doesn’t matter. For desktops, laptops, and servers that expect frequent reboots for updates (CoreOS’s update strategy comes to mind), this does and that’s one of the places systemd inarguably shines. IME I haven’t seen similar speed from SMF, but it’s honestly so rare that I reboot a solaris box anyway.
init/rc scripts don’t have any mechanism for determining whether or not a service is running. pid files are very racy and unreliable. pgrep can also be suspect, depending on what else happens to be running on the system at the time. SMF and systemd reliably keep track of running processes, rather than leaving it up to chance.
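For example, the classic pidfile check found in countless init scripts (schematic, names made up) is racy: if the daemon died and its pid was recycled by an unrelated process, the check still passes, and a later “stop” may kill that innocent process instead:
PIDFILE=/var/run/mydaemon.pid
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "mydaemon is already running"
    exit 0
fi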
Another point not well made in this article is that writing init scripts can be awful. Package maintainers save you from this pain 99% of the time, but if you ever have to write your own on Debian or Red-Hat, there’s a lot of boilerplate you need to begin with. To be fair, OpenBSD proves that this doesn’t have to be as painful as Linux distros made it.
Saying that it’s not a difficult problem or that it’s all already solved by sysvinit/openrc/rc doesn’t really cut it, because it’s straight up not honest. SMF does solve these problems, and it solves them well and reliably. I used to be a systemd fan, but over time I’ve grown much more skeptical given some of the defaults and the attitude the systemd maintainers have towards the rest of the community at large.
I’d love a tool like SMF for Linux and the BSDs. I’ve used runit before, but man is it awful.
Related: A history of modern init systems (1992-2015)
Covers the following init systems:
IBM System Resource Controller (1992)
daemontools (1997) + derivatives (1997-2015)
rc.d (2000)
simpleinit, jinit and the need(8) concept (2001-3)
minit (2001-2)
depinit (2002)
daemond (2002-3)
GNU dmd (2003)
pinit (2003)
initng (2005)
launchd (2005)
Service Management Facility (SMF) (2005)
eINIT (2006)
Upstart (2006)
Asus eeePC fastinit + derivatives (2007-2015)
OpenRC (2007)
Android init (2008)
procd (2012)
Epoch (2014)
sinit (2014)
I was very happy with runit when I was using void (but this was only for a laptop and personal server). Can you elaborate at all on what you don’t like about it?
I believe I responded to you before about this :) I can elaborate more on that if you want.
I imagine it works “well enough” on a desktop, so in that case it’s fine. But honestly, it’s not what I’m looking in a modern init system.
Porting some nontrivial initscripts from Fedora to Debian (or was it the other way around, can’t remember) has been the most painful operation I have done in my career so far. Several hundred lines of shell script is never fun, even more so when the script uses crude distro dependent hacks to get everything bootstrapped in the right order.
If systemd can lift that maintenance horror/pain from someone’s shoulders I am happy for it, even if I don’t personally like some decisions that systemd has made.
Just because they work doesn’t mean they are good. Windows always boots just fine for me when I want to play games, does that mean I would want to ever, ever, ever, ever touch anything system level in Windows? Certainly not.
Maybe you have a better idea of the issues here than you’re presenting, but I’m reading your comment as an “I hate systemd and I always will” knee-jerk response. The original post clearly articulates new issues that have cropped up in modern Linux systems, that systemd resolves. Instead you choose to blatantly ignore those issues. Have you actually dealt with lazy-loading hardware issues? Reproducible boots across 10k+ machines in a cluster? Have you actually worked with an init system in a meaningful capacity? Because it’s complete bullshit.
I acknowledge this comment is aggressive, but I’m sick and tired of systemd whining from people who don’t understand how much pain systemd resolves.
In my biased opinion, your post doesn’t actually articulate the problem many people have with systemd. It’s not that making a better init system isn’t desirable; it’s that they don’t feel systemd actually does it well.
On FreeBSD, you pkg install it and add <service>_enable="YES" to /etc/rc.conf. Done. It works. Now, some parts of the FreeBSD community are talking about moving to something like launchd, but… I don’t know what your experience is, but your post in no way makes me think “hrm, maybe systemd is a good idea”. Instead it makes me think “what kind of a chop-shop is Linux if you have the problems you listed and systemd is the answer?” It reminds me of the 7-Up commercials[0] where they have a blind taste test, and 7-Up wins, hands down, next to detergent and bile.
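As a concrete example (nginx chosen arbitrarily), the whole workflow on FreeBSD is roughly:

    pkg install nginx
    sysrc nginx_enable="YES"    # appends the line to /etc/rc.conf
    service nginx start

assuming the package ships an rc.d script named nginx, which the official package does.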
Well, you asked, so by way of context, I was a Unix sysadmin dealing with hot-plugged hardware (ranging from SCSI-1 up to entire CPUs and bus subsystems), appearing/disappearing dev entries, “socket activation”, and dependency management in 1988.
The original post is actually a pack of hilarious cockamamie buzzword malarkey, as is “reproducible boots across 10k+ machines in a cluster”. But then, bluster and bravado are the fundamental atom of the red hat/freedesktop playbook, apparently.
I see. The Arch init scripts were particularly useless, so the switch definitely made sense for them. Given your expertise, I’d like to shift my stance and instead ask: how did you deal with all these issues? Systemd is a glorified event loop, which deals with the problem nicely, but I don’t see how classic init-script based systems handle this at all. And I didn’t see them handling these issues when I was working on cluster management.
0) every init system is a glorified event loop (https://github.com/denghuancong/4.4BSD-Lite/blob/c995ba982d79d1ccaa1e8446d042f4c7f0442d5f/usr/src/sbin/init/init.c#L1178).
1) the init scripts were written correctly. I will 100% grant you that the Linux sysv init scripts were increasingly written without regard for quality or maintainability. Nevertheless, that is the fault of the maintainers rather than of the init system. If you’d like more of an education in this regard, download OpenBSD or FreeBSD and check out their rc scripts, which are quite clean, fast, effective, extensible, and comprehensible, while being even less featureful than sysvinit.
2) hot plug was handled differently by different vendors, but generally the device would appear or disappear in /dev (kernel drivers discovered the device arriving or departing from the (hardware) bus) and a device- or subsystem-specific userland program would get notified by the change, and, e.g., mount/unmount file systems, etc. Also not rocket surgery.
3) we referred to “socket activation” by its original name, “inetd”, and we got rid of it as soon as we could (a minimal inetd.conf sketch follows this list).
4) dependency management was generally resolved with runlevels and sequential scripts. Even loaded down with crap, modern Unix machines boot in a matter of seconds with any init system. Unix machines generally stay up for years, so ‘super fast boot’ and parallel boot were things nobody cared about until the Desktop Linux crowd, who were so incompetent at getting CPU P-states to work properly that they actually rebooted their systems every time they opened their laptop lids. For years.
5) I do “reproducible boots across 10k+ machines in a cluster” with, like, whatever came after BOOTP. Maybe DHCP? In any case, that has nothing at all to do with any init system anywhere.
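To make point 3 concrete, a single (hypothetical) line in /etc/inetd.conf was all it took to get on-demand, per-connection spawning of a service, which is essentially the mechanism now sold as socket activation:

    # service  type    proto  wait    user  program                arguments
    telnet     stream  tcp    nowait  root  /usr/libexec/telnetd   telnetd

inetd owned the listening socket and only forked and execed telnetd when a client actually connected.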
The easiest thing is to ask a few more whys. Or, instead of asking how we solve this problem, ask why we have this problem in the first place. The best code is no code. :)
In the grander scheme of things, this is something that I believe is unfortunately common. My prime example of this is the Hadoop ecosystem. Hadoop is junk. Absolute junk. And every solution to Hadoop is… to add more code to, and around, Hadoop in hopes that somehow adding more will make a piece of junk less junk. In the end you get something that looks like a caricature of the Beverly Hillbillies truck and works twice as badly.
Wait, what? I remember multiple flavours of Unix running with read-only NFS-mounted /usr, and I’m quite sure I wasn’t hallucinating at the time.
Nah, you’re wrong. https://freedesktop.org/wiki/Software/systemd/separate-usr-is-broken
Freedesktop said it’s broken. Must be true.
In a world filled with reasonable people who aren’t assholes there wouldn’t be any more reaction to this than there has been to decisions such as which desktop environment should be the default
Oh come on. That is extremely dishonest. Do I need to explain that comparing a high-level interface you can easily swap out to a low-level toolset a lot of things depend on (by design of said toolset) is completely idiotic? Systemd brings with it a runtime dependency on it that is required to “properly utilize it”, it has things you literally cannot disable (journald, for one) even if you don’t need them and the systemd upstream is very much willing to break conventions — and that they do often enough for people to notice.
It is very decidedly not as trivial as a default DE.
But the init system isn’t something that most users will notice – apart from the boot time.
Yet somehow they do. How come?
For some reason the men in the Linux community who hate women the most seem to have taken a dislike to systemd. I understand that being “conservative” might mean not wanting changes to software as well as not wanting changes to inequality in society but even so this surprised me.
Oooooh, it’s one of those posts. This guy is seriously trying to say that the ones who dislike sudden changes in their systems that they can’t see a good reason for are the same people who don’t want social progress. And, in turn, are obviously misogynists. How can anyone take this guy’s words even remotely seriously after this?
While the issue of which init system to use by default in Debian was being discussed we had a lot of hostility from unimportant people who for some reason thought that they might get their way by being abusive and threatening people.
There was no hostility from people actively pushing systemd and calling the opposition all kinds of words. None. Gotcha.
MikeeUSA is an evil person who hates systemd [6]. This isn’t any sort of evidence that systemd is great (I’m sure that evil people make reasonable choices about software on occasion). But it is a significant factor in support for non-systemd variants of Debian (and other Linux distributions). Decent people don’t want to be associated with people like MikeeUSA, the fact that the anti-systemd people seem happy to associate with him isn’t going to help their cause.
Bad person agrees with X, so no good person agrees with X, otherwise they are not a good person. That’s literally the definition of guilt by association; the author isn’t even trying to hide it.
Sending homophobic and sexist abuse is going to make you as popular as the GamerGate and GodHatesAmerica.com people.
Being grouped with GG to me personally is a compliment. I read r/KiA sometimes and those guys are doing some seriously good work on tracking unethical shit in gaming journalism and journalism in general. If anyone actually bothered to read that sub, most of the people there are nice and cheery shitlords, not some cave-dwelling monsters the media makes them out to be. A lot are actually women. There are even some trans folk in there.
Conclusion: The entire post is almost literally just an attempt to associate disliking a piece of code to being a bad person. That is extremely dishonest and, frankly, disgusting.
Being grouped with GG to me personally is a compliment. I read r/KiA sometimes and those guys are doing some seriously good work on tracking unethical shit in gaming journalism and journalism in general.
Really? Since it’s somewhat of a professional requirement that I follow the game scene, I have had the unfortunate opportunity to run into the gamergate crowd in a number of cases, and it always seemed to have quite specific goals which are not actually about “ethics in videogame journalism”, at least in any sense a normal person would understand journalistic ethics. They remind me more of the Republicans who dig through grant proposals trying to find some “gotcha” thing that can be taken out of context to prove that The Taxpayer’s Money Is Being Wasted By A Vast Liberal Conspiracy. Even a conference I’ve attended on and off, DiGRA, is seen by the gamergate crowd as the center of some huge conspiracy, which is kind of funny from the perspective of an academic, where it’s seen as just another medium-sized conference with no budget and pretty limited influence (most of us mainly publish in more prestigious venues in our home disciplines). There is also a weird obsession with doxxing people, which is a whole other can of worms.
Unfortunately, you can’t control who associates with an open movement. But seriously, read KiA a bit if you’re interested. Like all big movements, it does go overboard with the conspiracy theories quite often, but they don’t come across as horrible people. For example, doxxing and brigading are against the rules of the sub. And with KiA being the widely accepted GG HQ, that should tell you something.
And btw there is this infamous group called Ayyy Lmao who do a lot of nasty shit and then say GG did it. There are also gamerghazi, who admitted to brigading and false-flagging GG on numerous occasions. Next time you see something nasty done in the name of gamergate, look closely at the account(s) doing it. Not saying there aren’t horrible people in the movement, but my personal experience shows there is a lot of false flagging and not a lot of evidence that GG is some sort of organized harassment campaign.
My understanding of the history was that 8chan became the main GG hub, no? I thought they started mainly on 4chan, and moved to 8chan after moot banned a bunch of the gamergate discussions. I’ve heard of the subreddit also though, just didn’t think of it as the “main” location. But I admittedly haven’t dug into it much; I tend to run across it only on twitter.
Chans are only good for short-term operations, people tend to group in more permanent locations. These days KiA is mostly known as the GG HQ. What you see on chans is in no small part the group I mentioned above: Ayyy Lmao. They are in it for the lulz and would fuck your shit up no matter which side you are on.
I will guess this was posted in response to systemd now killing processes like tmux by default.
But the init system isn’t something that most users will notice – apart from the boot time.
Ah, but see, now users are noticing.
As for the rest, I have expressed criticism that breaking nohup is a poor decision, but I don’t think I called it womanly or whatever, so I’m not sure how relevant it is.
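For anyone bitten by this, the behaviour comes from logind’s KillUserProcesses setting; assuming a distro build where the default was flipped to yes, a minimal way to restore the old behaviour is a bit of config in /etc/systemd/logind.conf (new sessions pick it up after logind is restarted or the machine reboots):

    [Login]
    # Let tmux sessions, nohup'd jobs, etc. survive logout again.
    KillUserProcesses=no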
Looks like it’s from April 2015, so not a new article.
What’s too bad about this post is that it’s about two separate things: systemd and people who act like horrible human beings to others. I’m not sure if the author intended it, but this post makes it seem like if you don’t like systemd, you act like a horrible person to others, and that is really quite rude.
Have you not noticed how everyone who expressed concern about systemd was treated during the adoption period? Hell, it’s still happening in places like r/linux. There was a time when just mentioning you use something other than systemd on #archlinux would result in extremely aggressive questioning and, in the end, accusations of being regressive, a whiny child, personally responsible for holding back progress, etc. I’ve personally been called so many things by people who have an unhealthy emotional investment in systemd… and when you get angry and retaliate, they point to that and say “See? He’s an angry child! Disregard his opinion!”
I’m not saying it’s all people who like systemd that are like that. It’s just that there actually was a pretty loud minority of assholes who kept shouting down the opposition (and in some places still do) using very dirty tricks that are, unfortunately, not obvious to the casual observer.
My point is, calling people who dislike systemd incompetent and/or straight-up bad people is nothing new.
Interestingly, #systemd on freenode never shouted at me for asking questions. Even when I was upset. So it’s probably just herd mentality that went much too far.
The criteria they put down — as usual — are suspiciously close to whatever the other players on the market are doing. Linux isn’t a platform that the corporations want to see, sure, but is that a bad thing? Rhetorical question, I don’t assume people on here are idiots. Linux — as a wider ecosystem — will survive and thrive, I’m sure of it, in a world without corporate interests. The projects this post makes nods to won’t.
And distro repos generally do a much better job than the bloody app stores the post drools over. User interfaces, sure, that’s a mess, but let’s not throw the baby out with the bathwater, eh. I don’t need random developers dropping their crap into close (conceptual) proximity to my live systems, thank you very much.