I stopped reading at “I occasionally do freelance work, in the past I couldn’t afford to pick my clients, which means I saw my fair share of dissociative identity disorder and ADHD cases.”
I’m always looking for a cross-compiling system for building macOS executables from Linux, either as a single static executable, or as a self-contained relocatable bundle of (interpreter + libraries + user code entrypoint), because getting legal Mac build workers is such a pain.
The best toolkit I’ve found, by far, is Go, where you just run GOOS=darwin go build .... There are a variety of more-or-less hacky solutions in the JavaScript ecosystem, and a few projects for Python, but for Ruby this area is sorely lacking.
I mention this because while XAR looks like an awesome way to distribute software bundles, I still need to figure out a way to do nice cross-compiles if I’m going to use it to realistically target both macOS and Linux.
Tell me about it. I’ve tried cross compiling Rust from Linux to OSX and it was just a saga of hurt from start to finish.
For Go, did you need to jump through the hoops of downloading an out-of-date Xcode image, extracting the appropriate files and compiling a cross-linker? Or is that mysteriously handled for you by the Go distribution itself?
You literally just run GOOS=<your target os> GOARCH=<your target architecture> go build. No setup needed. Here are the vars go build inspects.
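For example (a sketch assuming a working Go toolchain and a module in the current directory; the output names are arbitrary):

```shell
# Build a macOS/arm64 binary from a Linux host - no extra toolchain needed:
GOOS=darwin GOARCH=arm64 go build -o myapp .

# The same variables work for any supported pair, e.g. Windows:
GOOS=windows GOARCH=amd64 go build -o myapp.exe .
```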
It’s frustrating trying to do something similar in compiled languages, and interpreted languages with native modules are even worse.
Go basically DIYs the whole toolchain and directly produces binaries. That has pros and cons, but means it can cross-compile without needing any third-party stuff like the Xcode images. For example it does its own linking, so it doesn’t need the Xcode / LLVM linker to be installed for cross-compilation to Mac.
No reason you can’t put a whole virtualenv, python interpreter and all, into your XAR. XAR can pack anything.
You still need a tool to prepare that virtualenv so that you can pack it, and that’s the sort of tool I struggle to find - cross-compiling a venv, or equivalent in other languages.
Yes, exactly. I am less interested in different formats and more in a tool to create them. The ease of doing that with Go is the target.
The ease of doing that with Go is the target.
By this you mean, you’re looking for a solution for Python packaging that makes it as easy as Go to distribute universally?
I used this once before to take some code I wrote for Linux (simple cli with some libraries - click, fabric, etc.) and release it for Windows: http://www.py2exe.org/index.cgi/Tutorial
The Windows users on my team used the .exe file and it actually worked. It was a while back but I remember that it was straightforward.
[Comment removed by author]
The scale of the problem they solve is a lot larger than what most people will ever work on. The fan-out nature of the product is challenging enough, but there’s more.
The 140 chars thing is inconsequential. It would be the easiest thing to change.
The 140 chars thing is inconsequential. It would be the easiest thing to change.
Agree with everything until this part. I think it’s very likely that there is some critical RDBMS with a varchar(140) column that’ll make “easy” an actual nightmare with people waking up in cold sweats.
140 characters are counted as 140 Unicode grapheme clusters, so the byte size is already potentially a lot larger and variable.
True. In MySQL you’d likely set the collation to utf-8 or whatever. That doesn’t make doubling, or eliminating the character limit altogether, any less difficult though?
In MySQL you’d likely set the collation to utf-8 or whatever.
Fun fact: you’d want the “whatever” https://medium.com/@adamhooper/in-mysql-never-use-utf8-use-utf8mb4-11761243e434
But here’s the rub: MySQL’s “utf8” isn’t UTF-8.
The “utf8” encoding only supports three bytes per character. The real UTF-8 encoding — which everybody uses, including you — needs up to four bytes per character.
MySQL developers never fixed this bug. They released a workaround in 2010: a new character set called “utf8mb4”.
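The 3-vs-4-byte difference is easy to see from a shell: any astral-plane character (an emoji, for instance) needs four bytes in real UTF-8, which MySQL’s “utf8” cannot store:

```shell
# U+1F600 encodes to four bytes in UTF-8; MySQL's "utf8" caps out at three.
printf '😀' | wc -c   # prints 4
```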
A few years ago they had a bug for a day or so which allowed much longer tweets, so I doubt they have this hard limit anywhere except for the validation code.
Twitter is all about pushing ads and trying to find ways to monetize their users. They have over 3k employees, and I have no idea WTF they’re doing to be honest. The site has terrible performance, and it’s buggy as hell. If you look at the source for the page, it’s downright nightmarish. They keep adding shit like moments that nobody wants or ever asked for, while ignoring actual user requests like the ability to edit tweets.
I started using Mastodon recently, and it’s just a better experience all around. The core functionality of Twitter is not that hard to implement. If you’re not trying to monetize, then you can provide a much better experience for the users.
Personally, I’d really like to see the internet go back to being a distributed system where anybody can run a server and interact with people, as opposed to current centralized model where a few sites dominate all the social media.
Running your own servers is cheaper and easier than ever. You can get a Digital Ocean droplet for 5 bucks a month nowadays, and the prices are only going down.
Meanwhile, setting up and managing apps like Mastodon has become much easier as well thanks to Docker. Run the container that the maintainer packages, and you can get it up and running in minutes.
I think Mastodon is a great example that this model absolutely does work today. I also think that it’s more robust than the startup model.
Mastodon is open source, and it will be around as long as people want to use it. The features get added based on user demand, as opposed to demand of investors. Anybody can run their own node, and set it up any way they like. No central entity decides how Mastodon is used, or what it’s used for.
This is what the internet was meant to be. We took a terrible detour with walled gardens like Facebook and Twitter, but it doesn’t have to be that way.
If you’re not trying to monetize, then you can provide a much better experience for the users.
There are tons of shitty FOSS projects out there. I am an open source enthusiast, my job title is literally “Open Source Software Engineer.” I love FOSS software. But the idea that it’s better because you’re not trying to make money is just not one I’d come close to making. I love using Linux on the desktop but it’s way worse for most users than Windows or MacOS. Open source is great because it’s about Freedom, not because it provides a superior user experience. Sometimes it does, sometimes it doesn’t. It really depends on the product and what you’re using it for.
This. Often proprietary is better quality because more man hours are spent on it. However, despite this I will use Free Software over proprietary any day, because it gives me something proprietary can never give me: freedom.
Of course, open source is not a guarantee that you’ll end up with a great piece of software. However, I’m talking about the specific difference in motivation for Twitter and Mastodon developers. Personally, I find Linux far preferable to Windows as a desktop as well, but MacOS is definitely a lot more polished than Linux.
they make a considerable amount of money. is it net profit? no. mostly because they have an insane head count.
And yes, a large part of this may be that I no longer feel like I can trust “init” to do the sane thing. You all presumably know why.
No, I don’t. Please explain what you mean.
Such appeal to everyone “knowing what you mean” and the implication that everyone supports your standpoint are toxic. They are a good way of making a personal opinion look like a group opinion. Combined with a fuzzy notion of “sane”, this is basically just spreading bile.
And it works. If this mail were more complex, or were that sentence missing, it probably wouldn’t be here on lobsters. It’s certainly not posted for the review above it.
If you’re reading the kernel mailing list, it can be assumed that you have some familiarity with the subject matter, and if not, you’re not missing anything crucial to the discussion here. Torvalds has decided not to point and name the implied party, probably to avoid another heated flame war on the mailing list.
Some context to get you up to speed:
He is referring to what is currently the most popular init system on Linux, systemd. systemd is a relatively recent development of Red Hat, and has been adopted by all major distributions. Prior to systemd gaining popularity, the init system was a hodgepodge of shell scripts, which clearly had its share of problems.
However, systemd has been adding more features to its resume. Besides just being an init system, it has also absorbed the hardware abstraction layer udev, it implements its own dbus daemon (a popular Linux message bus used to communicate between different services and programs), it has taken control of some power management features such as suspend on lid close for laptops, it implements login and virtual terminal handling, it contains a dhcp client and server, and it provides its own system logger using a binary format that can in practice only be used through the tools provided by systemd.
This attitude of trying to do everything from a single piece of software has proven to be somewhat controversial among Linux users, because the old UNIX mantra for software was “do one thing and do it well”. This is especially controversial for something as central as the init system, because it is always running and runs with elevated rights.
The lead developer of systemd has also responded to a few issues with some unpopular comments, and has in the past been in conflict with the Linux kernel developers by refusing to cooperate on certain issues caused by Linux and systemd interaction. systemd has also, despite its widespread use, been hit by a number of fairly serious bugs, some of which had significant security impacts. The simplicity and potential impact of some of these bugs has left many people in doubt over the general quality of systemd and related projects.
In particular Linus has had specific issues in the past - there was a problem a while back where kernel developers would boot with the “debug” flag and systemd would start spamming the console with messages and drown out the kernel information said developers needed. See https://lkml.org/lkml/2014/4/2/580 where someone proposed a patch that would remove “debug” from /proc/cmdline so that the presence of that flag was completely unavailable to userspace (including systemd), thus literally preventing the problem from happening. Real icky situation.
Ha, while this isn’t quite what I called for, it is a great explanation of the state of things. “Controversial”, I certainly agree with (although, coming to systemd, I’m on the “It fixes a lot of things for me” side of things).
Thanks for that.
There seems to be two main camps of users (3 if you include distro maintainers as a separate camp):
People who maintain a few systems, perhaps use Linux on the desktop.
People who maintain many systems.
Camp 1 people don’t mind when Systemd does something arbitrary, unexpected or indeterminate. Camp 2 people hate Systemd’s indeterminism.
Personally I hate magic. Systemd is magic. I can’t trust it to do what I want to do, only what it wants to do.
As a person who maintains many systems professionally, I have to interject here, since I always see it stated as fact that professional operators dislike systemd. I like systemd a lot because it gives badly needed structure to Linux service management. Most colleagues who have worked with systemd feel the same. (This doesn’t mean it’s perfect or bug-free.)
Is there a reason you didn’t deploy daemontools or runit or some such to give badly needed structure to Linux service management before systemd forced it on you (however willingly)?
I did use those at various times but that’s not the same as being the default that manages all services on the system. Systemd also has a powerful declarative configuration the other options did not.
There seems to be two main camps of users (3 if you include distro maintainers as a separate camp):
- People who maintain a few systems, perhaps use Linux on the desktop. Camp 1 people don’t mind when Systemd does something arbitrary, unexpected or indeterminate.
I’m a group 1 member but I absolutely hate it when systemd doesn’t operate as I would expect an init daemon to.
This attitude of trying to do everything from a single piece of software has proven to be somewhat controversial among Linux users, because the old UNIX mantra for software was “do one thing and do it well”.
I’ve found this mantra to be only applicable in certain situations, usually when it comes to applications that users directly interact with. Things like email clients, text editors, and IRC clients (web browsers could spawn an entire discussion on this all their own). I’m not an expert on init systems, and your previous paragraph on systemd clearly shows its feature creep. But when it comes to an init system, I’ve always seen that as a complex process where it’s necessary for it to do more than one thing. This can be especially true with modern systems, where everything you’ve mentioned (HAL, dbus, power management, login, networking, etc.) is (arguably) necessary for the system to run correctly and in a useful way.
So, I wonder, is it possible to have an init system that still abides by “doing one thing, and doing it well”?
The mantra predates text editors (well, aside from ‘ed’), email and IRC clients - any user interfaces in the modern sense, really. It meant using a bunch of small, single purpose programs (like cat, troff, tail, ps…) which could be combined by the user to the desired effect with standard system mechanisms like redirection, pipes and shell scripts.
We can argue the practical merits of systemd forever but it’s fairly clear it goes against the tradition of UNIX systems development. It’s a huge, opaque, uncooperative beast that makes turtles cry. I hope Linus is close to the point where he’ll just come up with something more digestible.
The original idea behind that mantra was to make tools that were simple and composable.
As systemd takes over more of a Linux machine, they destroy their own simplicity, requiring someone to keep a massive amount of state in their head to modify the code or even work on units as an administrator. However, they also destroy the composability of the system’s tools. Things like binary logs and internal-to-systemd protocols can’t be parsed by standard command line tools, and thus users lose this ability to compose different parts of the system. This has been my biggest issue with systemd: it violates not only “do one thing and do it well”, but also the composability that makes that possible.
A side note on GUIs: the GUI design model is specifically the opposite of “do one thing and do it well”. GUIs are not designed for composability; they are designed to take the user from one end of a specific process to the other. They trade off the ability to compose with other programs for a more robust control of the user experience.
The context of this discussion is around trying to bring sanity to rlimits for setuid processes…
In an attempt to provide sensible rlimit defaults for setuid execs, this inherits the namespace’s init rlimits:
$ ulimit -s
8192
$ ulimit -s unlimited
$ /bin/sh -c 'ulimit -s'
unlimited
$ sudo /bin/sh -c 'ulimit -s'
8192
This is modified from Brad Spengler/PaX Team’s hard-coded setuid exec stack rlimit (8MB) in the last public patch of grsecurity/PaX based on my understanding of the code. Changes or omissions from the original code are mine and don’t reflect the original grsecurity/PaX code.
Certainly traditionally it has been trivially easy for a rogue daemon to bring a system to its knees… since traditionally, out of the box, there are no rlimits imposed.
It is the init system’s job to start daemons… it would be really nice if it imposed sane rlimits on anything it starts.
Systemd does that, and attempts to do it in sanish ways by imposing the limits on process groups. (I.e. a rogue daemon cannot escape its constraints by spawning a legion.)
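For illustration, a sketch of such limits in a unit file (the directive names are real systemd options; the service name and values here are made up):

```ini
[Service]
ExecStart=/usr/local/bin/mydaemon
# Classic per-process rlimit on open file descriptors:
LimitNOFILE=1024
# Cgroup-wide caps: forked children stay in the same accounting group,
# so a daemon can't escape its limits by spawning a legion.
TasksMax=64
MemoryMax=512M
```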
I would be easily convinced that systemd’s approach is not the best and/or not correct.
However I’m certain that the linux ecosystem needs work in this area and systemd is at least undertaking that work.
You seem to be mixing up system daemons and setuid utilities. This patch has nothing do with the limits systemd imposes on the processes it starts, so whatever systemd does, or other systems did not do, in this area is irrelevant.
There’s no question that systemd is capable of setting rlimits for child processes. The question is whether the limits systemd sets for itself are a good default template for setuid processes run by users.
I figured it was systemd, and I know there’s a TON of systemd hate floating around, but I didn’t realize just how rampant the freeping creaturism had become.
That’s really unfortunate, thanks for the clarifying comment.
[Comment removed by author]
What qualifies as “insane”?
This might sound trollish, but I seriously don’t find “sane/insane” distinctions very useful (beyond their connotations). In my experience, these words get used when you can’t construct a better argument for why the other side is doing something wrong, so you call yours “sane” and the others “insane”.
[Comment removed by author]
That sounds like a nice definition, but “without reason or logic” is just as fuzzy, and “counterproductive, destructive, harmful” are also easily stated, but must be followed by hard facts to hold up.
Also, regarding the principle of least surprise: Matz (who popularised it) also famously said that it applies to his surprise.
Your definition just moves the playing field.
Also, I would argue that anyone implementing a piece of software so central to the Linux world is a “domain expert”. This is boundary play at its finest.
For a specific example: How would you classify the change to kill tmux servers after a user logged out? Lots of people found that surprising. And in the larger space of existing init systems, quite unprecedented. I think “insane” is lacking precision, but adequately captures many people’s sentiment.
and to add to this response, I find it insane that the response was asking tmux to include a change for the new behavior systemd enforced.
Especially since it changes how every unix has behaved for almost 30-40 years regarding HUP.
Or a more recent one where it treats a username of “0haha” as invalid and runs the unit as root instead. And now perfectly valid usernames starting with a number won’t work in systemd unit files, as they get interpreted as invalid, because systemd seemingly can’t parse a config value sanely enough to distinguish a user name from a user id.
This all might sound like splitting hairs, but breaking userspace (tools shouldn’t need patches to cope with your init’s HUP behavior) and not parsing a username sanely are pretty basic things I would expect a first year undergrad to be able to do.
So yes, I agree insane is a good word to use for things. I could come up with hard facts, but systemd really feels like one step forward and two steps back for a lot of things. I don’t really feel like it’s a very good example of good engineering practices. E.g. binary logs that can be corrupted by a short write to the filesystem, forcing you to do insane things to get a system online, are… also insane; we have decades of knowledge of how to do this that has been ignored.
If the corners aren’t rounded on this desk, why should I feel safe about the rest of the desk?
In my experience, these words get used when you can’t construct a better argument for why the other side is doing something wrong, so you call yours “sane” and the others “insane”
“In your experience”, huh? So you’re just extrapolating your own personal experiences to all of mankind, then? “Citation needed!”
You see, it’s easy to filibuster any conversation by calling for better argumentation, proof, evidence, studies to back claims up and so on.
You know perfectly well why someone might call systemd “insane”. What’s your actual contribution to the conversation, besides signalling to everyone what a rational and sophisticated person you are?
“In your experience”, huh? So you’re just extrapolating your own personal experiences to all of mankind, then? “Citation needed!”
No, I don’t. That’s why I wrote: In my experience.
You know perfectly well why someone might call systemd “insane”.
No, I don’t. I use systemd every day and I’m very fine with how it works and how it behaves.
It has, as all implementations of a thing, issues and flaws, but that’s all. I’d be happy to try an alternative, which would for sure improve in a lot of areas (and may be worse in others), but that’s a trade-off, nothing more.
What’s your actual contribution to the conversation, besides signalling to everyone what a rational and sophisticated person you are?
I’m highlighting a conversational pattern that is all too often used to create unity where there is none. I’m neither rational nor sophisticated.
And vim tells you to type :quit<enter> when you press Ctrl-C.
(Why not simply exit when Ctrl-C is pressed? Because in vim Ctrl-C is used to abort an operation that’s taking too long, eg. a search over a gigantic file.)
I generally agree with this, but disagree with the analogy for set -e:
In all widely used general-purpose programming languages, an unhandled runtime error - whether that’s a thrown exception in Java, or a segmentation fault in C, or a syntax error in Python - immediately halts execution of the program; subsequent lines are not executed.
Segmentation faults will halt C programs, yes, but that’s a pretty unusual kind of error, an actual memory-protection violation. Most errors don’t halt C programs. Plenty of functions will fail by setting errno, returning an error value, and doing nothing else. If you don’t explicitly check the return value and bail, your program continues executing, exactly as you don’t want it to do in either C or bash. Using set -e in bash sets a mode where you error out if any of these functions fail, which C doesn’t support, i.e. a mode where if fopen fails, the program bails instead of continuing on to your loop where you now try to fread from the file that didn’t open (the fread will also fail, but again without erroring out).
For quick-and-dirty programs I actually wish C (or Go) had a version of this so I could write straightline code that doesn’t check any error returns, just treating all errors as fatal errors.
I personally like to use set -e to protect my critical scripts from misbehaving (e.g., overwriting backups with empty tarballs).
I do agree with you that set -e has its use, and shouldn’t be made automatic. Encountering an error doesn’t mean your whole script failed miserably. It could even be part of it:
echo starting backup
mountpoint -q /mnt/backup || { mount /mnt/backup || exit 1; }
rsync -az /home /mnt/backup
Setting -e in this case would prevent the script from recovering from an easy-to-solve error (drive not mounted). This is neither desirable, nor practical for debugging.
So really, don’t blindly use shell options, understand, and use them wisely!
#define CHECK(f) if(!(f)) { perror(#f); printf("%s:%d\n", __FILE__, __LINE__); exit(1); }
FILE *f = fopen("example", "rb");
CHECK(f);
CHECK(fread(buffer, 1, bytes, f) == bytes);
Macros for == 0, == value, not null, etc are of course easy to build.
CHECKV(bytes, fread(buffer, 1, bytes, f));
CHECKZ(fclose(f));
CHECKV could print v for extra debugging deliciousness. Would still have to remember which function returns what to indicate error though.
You can do that, yeah, but I’d prefer something I can just put once at the top, like #pragma ABORT_ON_ERROR, which the compiler or libc would implement.
That would be neat, but this is definitely the next best.
Too many people don’t realize that just because C macros aren’t all fancy and hygienic like lisp/rust macros, doesn’t mean they aren’t still incredibly useful.
I use something similar in a few places. I also at first use something like this at the quick-and-dirty stage:
if ((fd = open("foo", O_RDONLY)) < 0 ||
    (len = read(fd, buf, sizeof(buf))) < 0 ||
    close(fd) < 0) {
    perror(argv[0]);
    exit(-1);
}
It would be nice to have a C-to-C compiler that added this low-level exception handling: describe an error condition for every function, die on that error and display a nice error message, maybe with a stack trace.
Give the ability to catch those exceptions, but then they have to be caught somewhere in the caller.
It should be noted that it is not possible to check if a command fails if -e is enabled, e.g. check if a binary is available with hash <name>, because the whole script will fail immediately as soon as one command returns a non-zero exit status. Besides this, I always use set -uo pipefail, but I never had to use IFS=$'\n\t'.
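One workable pattern: a command tested directly as an if condition is exempt from set -e, so a failed availability check doesn’t abort the script (a minimal sketch):

```shell
#!/bin/sh
set -e
# The if-condition context suppresses set -e for the tested command:
if hash some-binary-that-does-not-exist 2>/dev/null; then
  echo "found"
else
  echo "not found"
fi
echo "still running"
```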
You’re right, I didn’t remember the issue correctly. But, you can’t save the return value of a failing command for later use, which is what I had tried to do in some script:
# ...
set -e
failing-command
x=$?
# ...
The real point I want to make is that set -e is sometimes/often not what I want.
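The usual workaround is to capture the status inside the same compound command, which a sketch makes clearer (failing_command here is a stand-in for whatever real command fails):

```shell
#!/bin/sh
set -e
status=0
failing_command() { return 3; }    # stand-in for the real failing command
failing_command || status=$?       # the || keeps set -e from aborting
echo "captured status: $status"
```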
I don’t know about Chrome, but modern Safari will actually pause unused tabs, or throw them out of memory, and won’t reload all your previously open tabs when restoring a session either.
but chrome does pause unused tabs. If your chrome doesn’t do it, you can use The Great Suspender plugin
[Comment removed by author]
“…up to date running on 64 bit Ubuntu 14.04 with an up to date LTS kernel currently has 882 tabs in 20 windows.”
If 882 isn’t a typo, then what are you doing with that many tabs? Making sure your SGI UV doesn’t have any bad RAM in a fun way? Generating data for the concurrency QA team at Mozilla? Squeezing out those last memory leaks? Hosting a cloud service that runs “multi-tenant, JavaScript VM’s?”
I think I maxed out at a dozen or two when doing research. Especially running through and sucking papers out of ACM and IEEE. ;)
I prefer x86 because it’s so much faster. My 2008 AMD desktop is faster than the 2016 Raspberry Pi 3.
Not a fair comparison. A current iPhone, or an ARM server board, is probably faster than your 2008 AMD desktop.
Your comment may have been made in good faith, but it sounds trolly. Obviously desktop chips plugged into a wall outlet will have a huge advantage over phone chips.
ARM vs Atom chips would be a more fair comparison.
And of course, ARM wins on performance/power.
As you scale up chip sizes and performance, power consumption is a draw. Decoding logic is a minuscule part of the chip.
- The process turns a request for binary DNS data into XML, feeds it into the systemd/dbus ecosystem, which turns it into binary DNS to send to the forwarder. The binary DNS answer then gets turned into XML, goes through systemd/dbus, then is turned back into binary DNS to feed back into glibc.
That’s certainly one way to do things.
It’s things like that which make me question if people understand that software is entirely man made and doesn’t need to be complicated. The Standard Model isn’t forcing XML on us.
Apropos, here is one [of many] great Henry Baker quotes:
Physicists, on the other hand, routinely decide deep questions about physical systems–e.g., they can talk intelligently about events that happened 15 billion years ago. Computer scientists retort that computer programs are more complex than physical systems. If this is true, then computer scientists should be embarrassed, considering the fact that computers and computer software are “cultural” objects–they are purely a product of man’s imagination, and may be changed as quickly as a man can change his mind. Could God be a better hacker than man?
Where does XML supposedly come in? D-Bus does not use XML for serialization.
Also the original announcement at https://lists.ubuntu.com/archives/ubuntu-devel/2016-May/039350.html says resolved does not require D-Bus.
I’ve thought about this some more. (As a small matter, the choice of serialization format wasn’t really the big wtf for me.) But it does illustrate systemd has an image problem. I’m willing to believe just about anything. Its detractors have certainly been hard at work, and they haven’t been entirely fair. But then Lennart “haha, fuck BSD and tmux too for good measure” has been a rather poor defender of his choices. Everything I’ve read by him leads me to conclude he doesn’t believe software can be too complicated, only not complicated enough. So presented with a claim that systemd does something extraneously silly, my default response is not to reject it.
Asking for evidence is exactly what one should do.
But then Lennart “haha, fuck BSD and tmux too for good measure” has been a rather poor defender of his choices.
He also has very poor attackers. Most of the criticism I read basically boils down to “everyone hates on systemd and believes it’s not POSIX”. (from our recent discussions, I’d happily exclude you there)
No one wants to engage with that crowd in a nuanced argument, lowering the quality of support and the quality of criticism at the same time.
This is also why I regularly call out non-complex arguments, because that is the road they lead down.
We happily use systemd in a lot of deployments and like it in practice. It works and is approachable to newcomers. Software and new software have bugs (also critical ones), so it doesn’t help to call out “systemd implemented a base service” - that’s the way the project works, deal with it. All of the components systemd now replaces will be replaced at some point.
Criticism must be phrased in terms of whether the pace is healthy or different approaches would work better or in platform-wide solutions lost along the way.
You have to break an egg to make an omelette, but there’s always the question what kind of omelette it should be.
According to this post on lwn:
is really as easy as it gets
But looking at the source, it is using lots of sd_bus_message* calls, so for something that doesn’t require D-Bus it seems to have a dependency problem…
To be fair, turning things into an internal representation for processing before serializing back into the original format is not at all uncommon.
This is true, and I expect this to be done especially when the original format is a binary blob. But there are better formats than XML! Especially if this is only used internally for processing, why not make it some kind of object? XML is rigid and prone to breakage, and is meant to be something barely amenable to both humans and machines. Seems extraneous here.
This is the part of Systemd’s approach to the world that really keeps me up at night.
So, anecdote: when Systemd decided it needed its own resolver (which already elicits the standard architectural “wat” from us old-fashioned UNIX folks), I said to my fellow sysadmin at the time, “I’m calling it: it’ll be vulnerable to cache poisoning.” Here I am, a lowly sysadmin, aware that a baseline acceptable caching resolver must include the measures outlined in RFC 5452. Of course, I expect to be wrong, on some level: no one is that careless.
Systemd comes along and bam, vulnerable. Even having predicted it, I was amazed.
I think it’s indicative of a broader attitude on the part of the Systemd developers: the idea that they’re smart enough to “go it alone,” that they don’t need to look at prior art, and that whatever has come before has nothing in particular to teach them.
It keeps me up at night as an operations person, because it’s like having to relive all of the great networked UNIX security failings in miniature, hoping against hope that one of them doesn’t make it to a stable distribution before being caught. I’m not particularly optimistic about it. And of course, it’s operations that Poettering keeps telling me he’s helping: no more init scripts, faster boots!
And here I was more worried about outmoded concepts like “system integrity” and “existing knowledge.”
This is the part of Systemd’s approach to the world that really keeps me up at night.
Systemd is a Red Hat product, built by Red Hat employees, to Red Hat’s specifications. So it isn’t systemd’s approach; it is Red Hat’s approach that keeps you up at night.
Operating system vendors do not make money by writing maintenance free software and giving it away. They make money by selling support contracts and training classes. There is nothing wrong with that, everyone has to make a living. Nor am I suggesting that OS vendors write maliciously bad software to sell support! One can’t support something that doesn’t work well enough to be deployed in the first place. Red Hat’s systemd project seems to be a good example of this.
Red Hat is an operating system vendor, like Microsoft, Apple, IBM, Sun, and DEC, and they should be expected to behave as such with the projects that they sponsor.
To expect otherwise is to deny their fundamental nature.
This argument also implies that they want the product to be as stable as possible: if you sell support, you want to keep the number of incidents to a minimum, because incidents are what cost you. Having the software under your control can be a fundamental part of that.
Also, when thinking about narratives to explain the whole thing, we shouldn’t forget that systemd wasn’t very welcome at Red Hat and was started by Lennart Poettering on his own time before being adopted.
Myth: systemd is a Red-Hat-only project, is private property of some smart-ass developers, who use it to push their views to the world. Not true. Currently, there are 16 hackers with commit powers to the systemd git tree. Of these 16 only six are employed by Red Hat. The 10 others are folks from ArchLinux, from Debian, from Intel, even from Canonical, Mandriva, Pantheon and a number of community folks with full commit rights.
(Quoted from http://0pointer.de/blog/projects/the-biggest-myths.html)
If systemd is merely a Red Hat tool to sell training, why have most other big distros (Debian, Ubuntu, Arch) adopted it? Ubuntu even had their own alternative they had worked on for years.
Red Hat has worked on lots and lots of open source projects: http://community.redhat.com/software/. That apparently wasn’t an issue before, but now that they have employees on a project people don’t like, it’s a symbol of how they naturally have to be like this?
Running your browser as a different user is an interesting challenge. Under normal circumstances I want to save my bank statement PDFs in my home directory. I want to upload my burrito pictures to Twitter. Slicing this off to a separate user is a significant usability setback.
Better ideas, like those already implemented in Chrome, which usually uses all the means of sandboxing a platform can provide? (Including seccomp syscall filtering on Linux.)
Running as a non-unique unprivileged user means that user can potentially access much more than was intended. If `nobody` is running both your web and database servers, a compromise in either is a compromise in both.
I think my issue with the OP is more the underlying semantics of the Unix model. A “user” is too heavyweight and coarse an abstraction for handling privilege separation, and carries along with it too much historical baggage. But nobody is doing capabilities, which are IMO the correct mechanism. One muddles along, I suppose.
Creating a unique UID for an operation could be a very clean separation of privs. How is a different UID too heavyweight? The coarseness is the point: it is an unequivocal separation between the main account and the account of the untrusted process.
Mount a FUSE interposer in the sandbox and all kinds of FS behaviors could be proxied through.
Unix users carry a lot of implicit assumptions about privilege with them. Have you ever tried to do complex access control with UID/GID permissions? It’s a nightmare.
In a world where the default model of computation involves a large number of actual humans proxied through Unix users logging into an 11/750 or a Sparcstation 20, maybe the Unix user model holds. In a world where 99.9999% of computers are single-user at all times, it’s way too heavy and ill-fitting an abstraction.
I was confused by the implication that OS X requires constant updates but OpenBSD only twice a year. AFAIK OpenBSD base system and packages do require updates more often but neither are (officially) provided in binary form ready to install.
If you want to try linked clones for faster machine creation, be aware it’s not enabled by default https://docs.vagrantup.com/v2/virtualbox/configuration.html
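For reference, it’s opt-in via the provider config; a minimal Vagrantfile sketch (the box name here is just an example, and if I recall correctly this needs Vagrant 1.8 or newer):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # example box
  config.vm.provider "virtualbox" do |vb|
    # Off by default: clone from a master VM snapshot
    # instead of re-importing the full box on each `vagrant up`.
    vb.linked_clone = true
  end
end
```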
I have to say, on the whole I feel systemd does Linux more good than harm. That is to say, Linux is better off with it than without it.
With that said, though, I can wholeheartedly understand why people don’t like it, and I must admit I too have decided to leave Linux for BSD, but not because of systemd.
For me it solves nothing about linux, but about the linux distribution space. Too many distros had way too shitty of init systems (openrc as mentioned in the blog here had MANY MANY more bugs opened against it than systemd), and made no solid efforts to make big changes because they are a distro, generally not a pile of C programmers. systemd has made it such that many distros now look & feel the same to me, which is very simplifying mentally. My workarounds for what make things crappy are easy to implement when I know I just need to run systemctl and add a few service drop-ins to /etc/systemd/system.
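To illustrate what such a workaround looks like (the unit name and paths here are made up): a drop-in overrides just the directives you specify, e.g.

```ini
# /etc/systemd/system/myapp.service.d/override.conf  (hypothetical unit)
[Service]
# An empty ExecStart= clears the packaged command before replacing it:
ExecStart=
ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
Restart=on-failure
```

followed by `systemctl daemon-reload` and a restart of the unit, and it works the same on every distro that ships systemd.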
Is systemd the best init? Nope, I hate most of it, and I really hate the lack of portability. We should write open source assuming that linux might go away, not such that linux must live forever. Note: I also dislike docker for similar reasons. Linux or die is a philosophy from a time when the UNIX market was mostly proprietary. Now it’s an obscenity thrown in the face of projects whose portability is largely responsible for the success of Linux.
Why should a Linux low level user land tool be portable? The various things in a BSD userland are certainly tied to BSD.
Why should a linux tool not be portable? Saying you don’t have time, or “patches welcome” is fine. But these projects that openly say they intend to reject patches to make them portable are offensive in the highest sense.
/sbin/init is not some “low level user land tool” that other systems don’t need, and the features they export all rely on unix processes and maybe some chroot / jail magic. The rest is depressingly the same, that’s why something could be implemented entirely in bash, which is praised in this article.
There is no technical reason for systemd to be Linux-only when IPC and cgroups are some of the least unique parts of Linux! If you can port something as big as a modern web browser (i.e. not a hypervisor, but comparable in size), then the compatibility code is possible to write.
For the same reasons he lays out in his article actually. Linux finally has some consistency across distributions. I think that was something Linux was sorely missing and one of the major points BSDers had over Linux.
Looks like the TextMate story is repeating itself. Another example of why a proprietary editor is not worth the hassle.
Maybe don’t jump head first into the next shiny thing developed by just one person. BBEdit has been around for over 20 years. Open source projects can also die, unless you want to become the developer of your editor.
Or just switch every six months, as the barrier to entry/vendor lock-in is almost nil for most editors. I’ve tried Notepad++, gedit, Geany and vi in the last few months. Not to mention still using Visual Studio at home.
Depends on the editor. I believe Vim/Emacs/Spacemacs have a quite high barrier, consisting of both idiosyncratic ways to use the editors and a huge selection of plugins that you might depend upon.
On the contrary, TextMate v1 was pretty stable (with a couple of known issues that had plugins to fix them) for years. Just because it wasn’t getting new releases didn’t mean it was dead/unusable.
That said, TextMate 2 was still under development which is also happening in the open now.
I have no complaints about TM2; I’ve installed it for people who need a simple editor for programming. But it seems to have lost all mindshare now, first to ST and now to Atom. Maybe it was open sourced too late to have a bigger impact.
But would that stand in the way of an open source release without any advertising of the sideloading option?
I agree with you. And I think one of Apple’s main concerns may also have been that they were using the sideload option to install a binary blob…
There are open source applications on the App Store, like VLC, so apparently they are not taking issue with that.
I thought this was exactly the sort of situation for which the “developer ID” certs were created - independent distribution so that Apple isn’t the sole gatekeeper of what software people can use. What exactly is Apple’s position here? That it’s not okay to distribute apps outside the store, in general? That apps not on the store still have to abide by the store’s policy? Is this actually part of their terms, or are they just invoking a “we’ll kick you out of the developer program” thing?
By “exactly” what developer ID is for, wouldn’t that be developing and testing your own app? “Side loading” of somebody else’s app seems like an end run around the rules. (No argument about the rightness of the rules.)
It may well be that they always did talk about it in those terms. That isn’t how I understood it, but…
Developer ID is only a Mac thing: https://developer.apple.com/developer-id/
The iOS thing is different. From https://developer.apple.com/library/prerelease/ios/documentation/DeveloperTools/Conceptual/WhatsNewXcode/Articles/xcode_7_0.html:
Now everyone can run and test their own app on a device—for free. You can run and debug your own creations on a Mac, iPhone, iPad, iPod touch, or Apple Watch without any fees, and no programs to join.
Seems to clearly say you are supposed to run “your own creations”.
I’m so happy to read about the progress on this, we need this on linux though! Who is interested in developing this?
I’d assume that the answer you’d get if you ask a Linux kernel developer is “use SELinux or AppArmor”.
But if we’re wishing for arbitrary things, then I’d also like for strlcpy, strlcat and the arc4random family to be an actual part of POSIX and subsequently adopted by glibc/Linux.
Or Smack for those favoring simplicity. Looking at the link, I just found out it got big in automotive Linux, too.
The closest equivalent on Linux are probably the systemd filesystem sandboxing options: https://www.freedesktop.org/software/systemd/man/systemd.exec.html#ReadWritePaths=
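For example, a few of those options in a unit file (the directive names are real, per systemd.exec(5); the service path is made up, and ReadWritePaths= needs a reasonably recent systemd — older versions spelled it ReadWriteDirectories=):

```ini
[Service]
ProtectSystem=strict          # file system is read-only for this service...
ReadWritePaths=/var/lib/myapp # ...except this explicitly writable path
ProtectHome=yes               # /home, /root and /run/user appear empty
PrivateTmp=yes                # private /tmp and /var/tmp
```

It’s not pledge — the process can’t shrink its own privileges mid-run — but declaratively it covers a lot of the same filesystem ground.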
I think it’d be easier to make OpenBSD as good as Linux. I’m not sure what it’s missing, though. Momentum, I guess.
Culture and priorities are different. Linux specifically targets what brings in mainstream and corporate audiences. OpenBSD explicitly rejects a lot of that in favor of simplicity, UNIX-likeness, and quality/security. Lastly, there are more attempts to sell Linux-based systems that generate revenue that can fund development.