We don’t want to get submissions for every CVE and, if we do get CVEs, we probably want them tagged security.
while I agree with you in this case, I don’t particularly like the “I speak for everyone” stance you seem to be taking here.
This one is somewhat notable for being the first (?) RCE in Rust, a very safety-focused language. However, the CVE entry itself is almost useless, and the previously-linked blog post (mentioned by @Freaky) is a much better article to link and discuss.
Second. There was a security vulnerability affecting rustdoc plugins.
Do you think an additional CVE tag would make sense? Given the upvotes, some people seem to be interested.
Yeah, I’d rather not have them at all. Maybe a detailed, technical write-up of the discovery, implementation, and mitigation of a new class of vulnerability with wide impact. Meltdown/Spectre or Return-oriented Programming are examples. Then we’d see only the deep stuff here, with vulnerability-listing sites covering the routine entries for the people who need them.
There are a lot of potentially-RCE bugs (type confusion, use after free, buffer overflow write); if there were a lobsters thread for each of them, there’d be no room for anything else.
Here’s a short list from the past year or two, from one source: https://bugs.chromium.org/p/oss-fuzz/issues/list?can=1&q=Type%3DBug-Security+label%3AStability-Memory-AddressSanitizer&sort=-modified&colspec=ID+Type+Component+Status+Library+Reported+Owner+Summary+Modified&cells=ids
I’m fully aware of that. What I was commenting on was Rust having one of these RCE-type bugs, which, to me, is worthy of discussion. I think it’s weird to police these like they’re some kind of existential threat to the community, especially given how much enlightenment can be gained by discussing their individual circumstances.
But that’s not Rust, the perfect language that is supposed to save the world from security vulnerabilities.
Rust is not and never claimed to be perfect. On the other hand, Rust is and claims to be better than C++ with respect to security vulnerabilities.
It claims a few things - from the rust-lang website:
Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.
None of those claims are really true.
It’s clearly not fast enough if you need unsafe to get real performance - which is the reason this CVE was possible.
It’s clearly not preventing segfaults - which this CVE shows.
It also can’t prevent deadlocks so it is not guaranteeing thread safety.
I like rustlang but the claims it makes are mostly incorrect or overblown.
Unsafe Rust is part of Rust. I grant you that “safe Rust is blazingly fast” may not be “really true”.
Rust prevents segfaults. It just does not prevent all segfaults. For example, a DOM fuzzer was run on Chrome and Firefox and found segfaults, but the same fuzzer run for the same time on Servo found none.
I grant you the point on deadlocks. But “Rust prevents data races” is true.
I’m just going to link my previous commentary: https://lobste.rs/s/7b0gab/how_rust_s_standard_library_was#c_njpoza
Good talk.
I recently used systemd “in anger” for the first time on a raspi device to orchestrate several scripts and services, and I was pleasantly surprised (but also not surprised, because the FUD crowd is becoming more and more fingerprintable to me). systemd gives me lifecycle, logging, error handling, and structure, declaratively. It turns out structure and constraints are really useful; this is also why Go has fast dependency resolution.
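For a concrete sense of that declarativeness, here’s roughly the shape of one of those units (the service name, script path, and targets are invented for illustration):

```
# /etc/systemd/system/sensor-poll.service  (hypothetical example)
[Unit]
Description=Poll the attached sensor and log readings
After=network-online.target

[Service]
ExecStart=/usr/local/bin/poll-sensor.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Restart policy, ordering, and journald logging all come for free from those few lines; the equivalent shell-script supervision is what I used to write by hand.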
It violates unix philosophy
That accusation was also made against neovim. The people muttering this stuff are slashdot markov chains, they don’t have any idea what they’re talking about.
The declarative units are definitely a plus. No question.
I was anti-systemd when it started gaining popularity, because of the approach (basically kitchen-sinking a lot of *NIX stuff into a single project) and the way the project leader(s) respond to criticism.
I’ve used it since it was default in Debian, and the technical benefits are very measurable.
That doesn’t mean the complaints against it are irrelevant though - it does break the Unix philosophy I think most people are referring to:
Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.
If you believe composability (one program’s output is another program’s input) is an important part of The Unix Philosophy, then ls violates it all day long, always has, likely always will. ls also violates it by providing multiple ways to sort its output, when sort is right there, already doing that job. Arguably, ls formatting its output is a violation of Do One Thing, because awk and printf exist, all ready to turn neat columns into human-friendly text. My point is, The Unix Philosophy isn’t set in stone, and never has been.
Didn’t ls predate the Unix Philosophy? There’s a lot of crufty history in unix. dd is another example.
None of that invalidates the philosophy that arose through an extended design exploration and process.
nobody said it’s set in stone; it’s a set of principles to be applied based on practicality. like any design principle, it can be applied beyond usefulness. some remarks:
i don’t see where ls violates composability. the -l format was specifically designed to be easy to grep.
People have written web pages on why parsing the output of ls is a bad idea. Using ls -l doesn’t solve any of these problems.
As a matter of fact, the coreutils people have this to say about parsing the output of ls:
However ls is really a tool for direct consumption by a human, and in that case further processing is less useful. For further processing, find(1) is more suited.
Moving on…
the sorting options are an example of practicality. they don’t require a lot of code, and would be much more clumsy to implement as a script (specifically when you don’t output the fields you’re sorting on)
This cuts closer to the point of what we’re saying, but here I also have to defend my half-baked design for a True Unix-y ls Program: It would always output all the data, one line per file, with filenames quoted and otherwise prepared such that they always stick to one column of one line, with things like tab characters replaced by \t and newline characters replaced by \n and so on. Therefore, the formatting and sorting programs always have all the information.
But, as I said, always piping the output of my ls into some other script would be clumsier; it would ultimately result in some “human-friendly ls” which has multiple possible pipelines prepared for you, selectable with command-line options, so the end result looks a lot like modern ls.
about formatting, i assume you’re referring to multi-column output, which to my knowledge was not in any version of ls released by Bell Labs. checking whether stdout is a terminal is indeed an ugly violation.
I agree that ls shouldn’t check for a tty, but I’m not entirely convinced no program should.
just because some people discourage composing ls with other programs doesn’t mean it’s not the unix way. some people value the unix philosophy and some don’t, and it’s not surprising that those who write GNU software and maintain wikis for GNU software are in the latter camp.
your proposal for a decomposed ls sounds more unixy in some ways. but there are still practical reasons not to do it, such as performance and not cluttering the standard command lexicon with ls variants (plan 9 has ls and lc; maybe adding lt, lr, lu, etc. would be too many names just for listing files). it’s a subtle point in unix philosophy to know when departing from one principle is better for the overall simplicity of the system.
With all due respect[1], did your own comment hit your fingerprint detector? Because it should. It’s extrapolating wildly from one personal anecdote[2], and insulting a broad category of people without showing any actual examples[3]. Calling people “markov chains” is fun in the instant you write it, but contributes to the general sludge of ad hominem dehumanization. All your upvoters should be ashamed.
[1] SystemD arouses strong passions, and I don’t want this thread to devolve. I’m pointing out that you’re starting it off on the wrong foot. But I’m done here and won’t be responding to any more name-calling.
[2] Because God knows, there’s tons of badly designed software out there that has given people great experiences in the short term. Design usually matters in the long term. Using something for the first time is unlikely to tell you anything beyond that somebody peephole-optimized the UX. UX is certainly important, rare and useful in its own right. But it’s a distinct activity.
[3] I’d particularly appreciate a link to NeoVim criticism for being anti-Unix. Were they similarly criticizing Vim?
Yes, when VIM incorporated a terminal. Which is explicitly against its design goals. From the VIM 7.4 :help design-not
VIM IS... NOT *design-not*
- Vim is not a shell or an Operating System. You will not be able to run a
shell inside Vim or use it to control a debugger. This should work the
other way around: Use Vim as a component from a shell or in an IDE.
A satirical way to say this: "Unlike Emacs, Vim does not attempt to include
everything but the kitchen sink, but some people say that you can clean one
with it. ;-)"
Neo-VIM appears to acknowledge their departure from VIM’s initial design as their :help design-not has been trimmed and only reads:
NVIM IS... NOT design-not
Nvim is not an operating system; instead it should be composed with other
tools or hosted as a component. Marvim once said: "Unlike Emacs, Nvim does not
include the kitchen sink... but it's good for plumbing."
Now as a primarily Emacs user I see nothing wrong with not following the UNIX philosophy, but it is clear that NeoVIM has pushed away from that direction. And because that direction was against their initial design, it is reasonable for users that liked the initial design to criticize NeoVIM for moving further away from the UNIX philosophy.
Not that VIM hadn’t already become something more than ‘just edit text’; take quickfix for example. A better example of how an editor can solve the same problem by adhering to the Unix Philosophy of composition through text processing would be Acme. Check out Acme’s alternative to quickfix: https://youtu.be/dP1xVpMPn8M?t=551
akkartik, which part of my comment did you identify with? :) FWIW, I’m fond of you personally.
I’d particularly appreciate a link to NeoVim criticism for being anti-Unix
Every single Hacker News thread about Neovim.
Were they similarly criticizing Vim?
Not until I reply as such - and the response is hemming and hawing.
To be fair I don’t think the hacker news hive mind is a good judge of anything besides what is currently flavour of the week.
Just yesterday I had a comment not just downvoted but flagged and hidden-by-default, because I suggested Electron is a worse option than a web app.
HN is basically twitter on Opposite Day: far too happy to remove any idea even vaguely outside what the group considers “acceptable”.
Indeed, I appreciate your comments as well in general. I wasn’t personally insulted, FWIW. But this is precisely the sort of thing I’m talking about, the assumption that someone pushing back must have their identity wrapped up in the subject. Does our community a disservice.
OTOH, I spent way too much of my life taking the FUD seriously. The mantra-parroting drive-by comments that are common in much of the anti-systemd and anti-foo threads should be pushed back on, not given a thoughtful audience.
https://news.ycombinator.com/item?id=7289935
The old Unix ways are dying… … Vim is, in the spirit of Unix, a single purpose tool: it edits text.
https://news.ycombinator.com/item?id=10412860
thinks that anything that is too old clearly has some damage and its no longer good technology, like the neovim crowd
Also just search for “vim unix philosophy” you’ll invariably find tons of imaginary nonsense:
Please don’t make me search /r/vim :D
thinks that anything that is too old clearly has some damage and its no longer good technology, like the neovim crowd
That’s not saying that neovim is ‘anti-Unix philosophy’, it’s saying that neovim is an example of a general pattern of people rewriting and redesigning old things that work perfectly well on the basis that there must be something wrong with anything that’s old.
Which is indeed a general pattern.
That’s not saying that neovim is ‘anti-Unix philosophy’
It’s an example of (unfounded) fear, uncertainty, and doubt.
rewriting and redesigning old things that work perfectly well on the basis that there must be something wrong with anything that’s old.
That’s a problem that exists, but attaching it to project X out of habit, without justification, is the pattern I’m complaining about. In Neovim’s case it’s completely unfounded and doesn’t even make sense.
It’s not unfounded. It’s pretty obvious that many of the people advocating neovim are doing so precisely because they think ‘new’ and ‘modern’ are things that precisely measure the quality of software. They’re the same people that change which Javascript framework they’re using every 6 weeks. They’re not a stereotype, they’re actual human beings that actually hold these views.
Partial rewrite is one of the fastest ways to hand off software maintainership, though. And vim needed broader maintainer / developer community.
Vim’s maintainer/developer community is more than sufficient. It’s a highly extensible text editor. Virtually anything can be done with plugins. You don’t need core editor changes very often if at all, especially now that the async stuff is in there.
You don’t need core editor changes very often if at all, especially now that the async stuff is in there.
Which required pressure from NeoVim, if I understood the situation correctly. Vim is basically a one-man show.
Thanks :) My attitude is to skip past crap drive-by comments as beneath notice (or linking). But I interpreted you to be saying FUD (about SystemD) that you ended up taking seriously? Any of those would be interesting to see if you happen to have them handy, but no worries if not.
Glad to have you back in the pro-Neovim (which is not necessarily anti-Vim) camp!
What is FUD is this sort of comment: the classic combination of comparing systemd to the worst possible alternative instead of the best actual alternative, combined with basically claiming everyone who disagrees with you is a ‘slashdot markov chain’ or similar idiotic crap.
On the first point, there are lots of alternatives to sysvinit that aren’t systemd. Lots and lots and lots. Some of them are crap, some are great. systemd doesn’t have a right to be compared only to what it replaced, but also all the other things that could have replaced sysvinit.
On the second point, it’s just bloody rude. But it also shows you don’t really understand what people are saying. ‘I think [xyz] violates the unix philosophy’ is not meaningless. People aren’t saying it for fun. They’re saying it because they think it’s true, and that it’s a bad thing. If you don’t have a good argument for why the Unix philosophy doesn’t matter, or you think systemd doesn’t actually violate it, please go ahead and explain that. But I’ve never actually seen either of those arguments. The response to ‘it violates the Unix philosophy’ is always just ‘shut up slashdotter’. Same kind of comment you get when you say anything that goes against the proggit/hn hivemind that has now decided, amongst other things, that: microsoft is amazing, google is horrible, MIT-style licenses are perfect, GPL-style licenses are the devil incarnate, statically typed languages are perfect, dynamically typed languages are evil, wayland is wonderful, x11 is terrible, etc.
claiming everyone that disagrees with you is a ‘slashdot markov chain’ or similar idiotic crap
My claim is about the thoughtless shoveling of groundless rumors. Also I don’t think my quip was idiotic.
there are lots of alternatives to sysvinit that aren’t systemd
That’s fine, I never disparaged alternatives. I said: systemd is good and I’m annoyed that the grumblers said it wasn’t.
It’s not good though, for all the reasons that have been said. ‘Better than what you had before’ and ‘good’ aren’t the same thing.
seriously. If you don’t like systemd, use something else and promote its benefits. Tired of all the talking down of systemd. It made my life so much easier.
seriously. If you like systemd, use it and shut up about it. Tired of all the talking up of systemd as if it’s actually any better than its alternatives, when it is objectively worse, and is poorly managed by nasty people.
Have you watched the video this thread is about? Because you really sound like the kind of dogmatist the presenter is talking about.
If you like systemd, use it and shut up about it
Also, isn’t this a double standard? When it comes to complaining about systemd, this attitude doesn’t seem that prevalent.
No, because no other tool threatens the ecosystem like systemd does.
Analogy: it wasn’t a double-standard 10 years ago to complain about Windows and say ‘if you like Windows, use it and shut up about it’.
I see this kind of vague criticism a lot when it comes to systemd. What ecosystem is it really breaking? It’s all still open source; there aren’t any proprietary protocols or corporate patents that prevent people from modifying the software to not have to rely on systemd. This “threat”, the way I see it, has turned out to be at most a “minor inconvenience”.
I suppose you’re thinking about examples like GNOME, but on the one hand, GNOME isn’t a unix-dogmatist project; they aim to create an integrated desktop experience, consciously trading away ideal modularity - and on the other, projects like OpenBSD have managed to strip out what required systemd and have a working desktop environment. Most other examples I know of follow a similar pattern.
I think that the problem is fanboyism, echo chambers and ideologies.
I might be wrong, so please don’t consider this an accusation. But you writing this sounds like someone hearing that systemd is bad, therefore never looking at it, yet copying that opinion. Then one tries it and finds out that the baseless prejudices were in fact baseless.
After that, the assumption is that everyone else must have been doing the same, and one is now enlightened to see it’s actually really cool.
I think that this group behavior and blindly copying opinions is one of the worst things in IT these days, even though of course it’s not limited to this field.
A lot of people criticizing systemd actually looked at systemd, really deep, maybe even built stuff on it, or at least worked with it in production as sysadmin/devop/sre/…
Yes, I have used systemd; yes, I understand why decisions were taken, where the authors of the software were going, read the specs of the various parts (journald for example), etc.
I think I have a pretty good understanding compared to at least most people that only saw it from a user’s perspective (considering writing unit files to be part of the user’s perspective as well).
So I could write about that in my CV and be happy that I can answer a lot of questions regarding systemd, advocate its usage to create more demand and be happy.
To sum it up: I still consider systemd to be bad on multiple layers, both the implementation and some ideas that I considered great but, through using it, found to rest on wrong assumptions. By the way, that’s the thing I would not blame anyone for. It’s good that stuff gets tried; that’s how research works. It’s not the first and not the last project that will come out sounding good, only for a lot of things to turn out to either not make a difference or make things worse.
I am a critic of systemd but I agree that there’s a lot of FUD as well. Especially when there are people that blame everything, including their own incompetence, on systemd. Nobody should ever expect a new project to be a magic bullet. That’s just dumb, and I would never blame systemd for trying a different approach or for not being perfect. However I think it has problems on many levels. While I think the implementation isn’t really good, that’s something that can be fixed. However I think some parts at the concept level are either pretty bad or have turned out to be bad decisions.
I was very aware that especially in the beginning the implementation was bad. A lot got better; that’s to be expected. However, next to various design decisions I consider bad, I think many more were based on ideas that sound good and reasonable to most people in IT, but that in the specific scenarios systemd is used in, at least in my experience, do not work out at all or only work in very basic cases.
In other words, systemd really shines in the cases where other solutions work maybe not optimally, but aren’t considered a problem worth fixing because the added complexity isn’t worth it. However, when something is more complex, I think using systemd frequently turns out to be an even worse solution.
While I don’t wanna go into detail, because I don’t think this is the right format for an actual analysis, I think systemd has a lot in common here with both configuration management and JavaScript frameworks. They tend to be amazing for use cases that are simple (todo applications for example), but together with various other complexities they often make stuff unnecessarily complicated.
And just like with JavaScript frameworks and configuration management there’s a lot of FUD, ideologies, echo chambers, following the opinion of some thought leaders, and very little building your own solid opinion.
Long story short: if you criticize something without knowing what it is about, then yes, that’s dumb and likely FUD. However, assuming that’s the only possible reason for someone criticizing software is similarly dumb, and often FUD about that opinion in turn.
This by the way also works the reverse. I frequently see people liking software and echoing favorable statements for the same reasons. Not understanding what they say, just copying sentences of opinion leaders, etc.
It’s the same pattern, just the reversal, positive instead of negative.
The problem isn’t someone disliking or liking something, but that opinions and thoughts are repeated without understanding, which makes it hard to have discussions and arguments that give both sides any valuable insights or learnings.
Then things also get personal. People hate on Poettering and think he is dumb, and Poettering thinks every critic is dumb. Just because that’s a lot of what you see when every statement is blindly echoed.
That’s nice, but the implication of the anti-systemd chorus was that sys v init was good enough. Not all of these other “reasonable objections” that people are breathless to mention.
The timbre reminded me of people who say autotools is preferable to cmake. People making a lot of noise about irrelevant details and ignoring the net gain.
But you writing this sounds like someone hearing that systemd is bad, therefore never looking at it, yet copying that opinion.
No, I’m reacting to the idea that the systemd controversy took up any space in my mind at all. It’s good software. It doesn’t matter if X or Y is technically better, the popular narrative was that systemd is a negative thing, a net-loss.
In your opinion it’s good software and you summed up the “anti-systemd camp” with “sys v init was good enough” even though people from said “anti-systemd camp” on this very thread disagreed that that was their point.
To give you an entirely different point of view, I’m surprised you don’t want to know anything about a key piece of flagship server operating systems (granting that a distro is technically an OS), affecting the entire ecosystem and unrelated OSes (BSDs etc.), that majorly affects administration and development on Linux-based systems. Especially when people have said there are clear technical reasons for disliking the major change and forced compliance with “the new way”.
you summed up the “anti-systemd camp” with “sys v init was good enough” even though people from said “anti-systemd camp” on this very thread disagreed that that was their point.
Even in this very thread no one has actually named a preferred alternative. I suspect they don’t want to be dragged into a discussion of details :)
affecting the entire ecosystem and unrelated OSes (BSDs etc.)
BSDs would be a great forum for demonstrating the alternatives to systemd.
Well, considering how many features that suite of software has picked up, there isn’t currently one so that shortens the conversation :)
launchd is sort of a UNIX alternative too, but it’s currently running only on macOS and it recently went closed source.
It violates unix philosophy
That accusation was also made against neovim. The people muttering this stuff are slashdot markov chains, they don’t have any idea what they’re talking about.
i don’t follow your reasoning. why is it relevant that people also think neovim violates the unix philosophy? are you saying that neovim conforms to the unix philosophy, and therefore people who say it doesn’t must not know what they’re talking about?
are you saying that neovim conforms to the unix philosophy, and therefore people who say it doesn’t must not know what they’re talking about?
When the implication is that Vim better aligns with the unix philosophy, yes, anyone who avers that doesn’t know what they’re talking about. “Unix philosophy” was never a goal of Vim (”:help design-not” was strongly worded to that effect until last year, but it was never true anyways) and shows a deep lack of familiarity with Vim’s features.
Some people likewise speak of a mythical “Vim way” which again means basically nothing. But that’s a different topic.
vim does have fewer features which can be handled by other tools though right? not that vim is particularly unixy, but we’re talking degrees
The people muttering this stuff are slashdot markov chains, they don’t have any idea what they’re talking about
I’ll bookmark this comment just for this description.
I dislike CloudFlare because they’re making the internet more centralized (the more small websites use them as a proxy, the less direct connections to small websites are made) and because of some infamous abuse handling incidents, but I would trust them 100000% more than my local ISP.
The local ISP knows where I live, the local ISP has to comply with local laws, the local ISP has monitoring installed by the local equivalent of the NSA. The local ISP didn’t even promise any privacy at all, which is worse than CloudFlare’s privacy policy for this resolver.
“the local ISP has monitoring installed by the local equivalent of the NSA”
You should assume Cloudflare does, too. They are a venture-funded, for-profit company operating in a surveillance state in an ideal position to do surveillance. The NSA/FBI also pays or coerces compliance per the Core Secrets leaks. The real question to determine if they won’t cooperate with the NSA is: “Will they turn down $30-$100+ million, go bankrupt, and/or go to prison for me?” If not, then they’ll likely cooperate. The cooperation also always mandates they lie about cooperating. They can promise government-proof anything while relaying data to the government.
Key word being local. If you live in a country that’s not very friendly to the US, it’s better to have NSA surveillance than local surveillance :)
Excellent point! I argued something similar in essay on using multiple, non-cooperative jurisdictions for security. :)
Couldn’t the opposite be just as true? If you live in a country that’s not friendly enough to the US, it may also be better to have local surveillance than NSA surveillance. If I know my government is out for my data, can’t easily access the stuff the US has, and isn’t sophisticated enough to upstream crypto algorithms into the Linux kernel or tap into underwater fibre cables, I’d pick local any day.
edit: plural
That’s true. However, your local ISP will still know where you connect. It will still see how much and if it’s unencrypted what you send/receive.
CloudFlare, being a big target, has to comply with some other countries’ laws; as a US company it has to comply with NSLs, which might or might not exist in your local country. CloudFlare, being a big company, might also comply with other countries’ laws - maybe not small ones, but look at the list of companies that comply with China, etc.
Also this is actually not about your ISP vs CloudFlare. It’s about whatever you have configured vs CloudFlare. If Firefox starts making HTTPS requests to CF when you, as a system administrator, expect DNS requests, you might even miss them.
I think the problem is not that Firefox allows this, but that it’s skipping your system-wide configuration, without asking. After all I can already use CloudFlare’s DNS servers if I want to do so.
And then: CloudFlare makes its money by selling CDN features (including analytics, etc.) to companies, while my ISP makes money by selling internet to me. If your ISP doesn’t promise any privacy (or has no privacy policy, as you make it sound like) maybe consider switching your ISP.
The main point however is: I don’t think “overwriting” things like resolving hostnames is something an application should do, unless it’s asking or by design made to do so. In this case it’s not.
By default it will skip whatever you, your system administrator, etc. might have done to secure you.
It’s totally fine you trust CloudFlare more than your ISP/your local setup, but I don’t think it’s fine if a piece of software dictates and overwrites whom you trust silently, when you might already have consciously chosen someone else you trust.
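For reference, the behavior in question is controlled by Firefox’s network.trr.* preferences (values as I understand them - double-check against current Mozilla documentation before relying on this):

```
// about:config preferences governing DNS-over-HTTPS ("TRR") in Firefox
network.trr.mode = 2    // 0: off, 2: DoH first with OS-resolver fallback,
                        // 3: DoH only, 5: explicitly disabled by the user
network.trr.uri  = "https://mozilla.cloudflare-dns.com/dns-query"
```

So a user or admin who has consciously chosen a resolver can pin mode 5, but the point stands that silently defaulting past the system configuration is the objectionable part.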
If your ISP doesn’t promise any privacy (or has no privacy policy, as you make it sound like) maybe consider switching your ISP.
In most of the US, that isn’t feasible. Most places have at most two residential broadband providers: the phone company (typically AT&T), and the cable company (either Comcast or Spectrum, depending on location). And not counting MVNOs, there are, what, four mobile broadband providers?
I do basically agree with you that this may skip what your local sysadmin has done to secure you. But it’s making the trade-off that most people do not have a local sysadmin doing anything to secure them, and will never opt in to anything to secure themselves.
From my experience there are two kinds of groups: one caring only about the potential for mitigations and the “tactics” laid out in the article, and the other focusing only on the existence of vulnerabilities.
With “existence of vulnerability” I don’t mean any statistics about how many vulnerabilities have been found, but limiting factors, like reducing code size, proofs, etc.
On both sides there seem to be standard arguments about the other, which often hold true, at least at their core. So for example that even with very good mitigations, you should still “fix your bugs”.
I really enjoyed the comparison with economics. There are a lot of parallels. There are trade-offs, but also the point that one should not forget things are tools and not religions or ideologies, and to not make extreme arguments that might even contain a lot of truth but completely disregard the reality we live in, or the reason why security engineering/economics exists in the first place.
To not get too political: on the security side you can go to the extreme of security by not having a service at all, or, slightly less extreme, no network connection or a service never started. The most secure bank is probably one without a network connection or assets - or a bank that isn’t a bank. However, the reason people care about banking security is because they want banking in the first place. It’s not an end in itself. This is why I see it matching with (financial) economics.
In a way this is what for example makes OpenBSD so interesting. It’s secure, but it’s also a general purpose operating system existing in real life, running your favorite browser, which is in stark contrast to various highly secure operating systems that never manage to find their way out of academia.
I know where the author is heading, but some browsers building with one compiler doesn’t strike me as a monoculture. Not too long ago “everyone” (with few exceptions) using or programming for Linux was using GCC and glibc. Now people use clang, gcc and probably others (icc, etc.). So things have actually become a lot less of a monoculture, probably mostly thanks to the effort of BSD and macOS users and developers making sure that software doesn’t only work with GCC.
Yes, it’s at least Mozilla and Chrome now using clang, but these are neither the only browsers, nor is it uncommon for big projects to focus mostly on a defined set of tools.
It’s just a guess, but I also think that it will not suddenly become a huge undertaking to try to compile Firefox with another compiler. For the Rust parts maybe, but it’s already like that.
Not to say it’s a good thing, but there are of course up- and downsides. For such a big project, especially one already using said implementation, helping to develop it makes a lot more sense than in various other cases, where you often have only one supported version of GCC. People using source-based approaches to install packages probably know this: compiling some version of some compiler, maybe taking hours, just to build a little piece of software that absolutely requires it.
Other than that, even if Mozilla now uses one compiler across platforms, I hope they won’t start “ruling out” compilation with other compilers, or rejecting the few lines of code needed to keep or establish compatibility. At least from the article it sounds like that could happen.
It makes me really appreciate the projects that require only a C89/C99-compliant compiler, like SQLite and Lua. Admittedly their dependencies are also minimal (they only require the C standard library, IIRC), but it sure is nice.
I know where the author is heading, but some browsers building with one compiler doesn’t strike me as a monoculture.
So, even for a “toy” project, we used to build against:
And we would’ve built on an Alpha if we had one lying around; it helps reveal the really thorny issues.
The thing is, not using multiple compilers (and architectures!) helps hide bugs.
Completely agree, but it’s still not unusual for projects to use one compiler for their official releases.
It’s just a guess, but I also think that it will not suddenly become a huge undertaking to try to compile Firefox with another compiler.
Suddenly? No, but I’m afraid that sooner or later having both clang and Rust will become required for any platform that wants to ship Firefox. Which is a shame, since Mozilla’s mission is:
Our mission is to ensure the Internet is a global public resource, open and accessible to all.
Suddenly? No, but I’m afraid that sooner or later having both clang and Rust will become required for any platform that wants to ship Firefox. Which is a shame, since Mozilla’s mission is:
That’s already the case. Stylo needs clang to build. You can build some parts with GCC though.
I agree. Sadly, the browser already has very big differences in platform support even without this: for example WebRTC (the multimedia part), sandboxing capabilities, etc. But then, of course, supporting that on many platforms isn’t easy. It would be great if that mission led to a focus on more than just Windows, Linux, and macOS.
Maybe someone has more insight, but something that makes me wonder a lot about how things work internally at Mozilla is that there are quite a few bug reports with ready-to-integrate patches remaining unanswered, often for years, yet there are often changes that completely surprise users, some of them very far away from Mozilla’s stated mission.
While I get that not all the people working for Mozilla work in all areas, it seems like there is a problem on the “accepting and integrating contributions” side of things. For a foundation asking for monetary contributions, it’s often a bad sign when contributions in the form of work are not taken care of. I hope Mozilla can fix this, so contributors don’t get too frustrated.
So things have actually become a lot less of a monoculture, probably mostly thanks to the effort of BSD and macOS users and developers making sure that software doesn’t only work with GCC.
Not to belittle the work of BSD people, but a lot of Clang portability work was done by Debian before the BSDs decided on Clang. https://clang.debian.net/ goes back to Clang 2.9.
FreeBSD initially imported Clang at revision r72732 into the tree June 2nd 2009:
https://svnweb.freebsd.org/base?view=revision&revision=193323 https://llvm.org/viewvc/llvm-project/?pathrev=72732
This was long before FreeBSD 9.0-RELEASE (January 2012).
The public documentation of the effort starts back in February of 2009:
https://wiki.freebsd.org/action/recall/BuildingFreeBSDWithClang?action=recall&rev=2
As of June 2009, Clang was at version 2.5. Version 2.6 didn’t happen until October 2009.
http://lists.llvm.org/pipermail/llvm-announce/2009-March/000031.html http://lists.llvm.org/pipermail/llvm-announce/2009-October/000033.html
So this means the devs were working with the devel/llvm-devel FreeBSD port, which would have been based on HEAD or slightly newer than Clang 2.4.
So I’m not sure that I believe the story that Debian was that invested in LLVM/Clang before FreeBSD was. There was no reason to; the Linux kernel had so many GCC-isms to overcome, what would be the gain? (other than some faster compiling of packages but poorer performing binaries)
edit: FreeBSD was trying to build all of the ports collection with Clang around May 2010. This still predates Debian by over a year
https://wiki.freebsd.org/action/recall/PortsAndClang?action=recall&rev=1
Okay, wrong perspective then. From my angle I saw how tons of projects got pull requests, patches, etc. so they’d work with clang.
Do you have any background on why the Debian clang community even popped up early? I’d have considered them to be philosophically closer to sticking with GCC (other than where it’s necessary).
I also saw that Wikipedia actually has a nice timeline. However, it doesn’t mention where Debian starts, only where it “finishes”: https://en.wikipedia.org/wiki/Clang#Status_history
Do you have any background on why the Debian clang community even popped up early?
Debian is so large that it has a lot of (pardon me) crazy people. As evidence, I submit the existence of Debian GNU/kFreeBSD.
But why does everyone blame npm and “micro-libraries” as the main problem in JS? Don’t all other languages (except C/C++) have the same way of dealing with dependencies? Even in conservative Java, installing hundreds of packages from Maven is the norm.
Something to consider is that JavaScript has an extreme audience. People who barely consider themselves programmers, because they mostly do design, use it, as do people just making tiny modifications. Nearly everyone building a web application, in any kind of language or framework, uses it.
I think the reason there is so much bad stuff in JavaScript is not only rooted in language design. JavaScript isn’t so much worse than other popular bad languages; it just has a larger base with even more horrible programmers, and a lot of them also build some form of framework.
Don’t get me wrong, JavaScript is not a great language by any stretch, but a language that certainly has at least a few bright minds designing and implementing it (working at/with Google, Mozilla and Joyent, for example) should not end up with an ecosystem so much more unstable than everyone else’s.
Of course this doesn’t mean that it’s not about the language at all. It’s just that I have yet to see a language where there isn’t a group writing micro-libraries, building bad infrastructure, following worst practices, finding ways to work around the protections meant to keep you from shooting yourself in the foot, etc. Yes, that exists even in Python, Rust, Go, Haskell and Lisp.
Maybe it’s just that JavaScript has been around for ages. Many learned it to do some animated text, wrote up how they did it, and so there is a ton of bad resources from people who never really learned the language - plus a lot of users/developers who don’t care enough, because after all it’s just front-end: validation happens on the server, and one just wants to send off some form, load something with a button, and update some semi-global state anyway.
JavaScript is used by everyone from people programming services and systems with it (Joyent, et al.) to hobby web designers. I think these different approaches also lead to very different views on what is right and what isn’t. Looking at how it started, the standards committee having to react to JavaScript moving into back-end, application and even systems programming is probably a hard task, and probably a great example of how things get (even) worse when something tries to be the perfect tool for everything, resulting in the worst of all worlds.
On a related note: the community, if you can even call it that (there are more communities around frameworks than around the language itself, which is different from many other scripting languages), doesn’t seem to look at its own history much, so mistakes get repeated, often “fixing” one thing by breaking another, sometimes even across framework layers. For example, some things people learned were bad in plain JavaScript and HTML get repeated and are later found to be bad in some framework too. So one starts over and builds a new framework working around exactly that problem, overlooking others - or intentionally leaving them out, because they weren’t part of the use case.
there are more communities around frameworks than around the language itself, which is different from many other scripting languages
In general I tend to agree, but at least some time ago I am pretty sure the Rails community was larger than the Ruby community. The Django community in Python also seems to be quite vocal, but probably not larger than its language community, given that the Python community is overall far more diversified and less focused on one particular use of the language.
A lot of Java frameworks predate maven - e.g. Spring was distributed as a single enormous jar up until version 3 or so, partly because they didn’t expect everyone to be using maven. I think there’s still a cultural hangover from that today, with Java libraries ending up much bigger than those in newer languages that have had good package management from early on (e.g. Rust).
Even including all transitive libraries, my (quite large) Android app Quasseldroid has 21 real dependencies. That’s for a ~65kLOC project.
In JS land, even my smallest projects have over 300 transitive dependencies.
It’s absolutely not the same.
In technical terms, npm does not differ much from how Python does package management. Culturally, however, there is a big difference in how package development is approached. JavaScript has the famous left-pad package (story). It provided a single function to left-pad a string with spaces or zeroes. Lots of JavaScript libraries are like it, providing a single use case.
Python packages, on the other hand, usually handle a series of cases or a technical area - HTTP requests, cryptography or, in the case of left-pad, string manipulation in general. Python also has PEP 8 and other community standards that mean code is (likely to be) more homogeneous. I am using Python here as that is what I know best.
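To picture just how small a single-use-case package is: the entire value of a left-pad style library is roughly one function. This is only a sketch of the idea, not the actual npm package source:

```javascript
// Minimal sketch of a left-pad style function (illustration only,
// not the real left-pad package, whose implementation differs).
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch || ' '; // default pad character is a space
  while (str.length < len) {
    str = ch + str; // prepend until the target length is reached
  }
  return str;
}

console.log(leftPad('5', 3, '0'));
console.log(leftPad('hi', 5));
```

The cultural point is that thousands of projects pulled in a dependency for something this size, which is what made its unpublishing so disruptive.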
Appreciate the honesty here. My take: GitHub stars aren’t real. Twitter followers aren’t real. Likes aren’t real. It’s all a video game. If you want to assess the quality of the code, you have to read it. You can’t rely on metrics except as a weak indicator. I predict there will be services to let you buy Github stars if the current trend of overvaluing them continues.
The endless self-promotion and programmers-masquerading-as-brands on Twitter and Medium generate a huge amount of noise for an even larger amount of BS. The only winning move is not to engage.
This is more true than one might think. There are a couple of projects on GitHub with thousands of stars (some with more than all the BSDs’ source repositories combined), with the promise of bringing something amazing, while not even having a working proof of concept, and being completely abandoned.
However, since it is true (to some degree) that having a larger user base historically means you won’t end up maintaining a project on your own, it’s easy to be fooled by anything that appears to indicate a large user base, like GitHub stars.
Many people use GitHub more like a “might be interesting, let’s bookmark it” or “Wow, so many buzzwords”, etc.
On the other hand there are quite a few projects that do one thing and do it well, programmed to solve a problem, with 0-10 stars.
One might think those are extreme cases. They are only in the sense that zero stars is the extreme of not being able to have fewer; they are not rare cases.
Another thing to consider is that GitHub is built a lot like a social network, so you have network effects: people follow other people, one person liking something puts it into timelines, and others star it to remember to look at it later or “in case I need this some day”, so you end up with these explosions. Hacker News, Lobsters, reddit, etc., and in general anyone mentioning a project to a bigger audience, can help a lot too, even if it’s just “I have heard about this, but not looked at it yet”. It appears similar to the same story getting zero upvotes on one day, and hundreds or thousands on another.
The rest is probably rooted in human psychology.
Spot on. On top of the detrimental “programmers-masquerading-as-brands”, many GitHub repos are heavily marketed by the companies behind the projects. Covert marketing might be more popular than people think.
Corporate OSS is winning the mindshare war. Plenty of devs would rather use a massive framework by $MEGACORP instead of something simple that doesn’t box them in. Pragmatism, they say.
(Of course, they don’t think twice about pulling in a community-sourced standard library (JS).)
Favorite example of this was a CTO talking about how they used Sinatra instead of Rails for their API endpoint and the flood of surprised replies, “but what if you need to change feature X?”, to which he said, “well, we understand all of the code, so it’s no big deal. Can you say the same about Rails?”
Yawn. The answers are predictable (Linux tries to emulate Windows, Linux driver quality is bad), often incorrect (FreeBSD on mainframes… right) and not very insightful. FreeBSD currently has virtually no desktop market share compared to Windows, macOS, and Linux, because:
Of course, the more interesting question is why Linux became more popular than FreeBSD, despite FreeBSD having a more friendly license for commercial/proprietary use.
Of course, the more interesting question is why Linux became more popular than FreeBSD, despite FreeBSD having a more friendly license for commercial/proprietary use.
I think there’s a better answer to that on Server Fault. UC Berkeley was fighting off a lawsuit from AT&T over BSD, and by the time all of that was resolved Linux had already gotten off the ground and achieved sufficient popularity that the SCO lawsuit couldn’t stop its momentum.
This is often used as one of the explanations. I am sure that it is one of the factors, but the lawsuit was already settled in 1994. I remember buying a FreeBSD 2.1.5 CD set in 1996, long after the lawsuit was settled. In 1996 Linux was still very primitive and a hobbyist thing. Slackware still reigned, SuSE had just moved from Slackware to Jurix as its base, and RPM did not even exist yet. I was surprised at the time how much better FreeBSD was - technically, its ports collection, the documentation, etc. Also, FreeBSD and BSD/OS were still much more popular on ‘serious’ servers at the time.
I think there are other important (internal) factors. E.g., the development model (outside OpenBSD) favored long-running stable branches and only branching from -current every 2-4 years, whereas Linux distributions were always pushing the latest (except uneven kernel versions), allowing Linux to surpass the BSDs in driver support, etc. Also, the Linux distributions at the time already focused on a wider user base; e.g., Caldera and others had graphical installers near the end of the nineties. And since many distributions were commercial, they had more incentive to push Linux boxes into stores and do marketing. E.g., local book stores in the Netherlands would carry Red Hat, SUSE, etc.
the development model (outside OpenBSD) favored long-running stable branches and only branching from -current every 2-4 years, whereas Linux distributions were always pushing the latest (except uneven kernel versions), allowing Linux to surpass the BSDs in driver support, etc.
Both Richard Gabriel’s Worse is Better and entrepreneurs’ habit of highlighting execution over ideas/quality suggest that this strategy by itself could account for a lot of Linux’s momentum. Also, Caldera was the first one I used, since I could buy a CD with a graphical installer at Best Buy for $20.
Citation needed. ;)
While I see where you are going with this, I think those are at least partly myths. I have yet to see a person who can use Linux on the desktop on their own and cannot use FreeBSD or OpenBSD.
While I hear these arguments over and over, I just don’t see them matching real life. When people start using OpenBSD or FreeBSD they usually realize they had expected it to be a lot harder, because of these myths.
Now, I don’t want to say they are easy to install, but if you want to use one of these systems for day-to-day life, they are certainly more friendly than Debian and Arch Linux, for example. About others one might argue, but really, until fairly recently the first half of a Windows install was about as hard as installing either Linux (aside from Gentoo, Arch, etc.). I really do think that the effect of the initial installation is overrated.
What is a bigger problem of course is support for recent hardware. Looking at how far FreeBSD lags behind with Intel graphics (OpenBSD and DragonFly do way better here) or its sometimes desktop-unfriendly defaults (changing a sysctl to make Chrome run correctly) are bigger issues. One can be mitigated by using a recent Apple laptop or an older generation Thinkpad, the other by using a “distribution”.
I think a big reason is that all the commercial interest in something for end users was around Playstations. There is no SuSE, no RedHat, no Ubuntu, all having at least some money and grip to push their systems to the desktop, and be it just to get future sysadmins, selling their products for them.
Right now I think the comparison that would make more sense is Arch Linux vs FreeBSD, simply because these are a lot more similar, than Ubuntu which won by a huge initial investment in tech, branding and marketing, more than anything.
Arch Linux and FreeBSD have a lot more in common - speaking purely about the desktop. They have somewhat technical users in mind, they are not backed by some big organization, they value certain forms of simplicity (not exactly the one OpenBSD is thinking about, but still), they both have huge repositories of easy-to-install and very up-to-date software that can be taken either from packages or source, their users like to tune, configure and optimize, and they enjoy having packages close to upstream.
My best guess here would be stuff like steam and other things that became available as Linux blobs, that were “made possible” due to the investments from various other companies, which started out in the B2B field. Now one might ask why the BSDs don’t have strong companies in that sector, but at least to me it seems that the idea of using BSD outside of networks (routers, servers, etc.) infrastructure and the need of having something GPL-free (gaming consoles, etc.) just never occurred to people until Linux did lift off.
The reasons to use BSD were often a lot more pragmatic, and there weren’t really people with that dream of one day replacing Windows (which Linux has so far achieved on the phone). BSD’s successes are more indirect: one could say Android is Linux plus lots of BSD code, and macOS and iOS are BSD after all.
Even though BSD people often don’t want to hear it, the license might also play a part, especially on the hardware support side: you simply have code flowing in, for mostly legal reasons, that one at least can look at.
I am sure that’s not the only reason, and of course it will be a mixture, but a person running Linux of their own free will likely won’t decide against FreeBSD because it looks text-based, especially not your average Arch Linux, Gentoo, Debian or Slackware user. They might even find it more convenient.
Knowledge is, especially historically, a huge factor. I hadn’t heard about BSD at all before the day I first installed it in 2005, which I think happened because I read that Gentoo’s Portage was inspired by it.
Extremely subjective, but Linux seems to have a lot more missionary stuff going on. I have met more than one person who was about to duck and cover when I mentioned Linux, fearing a speech about the moral and technical reasons why they should switch. This used to be worse, though. I think with the growth of the Linux community people feel a lot less like they have to defend their decision. There are barely any flame wars about Windows vs Linux vs macOS these days.
So I think marketing, and in general network and social effects, as well as hype and a nice story together with quite a bit of ideological undertone, make up a large portion of the history leading to the status quo. I know a few people that tried BSD, liked it, and only switched back for ideological reasons.
While the BSDs are certainly not easy to use compared to macOS or Windows, I don’t think that argument holds true as a big driving factor when comparing to Linux in general and longer-term desktop usage. Even holding on to Ubuntu for an extended period of time (upgrading from one release to another) will require a similar level of interest.
I think a lot of it may also have had to do with GCC being so popular, and the push from the GNU folks towards Linux (at least “until Hurd is ready”). Combine that with Linux often being positioned as Anti-Windows by users (I remember a /lot/ of zealous propaganda back in the day), it certainly started to pick up mindshare quickly on college campuses in the mid 90s.
Any idea how GDPR would relate to P2P software?
Say I wrote some social network that runs as a P2P application. As I want the network to take off, I also host my own nodes. Users, by using the software, will broadcast all kinds of data into the network, which is then stored, cached, and redistributed by the nodes. It will be impossible to give users a “delete all my data” switch; nodes may be hosted by anyone, anywhere, anonymously. I could potentially delete user data from my own nodes on demand, but even that may be difficult to arrange in practice. If an encrypted hash->value store is used, it might actually require me to collect more data in order to identify the rightful “owner” of a given object. But collecting such data about users may be impossible, since much of it depends on what client they use, what nodes they connect through, how they authenticate, etc.
The XMPP community is asking these questions right now, and it’s interesting how this would affect distributed systems like Mastodon.
I think this actually is possible to handle. If a third party gives you a statement about GDPR-compliance then that’s possible.
Also note that part of this actually isn’t a new legal problem in most European countries, even though such topics are coming up again with the GDPR. Informing people about collected data, and in many countries the right to correction and deletion of data, has existed for decades. Also, while not binding in itself, the EU adopted the Data Protection Directive in 1995, which was in large part implemented by various European countries.
Yup. I run OpenBSD on all my computers at home (laptop + desktop) and I use it for all my VPSs (smtpd, httpd). At work I have a mac because reasons, but I much prefer my OpenBSD systems. Why?
For (some of these are true of Linux as well):
pkg_add
Against:
I’m sure there’s more, but those are the things I really appreciate about OpenBSD.
I have a question on packages. Do you use the M:Tier ones? If so, how up-to-date/stable are they? E.g., when something is in ports, is it quickly available there? Do you know if it takes hours, days, weeks or more?
I don’t use them, no. I run -current on my primary machine, so packages are updated as soon as the updates are built and propagated to the mirrors. On my other machine, I’m ok being a bit out of date.
I am planning to run FreeBSD on my laptop, but can’t yet because it doesn’t support my integrated graphics card.
While it’s not generally recommended, depending on how adventurous you feel you might try your luck with FreeBSD 12, i.e. the CURRENT/HEAD branch. It’s what TrueOS uses, and while I am not a fan of it, looking at what revision they currently use might give you a reasonably stable system to play around with. While the latest revision of FreeBSD is usually pretty usable and people do use it, don’t use it for anything critical. You don’t want to find out that your RNG wasn’t actually secure or something.
For playing around and maybe seeing if you actually would enjoy running it in future it might be enough though.
Oh, I’ve used FreeBSD in the past, I know I want to run it. When I tried TrueOS, the installer didn’t install a bootloader (??) and in any case it seems a bit bloated. Just waiting for 11.2 and drm-next-kmod.
For our fine BSD-flavored Lobsters…what’s the deal with GPU support? Is it good, bad, impossible?
I figure that nvidia is probably a no-go on OpenBSD because of binary blobs, but hasn’t AMD finally offered fully open-sourced drivers for their cards? What’s left?
Also, is there any attempt at getting Wayland over to OpenBSD? If there’s one project cranky enough about fixing brain damage to fix the back catalog of X11 apps, I figure it’d be them.
There was an update to radeon drivers committed just in time for OpenBSD 6.3.
The main problem with the graphics stack is that we have too few people working on it, and Linux has too many. Keeping on top of a large code base with rapid upstream code churn and development is not easy if you don’t have enough people who will read all that code to screen it for potential holes, and to integrate and test it.
That’s also why the open source nvidia driver hasn’t been ported. Nobody wants to add the necessary hours required to their voluntary work schedule.
Are you only interested in OpenBSD or do you mean BSDs in general?
If you want NVIDIA binary drivers you can get them with FreeBSD. I know there have been efforts on Wayland as well. I have no clue on the status but there is a port/package you can install:
(The last two paragraphs contain the direct answer and hopefully helpful advice; the rest is from memories of how I got into BSD and very subjective. I think it’s easier to get technical answers to the question, which is why I added them.)
Around 2005 I was trying out various BSDs. That was right after looking for simpler Linux Distributions, and while they existed at that time they had quite some stability issues.
Back then, all the ones I tried (FreeBSD, OpenBSD, NetBSD and the fairly new DragonFly) worked. Just to give some background: I actually used all of them for a couple of months at least, so those weren’t just “install it until you have a desktop” experiences. For me that meant email, web, watching videos (offline), playing small games, etc. Other than needing FreeBSD if I wanted to make use of NVIDIA cards, the experience on all systems was really good.
At that time DragonFly, despite having a much smaller team than the others, was the most convenient, for a number of reasons. For setup: they had the nicest installer. Since a lot of the devs were not using DragonFly in some big company but actually wanted to have stuff working for themselves, the hardware support for consumer hardware was really good (and it still is). There also was a bug that plagued at least FreeBSD and DragonFly; while I don’t remember the details, pulling out a USB device usually panicked/crashed the system.
The responses were very different. Most of the FreeBSD people gave it low priority, because that doesn’t usually happen on a server, also not by accident, etc. In the DragonFly community, someone (sorry, I don’t remember who) sat down and just kept pulling the device out and fixing whatever panicked. It was considered a big project, but that was a pretty cool experience.
While I didn’t have many problems, I remember trying out their features, and one of them was resident (keeping a dynamic binary in memory). I did something like replacing some binary, I think as part of installworld, and accessing it (or some information about it) crashed the system. I went on their IRC channel and mentioned it, and half an hour later it was fixed (I think including some reviews).
There also were other nice things about DragonFly that made me stay there the longest, especially the tiny conveniences. For example, there was a little Makefile in /usr/ that let you easily check out the source of the OS and also pkgsrc (which was the default at that time, after some bad experiences with the previous setup).
Another thing was that, at least to me, it felt like the project with the lowest amount of politics, drama and hype. People seemed to be too busy enhancing the system. I am sure I didn’t see everything, and it is also funny given how it came to be, but for a while it felt to me like DragonFly was keeping what OpenBSD was promising in terms of being hype-free.
Since I played around with pkgsrc I also had NetBSD on the side, trying to make sure things compile and install there as well. I think that always was the case even though pkgsrc people fixed the nonsense I made on pkgsrc-wip.
That said, I think that OpenBSD is a prime example of how to do security right rather than complicated. OpenBSD is not inconvenient to use at all. I think the biggest hurdle right now is that you have to do a minimal amount of configuration to have up-to-date third-party packages, but other than that things work, even as a desktop system on a laptop. That feels unnatural, because optimizing for being simple, small and secure usually sounds like the opposite of a convenient desktop system. However, for the most part it works. E.g. if you would run something like Void Linux, Arch Linux, Debian, etc. as your desktop and feel reasonably secure about it, and don’t need to run Steam etc. for your games, then chances are that you really won’t miss much, while enjoying what you get.
I haven’t used NetBSD in a while outside of a VM, so I don’t feel confident saying much about the current state.
Today - right now - I am using FreeBSD, mostly out of convenience: I use FreeBSD for a couple of projects, and it’s nice to have the same stuff running locally, even if that just means knowing the little details and having some muscle memory. Other than the NVIDIA drivers and a huge ports collection, which are both big points, there is no overwhelming reason to use it, given that right now the Intel drivers might be a reason to go with OpenBSD or DragonFly instead (not sure about the NetBSD state).
And then comparing it to Arch Linux, which also exists on one of my machines: I actually have to double-check sometimes whether I am on it, since they are configured similarly enough to not be obvious. The reason for FreeBSD over Arch Linux right now is on the software side. Not just that the ports collection is huge, but with just one or two lines in a config file I can, for example, get rid of PulseAudio if I don’t like it, or replace OpenSSL with LibreSSL. I have done both, and thanks to the people maintaining ports it works.
As a side note, this is also why I use it on servers. I deal a lot with Postgres and PostGIS, and also with nginx. Being able to choose exact Postgres and PostGIS versions, including the latest, without having to use third-party repos or being limited to certain combinations, is a huge plus. For nginx there are a lot of patches/third-party modules that are just ON/OFF switches, which makes it possible to quickly try out whether a project might benefit from one. And most importantly, things around it don’t break. That seems to be a huge reason people run Docker in real life, but depending on the project that might not be a viable option, and to me personally it feels like huge overkill in some situations.
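For what it’s worth, those “one or two lines” look roughly like this on FreeBSD. This is a hypothetical /etc/make.conf sketch; the knob names follow the ports framework, but the exact values depend on your ports tree and release:

```make
# Hypothetical /etc/make.conf sketch; check ports(7) for your release.

# Pin exact versions for ports that support it:
DEFAULT_VERSIONS+=	pgsql=10 ssl=libressl

# Build everything without a given feature, e.g. PulseAudio:
OPTIONS_UNSET+=	PULSEAUDIO
```

After setting these, ports built from the tree (or via poudriere) pick up the chosen versions and options automatically.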
Having this kind of flexibility is really nice if you wanna stick to a desktop system for “production”, because it means that one isn't forced onto a certain version of some software or a certain dependency. In the Linux world I would have to choose a different distribution in some of these cases. There might be a benefit in that, when you actually find something that matches all your tastes, but then these things might change over time, and in some cases those changes caused me quite a headache. Maybe that is my taste here, but being able to access the latest software while having a very stable system overall made the BSDs very appealing to me on the server and on the desktop.
Other than that: drivers usually lag a bit (or a bit more) behind Linux. I tend not to get the latest laptop when it comes out, and while it will usually run, and there are official NVIDIA drivers for FreeBSD, if you get your hands on the very latest hardware you might sometimes be out of luck. This never really bit me, since I don't enjoy getting a new system all that often, but it's likely a downside.
As a direct answer to your question: yes, it works. However, that doesn't mean it will work for you, so the best thing is to try it. And nobody will hate you if you switch. There are also very few missionaries among the BSDs, so you usually can say “X doesn't work” and might even get responses saying that choosing a Linux distribution, or even macOS or Windows, might be better for you, without any negative connotations.
Another fair warning: while it might feel a bit like it, BSD is NOT Linux. While I think that is clear on a theoretical level, the specifics of it might throw one off at times, especially when one wants it to be Linux. It will likely take some time to get used to. So whether you are excited or turned off after two weeks, it might be worthwhile sticking with things a bit longer, to get a bit more into how everything is really meant to work. That, I think, might actually be harder if you have used Linux for a while, because there is so much that resembles it while being different nevertheless. Hope that helps, if you try it. :)
Great post. While I see the points, I am surprised that the author is using Go and has a Go job. Choosing Go while not liking these decisions strikes me as a very odd thing to do, since we are certainly in no lack of programming languages.
I agree that, based on this, Go should not be as popular as it is, and I am sure that if Go weren't “backed” by Google it would probably be even less popular than D, even though, compared to what D offers, the philosophies (as in styles of thinking and approaching problems) embraced by Go seem to match far more real-life software projects and companies. I know this is a very subjective thing to say, but it sticks out when looking at all sorts of popular programming languages and, in general, at the languages that get developed.
Having used quite a few languages in real applications, learning them rather than merely having to use them (and thereby not trying to replicate the programming styles of other languages), I have seen how languages that start out with the philosophy of being simple end up the very opposite as they become popular. While it is natural that software gets more complex for reasons such as portability, performance, newly found edge cases, etc., a lot of it often stems from a shift where they try to become languages for every use case and every programming style. There is nothing wrong with this, and there are languages whose main feature is exactly that.
However, without a very, very opinionated “leadership” (decision maker, arbiter, etc.), every language will turn into something very similar, differing mostly in syntax and in whether the community agreed on camel or snake case.
I think with Rob Pike and others there is a chance of this not happening with Go, or at least happening a lot more slowly than with other languages. However, given that it's “hot tech by Google” used by a lot of people who usually prefer other languages, this is a big challenge.
Something that is a good development in my opinion, but that Go doesn't really follow, is that people are starting to reuse existing VMs (JVM, Erlang/BEAM, etc.). Maybe this will lead to people choosing the language they want to use based a lot less on the size of its community, as more and more code/knowledge can potentially be reused.
On a related note, I wonder whether HTTP/the web will continue to be one of the most essential use cases for all kinds of developers. I don't think this will change in the short or mid term. Honestly, it wouldn't surprise me if the cornerstones are still there in 50 years, but if it did change, I think it would have a huge influence on programming languages, as many of them (including Go) are, at least in some regards, optimized for use cases around web applications. At least trend-wise, a lot of which programming languages are on the rise, coming close to extinction, or being used for way longer than one would have imagined comes down to how the web developed.
While people might have imagined 20 years ago that C and C++ would still be around, I don't think many predicted that PHP would still be around and used by quite a lot of major websites on such a huge scale, or that JavaScript would be one of the most requested languages, with quite a few job offers paying more than double the salary of a senior C++ developer at Cray Inc.
But going off topic here.
tl;dr: I think that the author wants to have another job using another programming language, better suiting his personal preferences, which of course are valid.
Honestly, I don't see why this post is resonating with people so much (which it clearly is!). Most of the author's technical points are incorrect, or fail to acknowledge the objective superiority of the newer solution. And most of his issues appear to be self-inflicted.
No one is saying you need all these fancy new build tools and package managers for brochureware sites. But they are extremely handy when building actual applications.
objective superiority of the newer solution
Is that so? Even though those were somewhat different topics (object-oriented programming, syntax highlighting, etc.), a lot of things that people used to call objectively superior turn out to be “subjectively superior” at best if one actually bothers to look at them in an objective way.
Other than that, I am inclined to claim that it's really hard to define superiority of software or even of techniques. Few people would argue about the superiority of an algorithm without a use case, yet people do exactly that with technologies and call them better without mentioning the use case at all.
I think a problem of our times is that one loses track of the complexities and anatomies of problems, seeing only a very small part of a problem. Then we try to fix it and on the way move the problem to another place. When that new problem bothers us enough, we repeat the process.
This looks similar to searching for the perfect search or index algorithm for every use case, even disregarding limits such as available memory. It's good that people love to go and build generic abstractions; that's of extreme importance in IT, but it's easy to end up in a state where progress goes in a circle when one disregards limitations and tries to find a “one size fits all”.
In web development this would be a framework for both real-time and non-real-time use, for REST-based microservice architectures but also supporting RPC and real-time streaming, serving as a blog engine and what not, while at the same time being very low level, going down to the HTTP or even TCP level, and making all of that equally easy.
This sounds great, and it’s certainly not impossible. However, it still might not be the right way to go and someone will always find some use case that they don’t see covered well enough. Something that isn’t easy enough for their use case out of the box and something that can be done in a more easy way by simply writing it from scratch, maybe just with some standard library.
I'm not saying that projects like that are bad. However, since they get reinvented over and over, I think that instead of trying to invent tools for everything, it might be worthwhile to strive toward completing or extending the set of tools to pick from.
And I think that is what's starting to happen more and more anyway. I would even say that the frameworks we see today are a symptom of it. The set of tools is growing, and instead of being multitools like they used to be a decade ago (and therefore not working well with others), they nowadays seem more like tool belts, with many already available tools in them.
Or to say it in more technical terms: frameworks nowadays (compared to a decade or so ago) are less like huge libraries forcing you into a corset, and more like software or library distributions with blueprints and maybe manuals on how things can be done.
No one is saying you need all these fancy new build tools and package managers for brochure-ware sites. But they are extremely handy when building actual applications
Except that I see people say that all the time. I see them say it at work. I see them say it on social media. I see them say it at conferences. There's always some reason why that fancy new tool is needed for the static site they are working on. They need it so they can use LESS or SASS for the CSS. They need it so that they can use React to build the HTML statically before they serve it… (Yes, I've really heard someone say that.) They need it because that one JavaScript metrics tracking library is only available from npm, and they can just use that other build tool to ensure it's in the right place.
This post resonates with people because, while they understand that it should be the way you say it is, they can see people saying clearly silly things with a whole lot of unreasonable excitement everywhere they look. It's so prevalent that when they see someone in web dev saying something so eminently reasonable, they can't help but stand up and applaud. It's not a problem with the technologies themselves; it's more a problem with the way the culture looks to the people observing it.
Except that I see people say that all the time.
Have an example? I’ve not seen that. I’ve seen lots of tutorials showing how to do $simpleThing with $complexTool, but that’s just because small examples are necessary. I’ve not seen any claims that $complexTool is required for $simpleThing.
It might be a matter of emphasis. But when the only examples you can find for your responsive static brochure site are the examples you reference above, it sends a perhaps unintended message that this is how you do those things. I can't point to specific examples from in-person conversations for obvious reasons. But in a way you make my point for me. It's the reason why, when you go to many sites that should be just HTML, CSS, and a small amount of JavaScript, you end up downloading MBs of JavaScript. From the outside looking in, it certainly appears that as an industry we've decided that this is how you do things, so why fight it?
i remember mr. poettering saying that bsds aren’t relevant anymore in 2011: https://bsd.slashdot.org/story/11/07/16/0020243/lennart-poettering-bsd-isnt-relevant-anymore
guess they are still here.
“Lennart explains that he thinks BSD support is holding back a lot of Free Software development”
I can think of something else which is holding back a lot of Free Software development.
Poettering's approach to software development seems to make it clear that he doesn't see any value in the continued existence of the BSDs. I think they are an important part of the larger open *Nix world/ecosystem, and that Linux benefits from their existence so long as there remains some degree of compatibility. I will say that I think the BSDs' use of a permissive rather than reciprocal licence has been bad for them in the long run.
I don't think it's about the *Nix world/ecosystem, or that Poettering just doesn't care about the BSDs. His attitude seems to be more that people and distros not wanting to buy in on systemd and/or PulseAudio, or in general on his software, designs, or approaches that aren't compatible with his, are irrelevant. I think the wrong statements he made, which uselessd disproved, and which OpenRC disproved in large part as well, made that clear.
Now people have different opinions about systemd, but from my experience projects ignoring the rest of the world tend to turn out bad on multiple levels. Other than that portability often (not always) is an indicator for code quality as well.
But going a bit off topic. What I want to say is that even though BSDs are mentioned the statement also targets every distribution not relying on systemd. It’s just that most of them aren’t exactly “mainstream”, which is why I think they are ignored and not mentioned.
I'm also very happy with the (relative) ease of use of OpenBSD.
I missed the existence of Void. Is there any real advantage over Debian besides no-systemd?
To each their own poison. But I like Void because:

$ fortune -o void

The tools for package cross-compilation and image building are pretty awesome too.
While there are more packages for the glibc variant than the musl variant, I would not characterise this as “not many packages”. Musl is quite well supported and it’s really only a relatively small number of things which are missing.
Void has good support for ZFS, which I appreciate (unlike say Arch where there’s only unofficial support and where the integration is far from ideal). Void also has an option to use musl libc rather than glibc.
Void has a great build system. It builds packages using user namespaces (or chroot on older kernels), so builds are isolated and can run without higher privileges. The build system is also quite hackable, and I've heard that it's easy to add new packages.
Never tried adding a package, but modifying a package in my local build repository was painless (specifically dwm and st).
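For context, the usual void-packages workflow looks roughly like this (a sketch from memory; see the void-packages README for the authoritative steps):

```
# clone the packages tree and bootstrap an isolated build environment
git clone https://github.com/void-linux/void-packages
cd void-packages
./xbps-src binary-bootstrap

# edit srcpkgs/dwm/template to taste, then build the package unprivileged
./xbps-src pkg dwm

# install the locally built package from the local repository
xbps-install --repository=hostdir/binpkgs dwm
```

The whole build runs inside the namespace/chroot "masterdir", which is what keeps local modifications like this painless.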
Things I find enjoyable about Void:
- The fish shell package uses Python for a few things but does not have an explicit Python dependency.
- The system doesn't even come with a crond (which is fine; the few scripts I have that need one I just run from a script with a sleep).

That said, my go-to is FreeBSD (haven't gotten a chance to try OpenBSD yet, but it's high on my list).
I'd use Void, but I much prefer rc.d; it's why I like FreeBSD. It's so great to use daemon_option= style settings to do stuff like having a firewall for clients only, easily running multiple uwsgi applications, running multiple instances of tor with different configurations (for relays; it doesn't really make sense for a client), using dnscrypt_proxy_resolver to set the resolver, setting general flags, etc.
For so many services, all one needs to do is set a couple of basic options, and it's nice to have that at a central point where it makes sense. It's so much easier to see how configuration relates when it's in one single place. I know it doesn't make sense for all things, but when I have a server running a few services working together, it's perfect. Also, somehow it feels nicer on the desktop, because it can be used a bit like GUI system management tools are used.
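As a sketch of what that central point can look like (variable names depend on each port's rc.d script, and the resolver name below is a placeholder, not a real value):

```
# /etc/rc.conf -- illustrative fragment; option names vary per rc.d script
pf_enable="YES"                       # enable the pf firewall
dnscrypt_proxy_enable="YES"
dnscrypt_proxy_resolver="<name>"      # placeholder; pick one from the resolver list
uwsgi_enable="YES"
uwsgi_profiles="app1 app2"            # run several uwsgi instances from one script
uwsgi_app1_flags="--ini /usr/local/etc/app1.ini"
```

Everything a service needs at boot sits in one file, which is the "single point" appeal described above.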
In Linux land one has Alpine, but I am not sure how well it works on a desktop. Void and Alpine have a lot in common, even though Alpine seems more targeted at servers and is used a lot for containers.
For advantages: If you like runit, LibreSSL and simplicity you might like it more than Debian.
However, I am using FreeBSD these days, because I'd consider it closer to Linux in other areas than OpenBSD is. These days there is nothing that prevents me from switching to OpenBSD or DragonFly though, so it's about which advantages/disadvantages you choose: OpenBSD is simpler, DragonFly is faster and has recent Intel drivers, etc.
For security: on the desktop I think that, other than me doing something stupid, by far the biggest attack vector is a bug in the browser or another desktop client application, and I think neither OS will save me from that on its own. That's not to say it's meaningless, or that mitigations don't work, or that it's the same on servers; it's just that this is my threat model for this system and use case.
Even though I am not a heavy Lua user at all, I have always enjoyed the fact that certain decisions about the syntax at least seem to be the opposite of, or completely different from, what C or another compiled, statically typed language would use, making it more of an extension to C than most other scripting languages are.
At least I think that's what throws people off about Lua when they come from C or a C-style language (which, in some regard, most commonly used scripting languages outside of the functional ones are). I think this adds a lot of flexibility when adding it to any kind of project already using another language, increasing the benefits of adding a language to the stack.
One notices, for example, that it's not another system scripting language trying to replace shell scripts, even though one could do that and such a project would probably be interesting. I think the lack of this, while the language remains easy to integrate with C and is therefore certainly used by sysadmins in contexts like databases (Redis) and web servers (nginx/OpenResty), speaks for a language and community that knows where the language's strengths and goals/priorities lie. Personally I really enjoy projects that don't aim to become a “jack of all trades”, or yet another Emacs so to speak.
That’s from someone who is mostly an outsider. I really wonder whether people using Lua a lot more have a similar view of this being a strong factor in the language design.
Please use media.ccc.de instead of YouTube: https://media.ccc.de/v/34c3-9196-may_contain_dtraces_of_freebsd
It’s a bit sad that there seem to be more and more games that only exist on Steam, but not for example on GOG.
And as everyone knows, it really doesn't work. I've yet to see a game, movie, or piece of music that hasn't been made public in a way that breaks its license.
It really fails on many levels: annoying users, not “protecting IP”, breaking rules the W3C made for themselves, etc. The only people really profiting from it are those making money with DRM, and I am sure they could be doing far more useful work.