This is a great usability improvement. Thank you Peter Hessler :)
That said, it’s still a little bit sad that this is only just being introduced in 2018.
Technically, OpenBSD has had various tools (1, 2, 3 and others) to do this very task for quite a long time. But none of them were considered the correct approach.
Also, this is something that’s pretty unique to OpenBSD IMO. The end result is the same as with other systems, sure. But the approach is unique in the unix world.
Q: What’s the difference?
Glad I asked! This is entirely contained within the base system and requires no tools beyond ifconfig!
Linux has ip, iw, networkmanager, iwconfig..(likely others)… and they are all using some weird combo of wpa_supplicant.. autogen’d text files.. and likely other things.
Have you ever tried to manually configure wireless on linux? It’s a nightmare. Always has been.
NetworkManager does a really good job of making it feel like there isn’t a kludge going on behind the scenes.. It does this by gluing all the various tools together so you don’t have to know about them. IMO this is what happens when you “get it done now” vs “do it right”.
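For contrast, the manual Linux dance being described looks something like this. This is a rough sketch only; the interface name and config path are assumptions, and distro layouts differ:

```shell
# Manual WPA2 setup without NetworkManager -- illustrative sketch.
# 'wlan0' and the config path are assumptions, not universal.
wpa_passphrase 'MyNetwork' 'sekret' > /etc/wpa_supplicant.conf  # generate a network block
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf          # associate in the background
dhclient wlan0                                                  # then ask for a lease
```

Three tools before you even have an address, versus a single ifconfig invocation on OpenBSD.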
With great simplicity comes great security. NetworkManager @ 6c3174f6e0cdb3e0c61ab07eb244c1a6e033ff6e:
github.com/AlDanial/cloc v 1.74  T=28.62 s (48.2 files/s, 45506.1 lines/s)
--------------------------------------------------------------------------------
Language              files      blank    comment       code
--------------------------------------------------------------------------------
PO File                  66     125328     161976     457879
C                       541      71112      66531     321839
C/C++ Header            528      10430      15928      34422
XML                      59       1406       2307       6692
make                      6        885        229       5009
Python                   40       1189       1128       4597
NAnt script              65        626          0       3968
m4                        8        237        123       1958
Lua                      11        212        453       1314
Bourne Shell             21        232        238       1115
XSLT                      5         65          3        929
Perl                      4        166        243        480
Bourne Again Shell       11         30         35        241
C++                       4         62        121        178
YAML                      4         12          6        161
JavaScript                1         33         21        130
Ruby                      3         39         92        110
Lisp                      2         15         24         23
--------------------------------------------------------------------------------
SUM:                   1379     212079     249458     841045
--------------------------------------------------------------------------------
VS
ifconfig@1.368:
github.com/AlDanial/cloc v 1.74  T=0.12 s (32.2 files/s, 58201.7 lines/s)
-------------------------------------------------------------------------------
Language              files      blank    comment       code
-------------------------------------------------------------------------------
C                         2       1009        345       5784
C/C++ Header              1          7         16         58
make                      1          3          1          6
-------------------------------------------------------------------------------
SUM:                      4       1019        362       5848
-------------------------------------------------------------------------------
Anyway - I guess my point is this:
Have you ever tried to manually configure wireless on linux? It’s a nightmare. Always has been.
No. The Linuxes I use come with an out-of-the-box experience that makes wireless as easy as clicking an icon, picking a name, and typing in the password; it works, and it reconnects when nearby. They have been like that since I bought an Ubuntu-specific Dell a long time ago. They knew it was a critical feature that needed to work with no effort, with some of it working during installation so parts of the install could be downloaded over WiFi. Then they did whatever they had to do within their constraints (time/talent/available code) to get it done.
And then I was able to use it, with the only breakages being wireless driver issues that had answers on Q&A sites. Although that was annoying, I didn’t have to think about something critical I shouldn’t have to think about. Great product development in action for an audience that has other things to do than screw around with half-built wireless services. That’s a compliment about what I used rather than a jab at OpenBSD’s, which I didn’t use. I’m merely saying quite a few of us appreciate stuff that saves us time once or many times. If something is common and critical, adoption can go up if it’s a solved problem with minimal intervention out of the box.
That said, props to your project member who solved the problem with a minimally-complex solution in terms of code and dependencies. I’m sure that was hard work. I also appreciate you illustrating that for us with your comparisons. The difference is almost comical in the work people put in with very different talents, goals and constraints. And m4 isn’t gone yet. (sighs)
No. The Linux’s I use come with an out-of-the-box experience that makes wireless as easy as clicking a box, clicking a name, typing in the password, it works, and it reconnects when nearby.
And then something goes wrong in the fragile mess of misfeatures, and someone has to dig in and debug, or a new feature comes along and someone has to understand the stack of hacks to understand it, before it can be added. There’s something to be said for a system that can be understood.
There is something to be said for a system that can be understood. I totally agree. I also think there’s something to be said for a reliable, more-secure system that can be effortlessly used by hundreds of millions of people. A slice of them will probably do things that were worth the effort. The utilitarian in me says make it easy for them to get connected. The pragmatist also says a highly-usable, effortless experience leads to more benefits in terms of contributions, donations, and/or business models. These seemingly-contradicting philosophies overlap in this case. I think the end justifies the means here. One can always refactor the cruddy code later if it’s just one component in the system with a decent API.
One can always refactor the cruddy code later if it’s just one component in the system with a decent API.
The problem isn’t the code, it’s the system that it’s participating in.
One can always refactor the cruddy code later if it’s just one component in the system with a decent API.
This just leads to systemd, and more misfeatures…
There are Linux distros without systemd. Even those that have it now didn’t before they got massive adoption/impact/money. So, it doesn’t naturally lead to it. Just bad decision-making in the groups controlling popular OS’s, from what I can tell. Then, there’s also all the good stuff that comes with their philosophy that strict OS’s like OpenBSD haven’t achieved. The Linux server market, cloud, desktops, embedded, and Android are worth the drawbacks if you assess by the benefits gained by many parties.
Personally, I’m fine with multiple types of OS being around. I like and promote both. As usual, I’m just gonna call out anyone saying nobody can critique an option, or someone else saying it’s inherently better than all alternatives. Those positions are BS. These things are highly contextual.
This is really great. I wish all other projects could do that, preferring elegance to throwing code at the wall, but sometimes life really takes its toll and we cave and just build a Frankenstein to get shit done.
I really appreciate all the work by OpenBSD folks. Do you have any idea how the other *BSDs deal with wireless?
What’s really sad is that the security of other operating systems can’t keep up despite their having more manpower.
It’s almost like if you prioritize the stuff that truly matters, and are willing to accept a little bit of UX inconvenience, you might happen upon a formula that produces reliable software? Who would have thought?
That’s what I told OpenBSD people. They kept on with a poorly-marketed monolith in an unsafe language, without the methods from CompSci that were knocking out whole classes of errors. They kept having preventable bugs and adoption blockers. Apparently, the other OS developers have similarly hard-to-change habits and preferences, with less focus on predictable, well-documented, robust behavior.
I think this is just a matter of what you think matters. There’s no sadness here. The ability to trade off security for features and vice versa is good. It lets us accept the level of risk we like.
On the other hand, it’s really sad, for instance, that OpenBSD has had so many public security flaws compared to my kernel ;P
On the other hand, it’s really sad, for instance, that OpenBSD has had so many public security flaws compared to my kernel ;P
What’s your kernel?
It’s a joke. Mine is a null kernel. It has zero code, so no features, so no security flaws. Just like OpenBSD has fewer features and fewer known security flaws than Linux, mine has fewer features but no security flaws.
Unlike OpenBSD, mine is actually immune to Meltdown and Spectre.
Not having public flaws doesn’t mean you don’t have flaws. Could mean not enough people are even considering checking for flaws. ;)
That said, it’s still a little bit sad that this is only just being introduced in 2018.
Would you like to clarify what you mean by this comment? Cause right now my interpretation of it is that you feel entitled to have complicated features supported in operating systems developed by (largely unpaid) volunteers.
I’m getting a bit tired of every complaint and remark being reduced to entitlement. Yes, I know that there is a lot of unjustified entitlement in the world, and it is rampant in the open source world, but I don’t feel entitled to anything in free or open source software space. As someone trying to write software in my spare time, I understand how hard it is to find spare time for any non-trivial task when it’s not your job.
Though I am not a heavy user, I think OpenBSD is an impressive piece of software, with a lot of thought and effort put into the design and robustness of the implementation.
I just think it’s somewhat disheartening that something this common (switching wireless networks) was not possible without manual action (rewriting a configuration file, or swapping configuration files, and restarting the network interface) every time you needed to switch or moved from home to the office.
Whether you feel like this is me lamenting the fact that there are so few contributors to important open source projects, me lamenting the fact that it is so hard to make time to work on said project, or me being an entitled prick asking for features on software I don’t pay for (in money or in time/effort) is entirely your business.
Just for the record I didn’t think you sounded entitled. The rest of the comment thread got weirdly sanctimonious for some reason.
Volunteers can work on whatever they want, and anybody’s free to comment on their work. Other operating systems have had the ability to switch wifi networks now for a long time, so it’s fair to call that out. And then Peter went and did something about it which is great.
Previously I’ve been using http://ports.su/net/wireless for wifi switching on my obsd laptop, but will use the new built-in feature when I upgrade the machine.
Some of the delay for the feature may be because the OS, while very capable, doesn’t seem designed to preemptively do things on the user’s behalf. Rather the idea seems to be that the user knows what’s best and will ask the OS to do things. For instance when I dock or undock my machine from an external monitor it won’t automatically switch to using the display. I have a set of dock/undock scripts for that. I appreciate the simple “manual transmission” design of the whole thing. The new wifi feature seems to be in a similar spirit, where you rank each network’s desirability and the OS tries in that order.
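If I understand the announcement correctly, the ranking lives in the interface’s hostname file as a list of join lines tried in order. A sketch, with made-up network names and keys:

```shell
# /etc/hostname.iwm0 -- sketch of the new auto-join configuration
# (the network names and keys here are invented for illustration)
join homenet wpakey 'sekret'
join worknet wpakey 'alsosekret'
join cafe-open
dhcp
```

The same join keyword should also work interactively via ifconfig, so the config file and the command line stay finger-compatible.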
Interesting, I didn’t know about that either. I used my own bash script to juggle config files and restart the interface, but the new support in ifconfig itself is much easier.
I think the desire for OpenBSD to not do things without explicit user intent is certainly part of why this wasn’t added before, as well as its limited use as a laptop OS until relatively recently.
Thanks for taking the time to respond.
To be clear, I don’t believe you’re some sort of entitled prick – I don’t even know you. But, I do care that people aren’t berating developers with: “That’s great, but ____” comments. Let’s support each other, instead of feigning gratitude. It wasn’t clear if that’s what you were doing, hence, my request for clarification.
That being said, my comment was poorly worded, and implied a belief that you were on the wrong side of that. That was unfair, and I apologize.
I just think it’s somewhat disheartening that something this common (switching wireless networks) was not possible without manual action (rewriting a configuration file, or swapping configuration files, and restarting the network interface) every time you needed to switch or moved from home to the office.
Well, I’m just not going to touch this…. :eyeroll:
I apologize if my response was a little bit snide. I’ve been reading a lot of online commentary that chunks pretty much everything into whatever people perceive as wrong with society (most commonly: racism, sexism, or millennial entitlement - I know these are real and important issues, but not everything needs to be about them). I read your remark in that context and may have been a little harsh.
Regarding the last segment - how WiFi switching worked before - there may have been better ways to do this, but I’m not sure they were part of the default install. When I needed this functionality on OpenBSD, I basically wrote a bash script to do these steps for me on demand, and that worked alright for me. It may not have been the best way, so my view of the OpenBSD WiFi laptop landscape prior to the work of Peter may not be entirely appropriate or accurate.
I just think it’s somewhat disheartening that something this common (switching wireless networks) was not possible without manual action (rewriting a configuration file, or swapping configuration files, and restarting the network interface) every time you needed to switch or moved from home to the office.
I’m more blunt here: leaving that to be true in a world with ubiquitous WiFi was a bad idea if they wanted more adoption and donations from the market segment that wanted good, out-of-the-box support for WiFi. If they didn’t want that, then it might have been a good choice to ignore it for so long to focus on other things. It all depends on what their goals were. Since we don’t know them, I’ll at least say that it was bad, neutral, or good depending on certain conditions, like with anything else. The core userbase was probably OK with whatever they had, though.
First, both free speech and hacker culture say that person can gripe about what they want. They’re sharing ideas online that someone might agree with or act on. We have a diverse audience, too.
Second, the project itself has developers that write cocky stuff about their system, mock the other systems, talk that one time about how they expect more people to be paying them with donations, more recently talk about doing things like a hypervisor for adoption, and so on. Any group doing any of that deserves no exception to criticism or mockery by users or potential users. It’s why I slammed them hard in critiques, only toning it down for the nice ones I met. People liking critiques of other projects or wanting adoption/donations should definitely see others’ critiques of their projects, esp. if it’s about adoption/donation blockers. I mean, Macs had a seamless experience called Rendezvous or something in 2002. If I’m reading the thread right, that was 16 years before OpenBSD had something similar that they wanted to make official. That OpenBSD members are always bragging when they’re ahead of other OS’s on something is why I’m mentioning it. Equal treatment isn’t always nice.
“But, I do care that people aren’t berating developers with: “That’s great, but ____” comments. Let’s support each other, instead of feigning gratitude. It wasn’t clear if that’s what you were doing, hence, my request for clarification.”
I did want to point out that we’ve had a lot of OpenBSD-related submissions and comments with snarky remarks about what other developers or projects were doing. I at least don’t recall you trying to shut them down with counterpoints assessing their civility or positivity toward other projects (say NetBSD or Linux). Seems a little inconsistent. My memory is broken, though. So, are you going to be countering every negative remark OpenBSD developers or supporters make about projects with different goals, telling them to be positive and supportive only? A general rule of yours? Or are you giving them a pass for some reason but applying the rule to critics of OpenBSD choices?
I at least don’t recall you trying to shut them down with counterpoints assessing their civility or positivity toward other projects (say NetBSD or Linux). Seems a little inconsistent.
I’m not the Internet Comment Police, but you seem to think you are for some reason… Consider this particular instance “me griping about what I want.”
Or are you giving them a pass for some reason but applying the rule to critics of OpenBSD choices?
This wasn’t about OpenBSD at all. This started out as a request for clarification on the intent of an ambiguous comment that seemed entitled. There seems to be a lot of that happening today, and a lot of people defending it for whatever reason, which is even worse.
I’m not the Internet Comment Police
Your comments came off that way to me, between the original and the follow-ups. As far as it not being about OpenBSD: it’s in a thread on OpenBSD, with someone griping that it lacked something they wanted. The OpenBSD members griping about third-party projects not having something they wanted to see more of typically got no comment from you. The inconsistency remains. I’m writing it off as you’re just a fan of their style of thinking on code, quality, or something.
That’s certainly one possibility, but not how I took it initially, and why I asked for clarification. I’ve seen too many people over the years attempt to disguise their entitlement by saying “thanks.”
I’d have liked to see this comment worded as:
This is a great usability improvement. Thank you Peter Hessler :) It’s a shame that there isn’t a better way to bring these important usability features to OpenBSD faster. What is the best way to help make that happen? Donations to the OpenBSD Foundation? Sponsor the work directly? Something else?
Now, it’s also possible that the OP has ties to OpenBSD, and the comment was self-deprecating. But, one can’t infer that from the information we see without investigating who the OP is, and their affiliations…
I’m not sure you understand what infer means. One certainly can infer meaning from a comment, based on previous actions, comments, etc..
My point remains: It’d be nice if the OP would clarify what they mean. My interpretation of the OP’s comment is just as likely as your interpretation. My interpretation is damaging to the morale of existing volunteer contributors to FOSS, and gives potential contributors to FOSS reasons to not contribute all together. I don’t know about you, but I want to encourage people to contribute to FOSS, as doing so moves us closer to a free and open society. And, that alone, is the reason I’m even bothering to continue responding to this thread…
“It’s pretty sad that it took someone else so long to prioritize work I think is necessary.”
I think it’s pretty easy to take what was written and read it this way. But maybe my glass is half empty today.
One can infer based on a comment, but the inference will most likely be dimwitted bullshit.
Without the magic trifecta of body language, vocal intonation, and facial expression us monkeys are just shit at picking up on any extra meaning. So take the comment at face value.
It expresses gratitude, it focuses on a specific recipient, and it lauds the feature. After, it regrets that it couldn’t/didn’t happen earlier.
There’s no hidden meaning here, and if the commenter intended a hidden meaning he’s a dufus too, because there’s no unicode character for those. U+F089A6CDCE ZERO WIDTH SARCASTIC FUCK YOU MARK notwithstanding.
At some point we all need to stop insisting that we have near-telepathic powers, especially outside of meatspace.
So, what you’re saying is that I can write anything I want, and since you can’t see or hear other clues, there’s no way you can downvote (in good faith) this comment as trolling?
Not sure text works that way…
Yep, video here https://www.youtube.com/watch?v=_E873DaCLN4
Full video: https://www.youtube.com/watch?v=UaQpvXSa4X8
Yes, Theo gave an impromptu talk where he expressed frustration at rumors of OpenBSD being untrustworthy and then speculated on possible future Intel problems. Screaming happened. But now it seems he was right.
Though the bigger issue of embargoes and their value remains.
I wish people would stop saying he gave a talk / presentation because that’s not what it was. This was a BOF session. It is a group discussion about a predefined topic and Theo was the BOF organizer. This is why he was talking to the crowd and asking questions. It wasn’t to attack anyone or inflame the situation; it was entirely within the spirit of the BOF.
So (1) nobody cared to improve the old tools, writing new ones is more fun, and (2) updating the old tools to reflect current reality would break old scripts, so the rational choice is to both let the old tool rot (thus quite possibly breaking anything that relies on it) as well as introduce a new tool that definitely isn’t compatible with the old scripts. Why do I feel like this line of arguing has a problem or two?
Pray tell, what happens when the interface provided by the iproute2 utilities no longer reflects reality. Let them rot and write yet another set of new tools? Break them? Introduce subtle lies?
Oh, by the way: if you’re configuring IPv6 on Linux, don’t use the old tools. They’re subtly broken and can waste a lot of your time. I’ve been there. Don’t mention it.
Meanwhile, I’m glad that OpenBSD’s ifconfig can show more than one IP per interface. And I can use the same tool to set up my wifi too. It’s a tool meant to work.
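For anyone who hasn’t seen the multi-address case, a sketch; the interface name and the (RFC 5737 documentation) addresses are illustrative:

```shell
# Attach a second address to the same interface with 'alias';
# em0 and the addresses are just examples.
ifconfig em0 inet 192.0.2.1 netmask 255.255.255.0
ifconfig em0 inet alias 192.0.2.2 netmask 255.255.255.255
ifconfig em0   # the output lists both inet addresses
```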
The BSDs maintain the kernel and the base system in lockstep. This is not the case for Linux distributions. Over the years, Linux developers started to do the same. That’s why we now have iproute2, ethtool, iw and perf, which are userland tools evolving at the same speed as the kernel (and sharing its version number).
nobody cared to improve the old tools
The people who want to use the old tools want them to keep working the same way they always have. They already work that way, so the people who want to use the old tools have no motivation to make changes.
updating the old tools to reflect current reality would break old scripts
It would also piss off the people who want to keep using the old tools, since by definition they would no longer keep working the same way.
The names ifconfig and netstat are now claimed and cannot be re-used for a different purpose, in much the same way that filenames like COM1 and AUX are claimed on Windows and cannot be re-used.
Meanwhile, I’m glad that OpenBSD’s ifconfig can show more than one IP per interface.
My understanding is that OpenBSD reserves the right to change anything at any time, from user-interface details down to the C ABI. “The people who want to use the old tools” are discouraged from using OpenBSD to begin with, so it’s not surprising that OpenBSD doesn’t have to wrestle with these kinds of problems.
the right to change anything at any time
While this is true, I think you are taking it a little too literally. You won’t, for example, upgrade to the latest snapshot and find that ls has been replaced by some new tool with a completely different name, or that CVS has been replaced by Git. And while POSIX doesn’t require (best I can tell) a tool named ifconfig, it’s very unlikely you would find it replaced by something else.
Right. And by following the discussions on tech@, I’ve gotten the impression that Theo (as well as many other developers) deeply cares about avoiding unnecessary change to the user-facing parts as tools get replaced or extended. Case in point, daemon configuration. The system got extended with the rcctl tool, but the old way of configuring things via rc.local and rc.conf.local still works as it always did. Nothing like the init system swaps on Linux. Still, extending or changing the behavior of a tool, even at the risk of breaking some old script, seems to be preferred to making new tools that require everyone to adapt.
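To make the coexistence concrete (httpd and the -v flag are just examples):

```shell
# The newer front-end...
rcctl enable httpd
rcctl set httpd flags -v
# ...and the old way, editing rc.conf.local by hand, which still works
# (rcctl itself just maintains this same file):
echo 'httpd_flags=-v' >> /etc/rc.conf.local
```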
After a decade of using Linux as well as OpenBSD, I’d say that OpenBSD is way more committed to keeping the user facing parts finger compatible while breaking ABI more freely (“we live in a source code world”). In the Linux world I’ve come to expect ABI compatibility but user compatibility gets rekt all the time.
Pointing fingers at capitalism or consumers doesn’t get us anywhere. We are all impacted by this. Maybe not today maybe not tomorrow, but certainly in the near future society on the whole will pay the price.
I don’t understand, isn’t this just modifying your own hardware? Why is this treated like some great tragedy?
Because this lets you exploit anything using NVIDIA’s Tegra and that includes things like Tesla vehicles.
It’s super cool that you can mod your switch/tesla now… but also super not cool that you can’t prevent someone else from modding it for you.
Wait, so it’s a remote sploit? Or you mean if you give your Tesla to your mechanic they can mod it? Or something else?
Presumably because it makes for better headlines? Local code execution on a game console seems interesting only if you figure out something better to do with your game console than playing games.
I switched from Linux to OpenBSD last October on all of my personal machines, and I don’t anticipate going back. I have a few reasons, though some might seem petty.
For me it comes down to things like this:
ifconfig iwm0 nwid PrettyWiFiForAWhiteGuy wpa wpakey 'sekret': on linux it would be:
Minimality.
us.swapctrlcaps: Set once, on install. Gives me system wide keyboard configuration. In one place.
Painless upgrades: pkg_add does what you expect. The enter button is super easy to hit.

If you like BSD more than Linux. OpenBSD in particular has a very different ethos than Linux, which many people here find attractive. We have many OpenBSD developers here on Lobsters, so we enjoy greater access to their opinions and philosophies. You should find plenty of top notch content here about BSD if you search for it.
People prefer OpenBSD for different reasons. Security-oriented implementation, secure defaults, excellent documentation, minimalism, emphasis on networking tooling, coherent base system, developer friendliness, and so on and so forth. These apply to desktops as well as servers.
Personally, Linux on the desktop drives me up the wall, since they keep moving fast and breaking things. And the different distros make different decisions about silly little things that keep tripping me up. For example, I write C++ professionally, which means I generate and analyze core dumps. My core dumps were being diverted to some bug-reporting tool, which was silently crashing on my multi-gig core files.
Some smart guy decided automatic bug reporting tools were more important than developer access to core dumps. That decision wasted way more of my time than I care to admit. OpenBSD would never have wasted my time in that way.
I prefer MacOS for desktops, since Apple actually cares about building a coherent user experience. They’ve had some quality issues recently but nothing worse than what I’ve experienced using Linux. And they don’t lose my core dumps. If not MacOS, OpenBSD would be my next choice.
I’m not OP but I run BSD on my desktop (though my desktop has gradually become more of a de facto server these days now that I have a powerful laptop). ZFS is a more reliable and less fiddly way to have both disk redundancy and snapshots than any of the ways of achieving those on linux. Updates are more reliable on BSD - when I ran linux it felt like every year there would be an update that changed how X was configured and I’d have to google how to edit some random XML file to get it to use the correct keyboard layout on the login screen (I touch-type on dvorak and use long passwords that I remember mostly by touch, so when my keyboard layout gets forcibly changed to qwerty I find it pretty hard to even log in), or changed how sound worked, or changed how the init scripts worked, or so on. I don’t particularly use any BSD features unless you count ZFS (e.g. I don’t use jails at all for desktop work), but it works and stays out of my way, which is really all you want from an OS.
Not trying to tell you off or prove you wrong, but Xorg has been configuration-free for most systems for the past 5 years or so. The only exception I can think of is Nvidia-optimus setups where you want to switch between Nvidia and Intel graphics.
Not that the Linux ecosystem has not been seeing major changes in its components, you know, with the whole init-system wars and Wayland becoming a thing. It’s just that as a user, I’ve not been bitten by broken updates in quite a while. I’m not going to try and defend any of the ZFS alternatives on Linux, as I’m not pleased with any of them myself.
My major problem with switching to OpenBSD (or another BSD) is the lack of modern hardware support, especially graphics, and the fact that it’s often harder to find documentation or installation instructions for some new pieces of software.
(or another BSD) is the lack of modern hardware support, especially graphics
FreeBSD 12-CURRENT has great support for AMD Polaris and earlier (and Intel of course), with Wayland, Vulkan, OpenCL, whatever you want :)
Granted, not everything works out of the box yet (especially Wayland: you still have to rebuild the kernel with evdev support if you want any input devices to work, but that’s going to be resolved), but the process of rebuilding stuff on FreeBSD is super easy.
That’s interesting, I didn’t know AMD GPU support has progressed so far in FreeBSD. I should probably try it out again, since I’m running with mostly AMD hardware these days (because of their excellent open-source drivers on Linux.)
Not trying to tell you off or prove you wrong, but Xorg has been configuration-free for most systems for the past 5 years or so.
Yeah, that was the problem. I had an xorg conf that worked and set my keyboard to the right layout, then one day “X went configuration-free” and I had to find some blog post about some random HAL XML file that I had to edit instead. And then a year or two after that HAL got removed and I had to set it in some different place instead.
My major problem with switching to OpenBSD (or another BSD) is the lack of modern hardware support, especially graphics
I’ve always stuck to NVidia cards and used the NVidia official/proprietary drivers (which I think only exist for FreeBSD), so it’s the exact same driver experience as on Linux.
and the fact that it’s often harder to find documentation or installation instructions for some new pieces of software.
It’s really very similar to Linux, unless you’re using software that has a kernel module or something - I’m struggling to think what you’d need specific instructions for, because usually what you do on BSD is exactly the same as what you do on Linux. Anything that uses something standard like autotools or CMake will Just Work, in my experience. Occasionally someone has hardcoded /bin/bash or something (but that will break on Ubuntu too these days), but there’s a small number of breakage patterns that you learn. Admittedly, when it comes to the very new stuff that’s hardcoded against systemd or docker, you are just screwed.
To add to that, my last Linux upgrade knocked out WiFi on one of my devices. I’m thinking (once again): “how does an OS upgrade take out something as critical as WiFi?” Only on Linux…
Mostly because once configured it just works.
Not really OpenBSD related, but also: root on ZFS on FreeBSD with bulletproof upgrades using ZFS Boot Environments.
No systemd.
True channel mixing in the kernel using OSS4, instead of the ALSA+OSS+PulseAudio stack.
Sound does not hang in a way that requires a reboot (I have that on Ubuntu).
The entire machine does not freeze without apparent cause (I had that with Linux Mint).
Also because tools that have been available on UNIX for decades (ifconfig/netstat/…) are not deprecated without good reason.
The Ports tree provides a really easy way to recompile a single port, several, or all ports/packages with the options you need; there is no Linux equivalent.
… to just name a few.
The entire machine does not freeze without apparent cause (I had that with Linux Mint).
Any idea what kernel version that was? There was an erratum on Skylake silicon that could trigger hard lock-ups in the kernel on some versions. There was a workaround for it in 4.3 and newer if I recall correctly. I understand that you don’t want it to happen at all but if this is the specific bug, it was a hardware bug on a common hardware platform that only triggered under specific loads.
Sound does not hang in a way that requires a reboot (I have that on Ubuntu)
hm, I do have that on FreeBSD. Not often, but does happen. Maybe it’s a hardware issue? Realtek kinda sucks…
https://www.romanzolotarev.com/openbsd/why.html
Everything I need is in the base: POSIX shell, X11, vi, tmux, httpd, smtpd. There is only what I need, almost nothing else.
I stumbled upon this gem the other day: https://github.com/nodejs/node/blob/master/deps/npm/node_modules/osenv/node_modules/os-tmpdir/index.js
It’s a module included in npm, that duplicates part of the ‘os’ api that has been in the standard node library since 0.9.9.
I’ll see your ‘duplicating the standard lib’ and raise you a ‘defining the alphabet’[1] and a ‘defining each ANSI colour individually’[2] :
The first one is ridiculous. JS apparently has no range() method, perhaps that would have been worthwhile writing as a module. But no. Three arrays of characters.
The second one is just ridiculous modularisation. A single ansi-colours (or -colors if you like your modules “de-queened”) module could export functions and constants (e.g.: ansi(message, ansi.YELLOW) or similar).
This is the same guy who wrote the ‘isEven’/‘isOdd’ I mentioned earlier. He has ~1400 packages on NPM, and based on the ones I’ve looked at in the last few days, just his own packages have the typical nodejs dependency tree that mimics crab grass.
1: https://github.com/jonschlinkert/alphabet/blob/master/index.js
2: https://github.com/jonschlinkert?utf8=✓&tab=repositories&q=ansi&type=&language=
I found this while trying to find more info: http://cs242.stanford.edu/assets/projects/2017/songyang.pdf
“of unsafe Rust”
Using Rust in unsafe mode (with its protections disabled) exposes code to the same kinds of attacks as inherently unsafe languages like C. A well-known, avoidable problem.
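To make that concrete (a minimal sketch, not code from SQLite or any real project): safe Rust bounds-checks every slice access, while an `unsafe` block lets you skip the check entirely, reintroducing C-style over-read potential if the index is ever wrong:

```rust
fn main() {
    let data = [10, 20, 30];

    // Safe access: an out-of-range index yields None instead of
    // reading arbitrary memory.
    assert_eq!(data.get(5), None);

    // Inside `unsafe`, `get_unchecked` skips the bounds check.
    // With an out-of-range index this would be undefined behaviour,
    // exactly like a buffer over-read in C. Here the index is valid.
    let second = unsafe { *data.get_unchecked(1) };
    assert_eq!(second, 20);
}
```

The point is that `unsafe` is opt-in and locally auditable, but within such a block you inherit C’s failure modes.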
Was neat seeing it on a site other than mine. Welcome! Thanks for the high-quality version of the image. I have updated the image. :)
All good reasons, IMO. But it fails to mention any of the well-known problems with C, which would have prevented many vulnerabilities in SQLite. So it reads like they’re just trying to justify their choice, rather than an honest assessment of C. I don’t know what the intention or purpose of this page is, though. And to be fair, I would probably have made the same choice in 2000.
I don’t know what the intention or purpose of this page is
Probably to stop people asking why it’s not written in Rust.
Yeah, looking at the parent page, it appears it showed up sometime in 2017. I was misled by the mention of Java as an alternative, because I think it’s rather obviously unsuited for this job.
I tried finding a list of vulnerabilities in SQLite and only this page gave current info. Now, I’m unfamiliar with CVE stats, so I don’t know if 15 CVEs in 8 years is more than average for a project with the codebase and use of SQLite.
[…] I don’t know if 15 CVEs in 8 years is more than average for a project with the codebase and use of SQLite.
I don’t know either! I looked at the same page before writing my comment, and found plenty of things that don’t happen in memory-safe languages. There were fewer entries than I expected, but also some of them have descriptions like “Multiple buffer overflows […],” so the number of severe bugs seems to be higher than the number of CVEs.
The security community generally considers CVE counts a bad mechanism to argue about the security of a project, for the following reasons:
Security research (and thus vulnerability discovery) is driven by incentives like popularity, impact and monetary gain. This makes some software more attractive to attack, which increases the number of bugs discovered, regardless of the security properties of the codebase. It’s also hard to find another project to compare with.
(But if I were to join this game, I’d say 15 in 8 years is not a lot ;))
15 vulnerabilities of various levels in the past 10 years.
https://www.cvedetails.com/vendor/9237/Sqlite.html
How does that compare to other products or even similar complicated libraries?
Great! It does need the newest versions of ksql+kcgi to compile the resulting code… I write them all in tandem. (I recently had a bug report from somebody trying kwebapp with ksql from ports.)
If you’re including script from another origin, you must absolutely trust them, and their security.
Perhaps the author trusts the 3rd parties?
That’s fine, but unrelated. @peter is saying that it’s not ironic that the author is using third party assets, since the author is not saying “don’t use third party assets” but rather “be careful when using third party assets”.
Regarding your statement though, why should the user be the one deciding who to trust? How would that even work? Would you get a dialog every time you navigate to a web page showing the third party scripts the site uses and letting you opt out of them? Are you going to expect every web user to audit each line of code? How would this work for images?
Maybe we should just have browsers show you the source code and then you can decide if you want to render it or not?
I think browsers and web servers currently do a pretty good job of managing this without putting the user through undue frustration.
why should the user be the one deciding who to trust? How would that even work?
The code is running on their CPU. The decision should always be up to them (and it is, but most users don’t know that going to site A will also include things from external sites B, C, D and E - all of which can now track your activities / location and likely other things that will make ad companies’ mouths water).
Would you get a dialog every time you navigate to a web page showing the third party scripts the site uses and letting you opt out of them?
This is essentially what I do with uMatrix. Workflow is basically:
Obviously I don’t expect everyone to do this.
Why do you trust first party assets? How do you know they’re first party? The developers may be hosting third party assets themselves.
How do you know they’re first party?
They come from the domain (or subdomain) I am connecting to. And yes, likely they are 3rd party assets.
People keep thinking it’s the existence of 3rd party assets I am worried about.. It’s not the fact that they are 3rd party that worries me. It’s the meta information sent to the 3rd parties hosting the resources, and that the added infrastructure / complexity opens the door for malicious 3rd party resources.
Why do you trust first party assets?
If all the resources come from the domain I am connecting to, that is 1 system that could be compromised. If the resources come from 100 different locations, that’s 100 systems that can be compromised.
What do you mean? From a user’s perspective what’s the difference between trusting what third party Jake Archibald hosts on jakearchibald.com and what third party content he trusts enough to load off a different origin?
The difference is the number of parties with access to data that relates to the end users. I don’t think that CDNs offer their services for free because they are altruistic!
First of all, CDNs don’t offer their services for free. Some of them offer introductory plans to lower the barrier to customer acquisition. This is like Blue Apron offering your first meal for free to get you to try them out without risk. They’re betting that the product they offer is compelling.
The number of parties with access to data has very little to do with where the user’s initial HTTP connection terminates. If I run a server there’s little difference for me linking to a third party resource vs proxying a request to it.
First of all, CDNs don’t offer their services for free.
Thanks for the correction. Google’s is entirely free. That aside, the point still stands, end-user data is being gathered regardless of the CDN tier the host (person running the site being visited) picked.
If I run a server there’s little difference for me linking to a third party resource vs proxying a request to it.
The difference is the source of the request, from one it’s your web server, the other it’s my IP address. This likely makes 0 difference to the person running a blog.. but as a reader of your blog, I will automatically assume that you have 0 interest in the privacy of your readers.
I wasn’t implying. I was stating a fact.
And he’s wrong about that.
https://github.com/uutils/coreutils is a rewrite of a large chunk of coreutils in Rust. POSIX-compatible.
So on OpenBSD amd64 (the only arch rust runs on… there are at least 9 others, 8, 7 or 6 of which rust doesn’t even support!)… this fails to build:
error: aborting due to 19 previous errors
error: Could not compile `nix`.
warning: build failed, waiting for other jobs to finish...
error: build failed
Yep. The nix crate only supports FreeBSD currently.
The OpenBSD guys are stubborn of course, though they might have a point. tbh somebody could just fork a BSD OS to make this happen. rutsybsd or whatever you want to call it.
edit: just tried to build what you linked; does cargo pin versions and verify the downloads? Fetching so many dependencies at build time makes me super nervous. Are all those dependencies BSD licensed? It didn’t even compile on my machine, maybe the NixOS version of rust is too old - I don’t know if the rust ecosystem is stable enough to base an OS on yet without constantly fixing broken builds.
just tried to build what you linked, does cargo pin versions and verify the downloads?
Cargo pins versions in Cargo.lock, and coreutils has one https://github.com/uutils/coreutils/blob/master/Cargo.lock.
Cargo checks download integrity against the registry.
For offline builds, you can vendor the dependencies: https://github.com/alexcrichton/cargo-vendor, downloading them all and working from them.
Are all those dependencies BSD licensed?
Yes. Using: https://github.com/onur/cargo-license
Apache-2.0/MIT (50): bit-set, bit-vec, bitflags, bitflags, block-buffer, byte-tools, cc, cfg-if, chrono, cmake, digest, either, fake-simd, filetime, fnv, getopts, glob, half, itertools, lazy_static, libc, md5, nodrop, num, num-integer, num-iter, num-traits, num_cpus, pkg-config, quick-error, rand, regex, regex-syntax, remove_dir_all, semver, semver-parser, sha2, sha3, tempdir, tempfile, thread_local, time, typenum, unicode-width, unindent, unix_socket, unreachable, vec_map, walker, xattr
BSD-3-Clause (3): fuchsia-zircon, fuchsia-zircon-sys, sha1
MIT (21): advapi32-sys, ansi_term, atty, clap, data-encoding, generic-array, kernel32-sys, nix, onig, onig_sys, pretty-bytes, redox_syscall, redox_termios, strsim, term_grid, termion, termsize, textwrap, void, winapi, winapi-build
MIT OR Apache-2.0 (2): hex, ioctl-sys
MIT/Unlicense (7): aho-corasick, byteorder, memchr, same-file, utf8-ranges, walkdir, walkdir
It didn’t even compile on my machine, maybe the nixos version of rust is too old - i don’t know if the rust ecosystem is stable enough to base an OS on yet without constantly fixing broken builds.
This is one of my frequent outstanding annoyances with Rust currently: I don’t have a problem with people using the newest version of the language as long as their software is not being shipped on something with constraints, but at least they should document and test the minimum version of rustc they use.
coreutils just checks against “stable”, which moves every 6 weeks: https://github.com/uutils/coreutils/blob/master/.travis.yml
Can you give me rustc --version?
Still, “commitment to stability” is a function of adoption. If, say, Ubuntu start shipping a Rust version in an LTS release, more and more people will try to stay backward compatible to that.
You’re probably hitting https://github.com/uutils/coreutils/issues/1064 then.
Also, looking at it, it is indeed that they use combinator functionality that became available in Rust 1.19.0. std::cmp::Reverse can easily be dropped and replaced by other code if 1.17.0 support were needed.
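For illustration (a generic sketch, not the actual uutils code), the 1.19-only `std::cmp::Reverse` and a comparator that works on older compilers like 1.17.0 produce the same ordering:

```rust
use std::cmp::Reverse;

fn main() {
    // Rust >= 1.19.0: Reverse flips the ordering of its contents.
    let mut v = vec![3, 1, 2];
    v.sort_by_key(|&x| Reverse(x));
    assert_eq!(v, [3, 2, 1]);

    // Equivalent on older compilers: a comparator with the
    // arguments swapped sorts in descending order.
    let mut w = vec![3, 1, 2];
    w.sort_by(|a, b| b.cmp(a));
    assert_eq!(w, [3, 2, 1]);
}
```

So pinning a minimum rustc lower than 1.19 would only cost a one-line change here.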
Thanks, I filed https://github.com/uutils/coreutils/issues/1100, asking for better docs.
Rust is “stable” in the sense that it is backwards compatible. However it is evolving rapidly so new crates or updates to crates may require the latest compiler. This won’t mean you’ll have to constantly fix broken builds; just that pulling in new crates may require you to update to the latest compiler.
Yes, Cargo writes a Cargo.lock file with versions and hashes. Application developers are encouraged to commit it into version control.
Dependencies are mostly MIT/Apache in the Rust world. You can use cargo-license to quickly look at the licenses of all dependencies.
Redox OS is fully based on Rust :)
Although you’re right to point out that project, one of Theo’s arguments had to do with compilation speeds:
By the way, this is how long it takes to compile our grep:
0m00.62s real 0m00.63s user 0m00.53s system
… which is currently quite undoable for any Rust project, I believe. Cannot say if he’s exaggerating how important this is, though.
Now, at least for GNU coreutils, ./configure alone runs for a good chunk of the time the Rust coreutils needs to compile (2 min for a full release build, vs 1m20.399s just for configure). Also, the build step itself is faster (GNU coreutils takes a minute).
Sure, this is comparing apples and oranges a little. Different software, different development states, different support. The rust compiler uses 4 cores during all that (especially due to cargo running parallel builds), GNU coreutils doesn’t do that by default (-j4 only takes 17s). On the other hand: all the crates that cargo builds can be shared. That means, on a build farm, you have nice small pieces that you know you can cache - obviously just once per rustc/crate pairing.
Also, obviously, build farms will pull all kinds of stunts to accelerate things and the Rust community still has to grow a lot of that tooling, but I don’t perceive the problem as fundamental.
EDIT: heh, forgot --release. And that for me. Adjusted the wording and the times.
OpenBSD doesn’t use GNU coreutils, either; they have their own implementation of the base utils in their tree (here’s the implementation of ls, for example). As I understand it, there’s lots of reasons they don’t use GNU coreutils, but complexity (of the code, the tooling, and the utils themselves) is near the top of the list.
Probably because most (all?) of the OpenBSD versions of the coreutils existed before GNU did, let alone GNU coreutils. OpenBSD is a direct descendant of Berkeley’s BSD. Not to mention the licensing problem: GNU is all about the GPL; OpenBSD is all about the BSD (and its friends) license. Not that your reason isn’t also probably true.
That means, on a build farm, you have nice small pieces that you know you can cache - obviously just once per rustc/crate pairing.
FWIW sccache does this I think
I think it would be fairer to look at how long it takes the average developer to knock out code-level safety issues plus compile on a modern machine. I think Rust might be faster per module of code. From there, incremental builds and caching will help a lot. This is another strawman excuse, though, since the Wirth-like languages could’ve easily been modified to output C, input C, turn safety off when needed, and so on. They compile faster than C on about any CPU. They’re safe by default. The runtime code is acceptable, and it improves even further if you output C to leverage C’s optimizing compilers.
Many defenses of not using safe languages are that easy to discount. And OpenBSD is special because someone will point out that porting a Wirth-like compiler is a bit of work. It’s not even a fraction of the work and expertise required for their C-based mitigations. Even those might have been easier to do in a less messy language. They’re motivated more by their culture and preferences than by any technical argument about a language.
It’s a show stopper.
Slow compile times are a massive problem for C++, honestly I would say it’s one of the biggest problems with the language, and rustc is 1-2 orders of magnitude slower still.
It’s a show stopper.
Hm, yet, last time I checked, C++ was relatively popular, Java (also not the fastest in compilation) is doing fine and scalac is still around. There’s people working on alternatives, but show stopper?
Sure, it’s an huge annoyance for “build-the-world”-approaches, but well…
Slow compile times are a massive problem for C++, honestly I would say it’s one of the biggest problems with the language, and rustc is 1-2 orders of magnitude slower still.
This heavily depends on the workload. rustc is quite fast when talking about rather non-generic code. The advantage of Rust over C++ is that coding in mostly non-generic Rust is a viable C alternative (and the language is built with that in mind), while a lot of C++ just isn’t very useful over C if you don’t rely on templates very much.
Also, rustc stable is a little over 2 years old vs. C/C++ compilers had ample headstart there.
I’m not saying the problem isn’t there, it has to be seen in context.
C++ was relatively popular, Java (also not the fastest in compilation) is doing fine and scalac is still around.
Indeed, outside of gamedev most people place zero value in fast iteration times. (which unfortunately also implies they place zero value in product quality)
rustc is quite fast when talking about rather non-generic code.
That’s not even remotely true.
I don’t have specific benchmarks because I haven’t used rust for years, but see this post from 6 months ago that says it takes 15 seconds to build 8k lines of code. The sqlite amalgamated build is 200k lines of code and has to compile on a single core because it’s one compilation unit, and still only takes a few seconds. My C++ game engine is something like 80k if you include all the libraries and builds in like 4 seconds with almost no effort spent making it compile fast.
edit: from your coreutils example above, rustc takes 2 minutes to build 43k LOC, gcc takes 17 seconds to build 270k, which makes rustc 44x slower…
The last company I worked at had C++ builds that took many hours and to my knowledge that’s pretty standard. Even if you (very) conservatively say rustc is only 10x slower, they would be looking at compile times measured in days.
while a lot of C++ just isn’t very useful over C if you don’t rely on templates very much.
That’s also not true at all. Only small parts of a C++ codebase need templates, and you can easily make those templates simple enough that it has little to no effect on compile times.
Also, rustc stable is a little over 2 years old vs. C/C++ compilers had ample headstart there.
gcc has gotten slower over the years…
Even if you (very) conservatively say rustc is only 10x slower,
Rustc isn’t slower to compile than C++. Depends on the amount of generics you use, but the same argument goes for C++ and templates. Rust does lend itself to more usage of generics which leads to more compact but slower-compiling code, which does mean that your time-per-LOC is higher for Rust, but that’s not a very useful metric. Dividing LOCs is not going to get you a useful measure of how fast the compiler is. I say this as someone who has worked on both a huge Rust and a huge C++ codebase and know what the compile times are like. Perhaps slightly worse for Rust but not like a 2x+ factor.
The main compilation speed problem of Rust vs C++ is that it’s harder to parallelize Rust compilations (large compilation units) which kind of leads to bottleneck crates. Incremental compilation helps here, and codegen-units already works.
Rust vs C is a whole other ball game though. The same ball game as C++ vs C.
That post, this post, my experience, lines, seconds… very scientific :) Hardware can be wildly different, lines of code can be wildly different (especially in the amount of generics used), and the amount of lines necessary to do something can be a lot smaller in Rust, especially vs. plain C.
To add another unscientific comparison :) Servo release build from scratch on my machine (Ryzen 7 1700 @ 3.9GHz, SATA SSD) takes about 30 minutes. Firefox release build takes a bit more. Chromium… even more, closer to an hour. These are all different codebases, but they all implement a web browser, and the compile times are all in the same ballpark. So rustc is certainly not that much slower than clang++.
Only small parts of a C++ codebase need templates
Maybe you write templates rarely, but typical modern C++ uses them all over the place. As in, every STL container/smart pointer/algorithm/whatever is a template.
To add another unscientific comparison :) Servo release build from scratch on my machine (Ryzen 7 1700 @ 3.9GHz, SATA SSD) takes about 30 minutes. Firefox release build takes a bit more. Chromium… even more, closer to an hour. These are all different codebases, but they all implement a web browser, and the compile times are all in the same ballpark. So rustc is certainly not that much slower than clang++.
You’re saying compiling 2.25M lines of code for a not feature complete browser that takes 30 minutes is comparable to compiling 18-35M lines of code in ‘a bit more’?
Line counters like this one are entirely wrong.
This thing only counted https://github.com/servo/servo. Servo code is actually split among many many repositories.
HTML parser, CSS parser, URL parser, WebRender, animation, font sanitizer, IPC, sandbox, SpiderMonkey JS engine (C++), Firefox’s media playback (C++), Firefox’s canvas thingy with Skia (C++), HarfBuzz text shaping (C++) and more other stuff — all of this is included in the 30 minutes!
plus,
the amount of lines necessary to do something can be a lot smaller in Rust
Agreed, it grossly underestimates how much code Chromium contains. You are aware of the horrible depot_tools and the amount of stuff they pull in?
My point was, you are comparing a feature-incomplete browser whose code base is smaller by at least an order of magnitude, yet takes 30 minutes compared to “closer to an hour” for Chromium. I think your argument doesn’t hold - you are free to provide data to prove me wrong.
Servo’s not a monolithic codebase. Firefox is monolithic. It’s a bad comparison.
Chromium is also mostly monolithic IIRC.
Free- and OpenBSD can compile userland from source:
So decent compile times are of essence, especially if you are targeting multiple architectures.
The magic words being “There has been no attempt”. With that, especially by saying “attempt”, he’s completely wrong. There have been attempts. At everything he lists. (he lists more here: https://www.youtube.com/watch?v=fYgG0ds2_UQ&feature=youtu.be&t=2112 all of what Theo mentions has been written, in Rust, some even have multiple projects, and very serious ones at that)
For a more direct approach at BSD utils, there’s the redox core utils, which are BSD-util based. https://github.com/redox-os/coreutils
Other magic words are “POSIX compatible”. Neither redox-os nor the uutils linked by @Manishearth seem to care particularly about this. I haven’t looked all that closely, but picking some random utils shows that none of them is fully compliant. It’s not even close, so surely they can’t be considered valid replacements of the C originals.
For example (assuming that I read the source code correctly) both implementations of cat lack the only POSIX-required option -u and the implementations of pwd lack both -L and -P. These are very simple tools and are considered done at least by uutils…
So, Theo may be wrong by saying that no attempts have been made, but I believe a whole lot of rather hard work still needs to be done before he will acknowledge serious efforts.
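To make the `-u` gap concrete, here is a rough sketch (the helper name `cat_unbuffered` is made up; this is not code from uutils or redox) of the observable behaviour POSIX asks for: bytes copied through with no output buffering:

```rust
use std::io::{self, Read, Write};

// Copy input to output one byte at a time, flushing after each byte -
// the observable effect POSIX requires of `cat -u` (no output delay
// due to buffering).
fn cat_unbuffered<R: Read, W: Write>(mut input: R, output: &mut W) -> io::Result<()> {
    let mut byte = [0u8; 1];
    while input.read(&mut byte)? > 0 {
        output.write_all(&byte)?;
        output.flush()?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Demonstrated on an in-memory buffer instead of stdin/stdout.
    let mut out = Vec::new();
    cat_unbuffered(&b"hello, world"[..], &mut out)?;
    assert_eq!(out, b"hello, world");
    Ok(())
}
```

Wiring this up behind an actual `-u` flag is small work, which is rather the point: the remaining gap is tedious spec coverage, not hard engineering.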
This rapidly will devolve into a no true scotsman argument.
https://github.com/uutils/coreutils#run-busybox-tests
uutils is running the busybox tests. Which admittedly test for something other than POSIX compliance, but neither the GNU or BSD coreutils are POSIX-compliant anyway.
uutils is based on the GNU coreutils, redox’s ones are based on the BSD ones, which is certainly a step in the right direction and can certainly be counted as an attempt.
For example (assuming that I read the source code correctly) both implementations of cat lack the only POSIX-required option -u and the implementations of pwd lack both -L and -P.
Nobody said they were complete.
All we’re talking about is Theo’s rather strong point that “there has been no attempt”. There has.
I’m curious about this statement by TdR in the linked email
For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.
Is this true?
As always with these complaints, I can’t find any reference to exact issues. What’s true is that LLVM uses quite a bit of memory to compile, and rustc builds tend not to be the smallest themselves. But not that big. Also, recent improvements have definitely helped here.
I do regularly build the full chain on an Acer C720P with FreeBSD, which has a Celeron and 2 GB of RAM. I have to shut down the X server and everything first, but it works.
As usual, this is probably an issue of the kind “please report actual problems, and we’ll work on fixing them”. “We want to provide a build environment for OpenBSD and X Y Z is missing” is something we’d be happy to support; some fuzzy notion of “this doesn’t fulfill our (somewhat fuzzy) criteria” isn’t actionable.
Rust for Haiku does ship Rust with i386 binaries and bootstrapping compilers (stage0): http://rust-on-haiku.com/downloads
As always with these complaints, I can’t find any reference to exact issues.
Only because it’s a thread on the OpenBSD mailing lists, people reading that list have the full context of the recent issues with Firefox and Rust.
I’ll assume you just don’t follow the list so here is the relevant thread lang/rust: update to 1.22.1
- For this release, I had lot of problem for updating i386 to 1.22.1 (too much memory pressure when compiling 1.22 with 1.21 version). So the bootstrap was initially regenerated by crosscompiling it from amd64, and next I regenerate a proper 1.22 bootstrap from i386. Build 1.22 with 1.22 seems to fit in memory.
As I do all this work with a dedicated host, it is possible that ENOMEM will come back in bulk.
And if the required memory still grows, rustc will be marked BROKEN on i386 (and firefox will not be available anymore on i386)
Only because it’s a thread on the OpenBSD mailing lists, people reading that list have the full context of the recent issues with Firefox and Rust.
Sure, but has this:
And if the required memory still grows, rustc will be marked BROKEN on i386 (and firefox will not be available anymore on i386).
Reached the Rust maintainers? (thread on the internals mailing list, issue on rust-lang/rust?)
I’m happy to be corrected.
Reached the Rust maintainers? (thread on the internals mailing list, issue on rust-lang/rust?)
I don’t know. I don’t follow rust development, however the author of that email is a rust contributor like I mentioned to you in the past so I assume that it’s known to people working on the project. Perhaps you should check on that internals mailing list, I checked rust-lang/rust on github but didn’t find anything relevant :)
I checked IRLO (https://internals.rust-lang.org/) and also nothing. (“internals” by the way referring to the “compiler internals”; we have no closed mailing list). The problem on projects of that scale seems to be that information travel is a huge issue, and that leads to aggravation. The reason I’m asking is not that I want to disprove you, I just want to ensure that I don’t open a discussion that’s already happening somewhere just because something is going through social media currently.
Thanks for pointing that out, I will ensure there’s some discussion.
Reading the linked post, it seems to mostly be a regression when doing the jump between 1.21 to 1.22, so that should probably be a thing to keep an eye out for.
Here’s a current Rust bug that makes life hard for people trying to work on newer platforms.
I’m skeptical; this has certainly worked for me in the past.
I used 32 bit lab machines as a place to delegate builds to back when I was a student.
Note that different operating systems will have different address space layout policies and limits. Your effective space can vary from possibly more than 3GB to possibly less than 2GB.
I feel like a lot of people are missing some very key points. Especially given there have been links to posix compliant coreutil replacements - sure, TdR might have been wrong about their existence, but he wasn’t wrong in the sense that those tools are likely not going to ever be able to build on things like macppc or alpha:
alpha, amd64, arm64, armv7, hppa, i386, loongson, macppc, octeon, sgi, sparc64. Even if you had a coreutils replacement - it’s not a comprehensive replacement; you would have to maintain two sets of code, one for the “safe” architectures and one for the rest. I don’t care how safe your language is - doubling down on code base size isn’t a good way to prevent bugs! Be it ls.. or the kernel.. or whatever. Given skade‘s comment on not having enough hands available to make a new backend work, I am not holding my breath for new architectures being supported on an OS that isn’t even officially supported!
Given skade‘s comment on not having enough hands available to make a new backend work, I am not holding my breath for new architectures being supported on an OS that isn’t even officially supported!
To be clear: All I’m saying is that we need hands, we can grow them at any time. We are very well set up for contributions (we have stellar merge and discussion times), but we need to focus the resources we can freely allocate.
If the problem grinds someone’s gears, we’d be very happy to have and support them, but we also understand very much that people have other things to do. The thing is that we don’t want a half-arsed solution now; we’d prefer a better one later.
I totally see how rustc is just not for you currently, especially given #1/#2.
There might be an understated cultural component IMHO - language package managers probably don’t mesh with an OS that has its own, and I doubt that they want to have to use cargo/cabal/etc in the base system, especially with remote packages.
This, on the other hand, is an issue actively being worked on. debian has similar issues.
Issue: https://github.com/rust-lang/rust-roadmap/issues/12
Relevant RFC: https://github.com/aturon/rfcs/blob/build-systems/text/0000-build-systems.md
This talks about integrating with bazel and similar mostly, but the scope includes OS builds and the issues are similar.
Mostly, the plan is to allow using cargo tooling just for dependency resolution and then build according to that plan with the build system of your choice.
Note that this is an eRFC, which will end up with an experimental feature intended for testing. This isn’t the final cut, there needs to be a proper RFC after that. We need to get that into the hands of users, though.
My self-hosted site:
is intentionally OStatus-compatible, and will work with GNU Social, Friendica, and others. Mastodon hasn’t implemented enough of the spec to work for me, though, and I haven’t got around to putting in the changes needed to support their limited implementation yet.
Nice! Is the code for the ostatus stuff up anywhere? I am building mycete to do something similar - currently it lets me post from Matrix/Riot to mastodon, twitter and pnut - my original goal for it was to have it post to a “blog” kinda thing!
It’s mostly the pubsubhubbub plugin and the salmon plugin (and also the diso actionstream plugin with some custom hacks, but that’s very optional)
Not from Mastodon yet, because they don’t implement the whole OStatus spec. Any fully-compliant implementation like GNU Social will work. Hopefully I’ll find the time soon to bend my implementation to Mastodon’s limitations.
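For anyone poking at the same problem: the push side of OStatus is PubSubHubbub (since standardized as WebSub), and subscribing is just a form-encoded POST to the hub. A minimal sketch, with hypothetical feed/callback URLs:

```python
from urllib.parse import urlencode

def build_subscribe_body(topic: str, callback: str) -> str:
    # hub.mode / hub.topic / hub.callback are the core parameters of
    # the PubSubHubbub/WebSub spec; the hub later verifies intent by
    # GETing the callback with a challenge.
    return urlencode({
        "hub.mode": "subscribe",
        "hub.topic": topic,
        "hub.callback": callback,
    })

# Hypothetical URLs, purely for illustration.
body = build_subscribe_body(
    "https://example.net/feed.atom",
    "https://example.org/websub/callback",
)
# POST `body` to the hub as application/x-www-form-urlencoded.
print(body)
```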
\o/ :D
This really is an apples to oranges comparison - and it didn’t have to be. Yes, you can tunnel ports with ssh (oranges).. but you can also do full-on (layer 2, layer 3) Virtual Private Networking (apples)!
From the ssh(1) man page:
SSH-BASED VIRTUAL PRIVATE NETWORKS
ssh contains support for Virtual Private Network (VPN) tunnelling using
the tun(4) network pseudo-device, allowing two networks to be joined
securely. The sshd_config(5) configuration option PermitTunnel controls
whether the server supports this, and at what level (layer 2 or 3
traffic).
The following example would connect client network 10.0.50.0/24 with
remote network 10.0.99.0/24 using a point-to-point connection from
10.1.1.1 to 10.1.1.2, provided that the SSH server running on the gateway
to the remote network, at 192.168.1.15, allows it.
On the client:
# ssh -f -w 0:1 192.168.1.15 true
# ifconfig tun0 10.1.1.1 10.1.1.2 netmask 255.255.255.252
# route add 10.0.99.0/24 10.1.1.2
On the server:
# ifconfig tun1 10.1.1.2 10.1.1.1 netmask 255.255.255.252
# route add 10.0.50.0/24 10.1.1.1
Client access may be more finely tuned via the /root/.ssh/authorized_keys
file (see below) and the PermitRootLogin server option. The following
entry would permit connections on tun(4) device 1 from user “jane” and on
tun device 2 from user “john”, if PermitRootLogin is set to
“forced-commands-only”:
tunnel="1",command="sh /etc/netstart tun1" ssh-rsa ... jane
tunnel="2",command="sh /etc/netstart tun2" ssh-rsa ... john
Since an SSH-based setup entails a fair amount of overhead, it may be
more suited to temporary setups, such as for wireless VPNs. More
permanent VPNs are better provided by tools such as ipsecctl(8) and
isakmpd(8).
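The man page shows the client and server plumbing but not the server configuration that enables it. A sketch of the relevant sshd_config(5) lines (the values chosen here are one reasonable setup for the layer 3 example above, not the only option):

```
# /etc/ssh/sshd_config on the VPN server

# Allow layer 3 tun(4) tunnelling; "ethernet" would permit
# layer 2 instead, and "yes" permits both.
PermitTunnel point-to-point

# Pairs with the per-key forced commands shown in the
# authorized_keys example above.
PermitRootLogin forced-commands-only
```

Reload sshd after editing, and the `ssh -w 0:1` invocation from the example should then be allowed to attach the tunnel devices.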
I am at https://deftly.net. I mostly write about OpenBSD-related things :D