I absolutely love that the BSDs are switching to llvm. This makes me giddy like a school child.
By switching to a full llvm toolchain, the BSDs will be able to do some really nifty things that simply cannot be done in Linux. HardenedBSD, for example, is working on integrating Cross-DSO CFI in base (and, later, ports). NetBSD is looking at deeper sanitizer integration in base. From an outsider’s perspective, it seems OpenBSD is playing catch up right now, but they’ve got the talent and the manpower to do so within a reasonable period of time.
It’s my dream that all the BSDs switch fully to llvm as the compiler toolchain, including llvm-ar, llvm-nm, llvm-objdump, llvm-as, etc. Doing so will allow the BSDs to add some really nifty security enhancements. Want an enterprise OS that secures the entire ecosystem? Choose BSD.
Linux simply cannot compete here. A userland that innovates in lockstep with the kernel is absolutely required to do these kinds of things. Go BSD!
You’re overstating it. Most of the mitigation development of the past decade or two was for Linux. Most of the high-security solutions integrated with Linux, often virtualizing it. The most-secure systems you can get right now are separation kernels running Linux alongside critical apps. Two examples. Some of the mitigation work is also done for FreeBSD. Of that, some is done openly for wide benefit, and some uses the BSD license specifically to lock down/patent/sue when commercialized. Quick pause to say thanks for your own work on the open side. :)
So, what’s best for people depends on a lot of factors: what apps they want, what they’re trying to mitigate, stance on licensing, whether they have money for proprietary solutions or custom work, time for custom work or debugging if FOSS, and so on. One is not superior to the other. That could change if any company builds a mobile/desktop/server-class processor with memory safety or CFI built in, checking every sensitive operation. Stuff like that exists in CompSci for both OSes. Hardware-level security could be an instant win. Past that, all I can say is it depends.
On the embedded side, Microsemi says CodeSEAL works with Linux and Windows. CoreGuard, based on the SAFE architecture, currently runs FreeRTOS. The next solution needs to be at least as strong at addressing root causes.
Thanks for making me think a bit deeper on this subject. And thanks for the kind words on my own work. :)
With a historical perspective, I agree with you. grsecurity has done a lot with regards to Linux security (and security in general). I think the entire computing industry owes a lot to grsecurity, especially those of us in infosec.
With the BSDs (except FreeBSD) having the core exploit mitigations in place (ASLR, W^X), it’s time to move on to other, more advanced mitigations. There’s only so much the kernel can do to harden userland and keep performance in check. Thus, these more advanced exploit mitigations must be implemented in the compiler. The BSDs are positioning themselves to be able to adopt and tightly integrate compiler-based exploit mitigations like CFI. Due to Linux’s fragmentation, it’s simply not possible for Linux to position itself in the same way. HardenedBSD has already surpassed Linux as far as userland exploit mitigations are concerned. This is due in part to using llvm as the compiler toolchain.
Microsoft is making huge strides as well. However, the PE file format, which allows individual PE objects to opt-in or opt-out of the various exploit mitigations, is a glaring weakness commonly abused by attackers. All it takes is for one DLL to not be compiled with /DYNAMICBASE, and boom goes the dynamite. Recently, VLC on Windows was found not to have ASLR enabled, even though it was compiled with /DYNAMICBASE and OS-enforced ASLR enabled, due to the .reloc section being stripped. Certain design decisions made decades ago by Microsoft are still biting them in the butt.
I completely agree with you about hardware-based exploit mitigations. The CHERI project from the University of Cambridge in England is doing just that: hardware-enforced capabilities and bounds enforcement. However, it’ll probably take another 20+ years for their work to be available in silicon, and an additional 20+ years after that for it to be broadly deployed (and thus actually used). In the meantime, we need these software-based exploit mitigations.
I absolutely love that the BSDs are switching to llvm.
What does this news story have to do with LLVM?
I guess I view it differently, due to newer versions of gcc being GPLv3, which limits who can adopt it. With llvm being permissively licensed, it can be adopted by a much wider audience. The GPL is driving FreeBSD to replace all GPL code in the base operating system with more permissively-licensed options.
For the base OS, gcc is dead to me.
(Speaking from the perspective of a FreeBSD/HardenedBSD user): gcc has no real future in the BSDs. Because of licensing concerns (GPLv3), the BSDs are moving towards a permissively-licensed compiler toolchain. Newer versions of gcc do contain sanitizer frameworks, but they’re not usable in the BSD base operating system.
Good to know! Thanks! Perhaps my perception is a bit skewed towards FreeBSD lines of thinking.
I know NetBSD is working on incorporating llvm. I wonder whether they use newer versions of gcc.
This is a great usability improvement. Thank you Peter Hessler :)
That said, it’s still a little bit sad that this is only just being introduced in 2018.
That said, it’s still a little bit sad that this is only just being introduced in 2018.
Technically - OpenBSD has had various tools (1, 2, 3, and others) to do this very task for quite a long time. But none of them were considered the correct approach.
Also, this is something that’s pretty unique to OpenBSD IMO. The end result is the same as with other systems.. sure. But this is unique among the unix world.
Q: What’s the difference?
Glad I asked! This is entirely contained within the base system and requires no tools beyond ifconfig!
Linux has ip, iw, networkmanager, iwconfig..(likely others)… and they are all using some weird combo of wpa_supplicant.. autogen’d text files.. and likely other things.
Have you ever tried to manually configure wireless on linux? It’s a nightmare. Always has been.
NetworkManager does a really good job of making it feel like there isn’t a kludge going on behind the scenes.. It does this by gluing all the various tools together so you don’t have to know about them. IMO this is what happens when you “get it done now” vs “do it right”.
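For contrast, the “manual” Linux path being described usually looks something like this (a sketch only: the interface name wlan0, the SSID, and the config path are assumptions, and the exact tools vary by distro):

```shell
# Classic manual WiFi bring-up on Linux: several separate moving parts.
# (Sketch; requires root, and details vary by distro.)
cat > /etc/wpa_supplicant.conf <<'EOF'
network={
    ssid="home"
    psk="my passphrase"
}
EOF
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf  # WPA handshake daemon
dhclient wlan0                                           # separate DHCP client
```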
With great simplicity comes great security. NetworkManager @ 6c3174f6e0cdb3e0c61ab07eb244c1a6e033ff6e:
github.com/AlDanial/cloc v 1.74  T=28.62 s (48.2 files/s, 45506.1 lines/s)
------------------------------------------------------------
Language              files      blank    comment       code
------------------------------------------------------------
PO File                  66     125328     161976     457879
C                       541      71112      66531     321839
C/C++ Header            528      10430      15928      34422
XML                      59       1406       2307       6692
make                      6        885        229       5009
Python                   40       1189       1128       4597
NAnt script              65        626          0       3968
m4                        8        237        123       1958
Lua                      11        212        453       1314
Bourne Shell             21        232        238       1115
XSLT                      5         65          3        929
Perl                      4        166        243        480
Bourne Again Shell       11         30         35        241
C++                       4         62        121        178
YAML                      4         12          6        161
JavaScript                1         33         21        130
Ruby                      3         39         92        110
Lisp                      2         15         24         23
------------------------------------------------------------
SUM:                   1379     212079     249458     841045
------------------------------------------------------------
VS
ifconfig@1.368:
github.com/AlDanial/cloc v 1.74  T=0.12 s (32.2 files/s, 58201.7 lines/s)
------------------------------------------------------------
Language              files      blank    comment       code
------------------------------------------------------------
C                         2       1009        345       5784
C/C++ Header              1          7         16         58
make                      1          3          1          6
------------------------------------------------------------
SUM:                      4       1019        362       5848
------------------------------------------------------------
Anyway - I guess my point is this:
Have you ever tried to manually configure wireless on linux? It’s a nightmare. Always has been.
No. The Linuxes I use come with an out-of-the-box experience that makes wireless as easy as clicking a box, clicking a name, and typing in the password; it works, and it reconnects when nearby. They have been like that since I bought an Ubuntu-specific Dell a long time ago. They knew it was a critical feature that needed to work easily with no effort; some even set it up during installation so parts of the install could be downloaded over WiFi. Then they did whatever they had to do within their constraints (time/talent/available code) to get it done.
And then I was able to use it, with the only breaks being wireless driver issues that had answers on Q&A sites. Although that was annoying, I didn’t have to think about something critical I shouldn’t have to think about. Great product development in action for an audience that has other things to do than screw around with half-built wireless services. That’s a compliment about what I used rather than a jab at OpenBSD’s, which I didn’t use. I’m merely saying quite a few of us appreciate stuff that saves us time once or many times. If something is common and critical, adoption can go up if it’s a solved problem with minimal intervention out of the box.
That said, props to your project member who solved the problem with a minimally-complex solution in terms of code and dependencies. I’m sure that was hard work. I also appreciate you illustrating that for us with your comparisons. The difference is almost comical in the work people put in with very different talents, goals and constraints. And m4 isn’t gone yet. (sighs)
No. The Linuxes I use come with an out-of-the-box experience that makes wireless as easy as clicking a box, clicking a name, and typing in the password; it works, and it reconnects when nearby.
And then something goes wrong in the fragile mess of misfeatures, and someone has to dig in and debug, or a new feature comes along and someone has to understand the stack of hacks to understand it, before it can be added. There’s something to be said for a system that can be understood.
There is something to be said for a system that can be understood. I totally agree. I also think there’s something to be said for a reliable, more-secure system that can be effortlessly used by hundreds of millions of people. A slice of them will probably do things that were worth the effort. The utilitarian in me says make it easy for them to get connected. The pragmatist also says a highly-usable, effortless experience leads to more benefits in terms of contributions, donations, and/or business models. These seemingly-contradicting philosophies overlap in this case. I think the end justifies the means here. One can always refactor the cruddy code later if it’s just one component in the system with a decent API.
One can always refactor the cruddy code later if it’s just one component in the system with a decent API.
The problem isn’t the code, it’s the system that it’s participating in.
One can always refactor the cruddy code later if it’s just one component in the system with a decent API.
This just leads to systemd, and more misfeatures…
There are Linuxes without systemd. Even those that have it didn’t before they got massive adoption/impact/money. So, it doesn’t naturally lead to it. Just bad decision-making by the groups controlling popular OSes, from what I can tell. Then, there’s also all the good stuff that comes with their philosophy that strict OSes like OpenBSD haven’t achieved. The Linux server market, cloud, desktops, embedded, and Android are worth the drawbacks if assessing by the benefits gained by many parties.
Personally, I’m fine with multiple types of OS being around. I like and promote both. As usual, I’m just gonna call out anyone saying nobody can critique an option, or someone else saying it’s inherently better than all alternatives. Those positions are BS. These things are highly contextual.
This is really great. I wish all other projects could do that, preferring elegance to throwing code at the wall, but sometimes life really takes its toll and we cave and just build a Frankenstein to get shit done.
I really appreciate all the work by the OpenBSD folks. Do you have any idea how the other *BSDs deal with wireless?
What’s really sad is that the security of other operating systems can’t keep up despite having more manpower.
It’s almost like if you prioritize the stuff that truly matters, and are willing to accept a little bit of UX inconvenience, you might happen upon a formula that produces reliable software? Who would have thought?
That’s what I told the OpenBSD people. They kept on with a poorly-marketed monolith in an unsafe language, without the methods from CompSci that were knocking out whole classes of errors. They kept having preventable bugs and adoption blockers. Apparently, the other OS developers have similarly hard-to-change habits and preferences, with less focus on predictable, well-documented, robust behavior.
I think this is just a matter of what you think matters. There’s no sadness here. The ability to trade off security for features and vice versa is good. It lets us accept the level of risk we like.
On the other hand, it’s really sad, for instance, that OpenBSD has had so many public security flaws compared to my kernel ;P
On the other hand, it’s really sad, for instance, that OpenBSD has had so many public security flaws compared to my kernel ;P
What’s your kernel?
It’s a joke. Mine is a null kernel. It has zero code, so no features, so no security flaws. Just like OpenBSD has fewer features and fewer known security flaws than Linux, mine has fewer features but no security flaws.
Unlike OpenBSD, mine is actually immune to Meltdown and Spectre.
Not having public flaws doesn’t mean you don’t have flaws. Could mean not enough people are even considering checking for flaws. ;)
That said, it’s still a little bit sad that this is only just being introduced in 2018.
Would you like to clarify what you mean by this comment? Cause right now my interpretation of it is that you feel entitled to have complicated features supported in operating systems developed by (largely unpaid) volunteers.
I’m getting a bit tired of every complaint and remark being reduced to entitlement. Yes, I know that there is a lot of unjustified entitlement in the world, and it is rampant in the open source world, but I don’t feel entitled to anything in free or open source software space. As someone trying to write software in my spare time, I understand how hard it is to find spare time for any non-trivial task when it’s not your job.
Though I am not a heavy user, I think OpenBSD is an impressive piece of software, with a lot of thought and effort put into the design and robustness of the implementation.
I just think it’s somewhat disheartening that something this common (switching wireless networks) was not possible without manual action (rewriting a configuration file, or swapping configuration files, and restarting the network interface) every time you needed to switch or moved from home to the office.
Whether you feel like this is me lamenting the fact that there are so few contributors to important open source projects, me lamenting the fact that it is so hard to make time to work on said project, or me being an entitled prick asking for features on software I don’t pay for (in money or in time/effort) is entirely your business.
Just for the record I didn’t think you sounded entitled. The rest of the comment thread got weirdly sanctimonious for some reason.
Volunteers can work on whatever they want, and anybody’s free to comment on their work. Other operating systems have had the ability to switch wifi networks now for a long time, so it’s fair to call that out. And then Peter went and did something about it which is great.
Previously I’ve been using http://ports.su/net/wireless for wifi switching on my obsd laptop, but will use the new built-in feature when I upgrade the machine.
Some of the delay for the feature may be because the OS, while very capable, doesn’t seem designed to preemptively do things on the user’s behalf. Rather the idea seems to be that the user knows what’s best and will ask the OS to do things. For instance when I dock or undock my machine from an external monitor it won’t automatically switch to using the display. I have a set of dock/undock scripts for that. I appreciate the simple “manual transmission” design of the whole thing. The new wifi feature seems to be in a similar spirit, where you rank each network’s desirability and the OS tries in that order.
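Concretely, the feature under discussion amounts to listing preferred networks in the interface’s config file (the SSIDs, keys, and the iwm0 interface name below are made up; the syntax is per the join support being announced here):

```shell
# /etc/hostname.iwm0 -- hypothetical example of the new auto-join config,
# ranked in order of preference:
join home-wifi wpakey homepassphrase
join office-wifi wpakey officepassphrase
join cafe-hotspot                # open network, no key
dhcp

# Or as a one-off from a shell, without editing the file:
# ifconfig iwm0 join home-wifi wpakey homepassphrase
```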
Interesting, I didn’t know about that either. I used my own bash script to juggle config files and restart the interface, but the new support in ifconfig itself is much easier.
I think the desire for OpenBSD to not do things without explicit user intent is certainly part of why this wasn’t added before, as well as its limited use as a laptop OS until relatively recently.
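The pre-join workaround mentioned here and elsewhere in the thread was typically just a few lines of shell. A dry-run sketch (the wifi_switch name, the /etc/wifi layout, and the iwm0 default are all made up; it echoes the privileged commands instead of running them):

```shell
# Hypothetical per-network config swapper, as described above.
# Dry-run: prints what it would do rather than touching the system.
wifi_switch() {
    net="$1"
    ifc="${2:-iwm0}"                        # assumed interface name
    conf="/etc/wifi/${net}.conf"            # assumed layout: one file per network
    echo "cp ${conf} /etc/hostname.${ifc}"  # would overwrite the live config
    echo "sh /etc/netstart ${ifc}"          # would restart the interface
}
wifi_switch home
```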
Thanks for taking the time to respond.
To be clear, I don’t believe you’re some sort of entitled prick – I don’t even know you. But, I do care that people aren’t berating developers with: “That’s great, but ____” comments. Let’s support each other, instead of feigning gratitude. It wasn’t clear if that’s what you were doing, hence, my request for clarification.
That being said, my comment was poorly worded, and implied a belief that you were on the wrong side of that. That was unfair, and I apologize.
I just think it’s somewhat disheartening that something this common (switching wireless networks) was not possible without manual action (rewriting a configuration file, or swapping configuration files, and restarting the network interface) every time you needed to switch or moved from home to the office.
Well, I’m just not going to touch this…. :eyeroll:
I apologize if my response was a little bit snide. I’ve been reading a lot of online commentary that chunks pretty much everything into whatever people perceive as wrong with society (most commonly: racism, sexism, or millennial entitlement - I know these are real and important issues, but not everything needs to be about them). I read your remark in that context and may have been a little harsh.
Regarding the last segment - how WiFi switching worked before - there may have been better ways to do this, but I’m not sure they were part of the default install. When I needed this functionality on OpenBSD, I basically wrote a bash script to do these steps for me on demand, and that worked alright for me. It may not have been the best way, so my view of the OpenBSD WiFi laptop landscape prior to the work of Peter may not be entirely appropriate or accurate.
I just think it’s somewhat disheartening that something this common (switching wireless networks) was not possible without manual action (rewriting a configuration file, or swapping configuration files, and restarting the network interface) every time you needed to switch or moved from home to the office.
I’m more blunt here: leaving that to be true in a world with ubiquitous WiFi was a bad idea if they wanted more adoption and donations from the market segment that wanted good, out-of-the-box support for WiFi. If they didn’t want that, then it might have been a good choice to ignore it for so long to focus on other things. It all depends on what their goals were. Since we don’t know them, I’ll at least say that it was bad, neutral, or good depending on certain conditions, like with anything else. The core userbase was probably OK with whatever they had, though.
First, both free speech and hacker culture say that person can gripe about what they want. They’re sharing ideas online that someone might agree with or act on. We have a diverse audience, too.
Second, the project itself has developers that write cocky stuff about their system, mock the other systems, talk that one time about how they expect more people to be paying them with donations, more recently talk about doing things like a hypervisor for adoption, and so on. Any group doing any of that deserves no exception to criticism or mockery by users or potential users. It’s why I slammed them hard in critiques, only toning it down for the nice ones I met. People liking critiques of other projects or wanting adoption/donations should definitely see others’ critiques of their projects, especially if it’s adoption/donation blockers. I mean, Macs had a seamless experience called Rendezvous back in 2002. If I’m reading the thread right, that was 16 years before OpenBSD shipped something similar that they wanted to make official. That OpenBSD members are always bragging when they’re ahead of other OSes on something is why I’m mentioning it. Equal treatment isn’t always nice.
“But, I do care that people aren’t berating developers with: “That’s great, but ____” comments. Let’s support each other, instead of feigning gratitude. It wasn’t clear if that’s what you were doing, hence, my request for clarification.”
I did want to point out that we’ve had a lot of OpenBSD-related submissions and comments with snarky remarks about what other developers or projects were doing. I at least don’t recall you trying to shut them down with counterpoints assessing their civility or positivity toward other projects (say NetBSD or Linux). Seems a little inconsistent. My memory is broken, though. So, are you going to be countering every negative remark OpenBSD developers or supporters make about projects with different goals, telling them to be positive and supportive only? A general rule of yours? Or are you giving them a pass for some reason but applying the rule to critics of OpenBSD choices?
I at least don’t recall you trying to shut them down with counterpoints assessing their civility or positivity toward other projects (say NetBSD or Linux). Seems a little inconsistent.
I’m not the Internet Comment Police, but you seem to think you are for some reason… Consider this particular instance “me griping about what I want.”
Or are you giving them a pass for some reason but applying the rule to critics of OpenBSD choices?
This wasn’t about OpenBSD at all. This started out as a request for clarification on the intent of an ambiguous comment that seemed entitled. There seems to be a lot of that happening today, and a lot of people defending it for whatever reason, which is even worse.
I’m not the Internet Comment Police
Your comments came off that way to me between the original and follow-ups. Far as not about OpenBSD, it’s in a thread on it with someone griping it lacked something they wanted. The OpenBSD members griping about third party projects not having something they wanted to see more of typically got no comment from you. The inconsistency remains. I’m writing it off as you’re just a fan of their style of thinking on code, quality, or something.
That’s certainly one possibility, but not how I took it initially, and why I asked for clarification. I’ve seen too many people over the years attempt to disguise their entitlement by saying “thanks.”
I’d have liked to see this comment worded as:
This is a great usability improvement. Thank you Peter Hessler :) It’s a shame that there isn’t a better way to bring these important usability features to OpenBSD faster. What is the best way to help make that happen? Donations to the OpenBSD Foundation? Sponsor the work directly? Something else?
Now, it’s also possible that the OP has ties to OpenBSD, and the comment was self-deprecating. But, one can’t infer that from the information we see without investigating who the OP is, and their affiliations…
I’m not sure you understand what infer means. One certainly can infer meaning from a comment, based on previous actions, comments, etc..
My point remains: It’d be nice if the OP would clarify what they mean. My interpretation of the OP’s comment is just as likely as your interpretation. My interpretation is damaging to the morale of existing volunteer contributors to FOSS, and gives potential contributors to FOSS reasons to not contribute all together. I don’t know about you, but I want to encourage people to contribute to FOSS, as doing so moves us closer to a free and open society. And, that alone, is the reason I’m even bothering to continue responding to this thread…
“It’s pretty sad that it took someone else so long to prioritize work I think is necessary.”
I think it’s pretty easy to take what was written and read it this way. But maybe my glass is half empty today.
One can infer based on a comment, but the inference will most likely be dimwitted bullshit.
Without the magic trifecta of body language, vocal intonation, and facial expression us monkeys are just shit at picking up on any extra meaning. So take the comment at face value.
It expresses gratitude, it focuses on a specific recipient, and it lauds the feature. After, it regrets that it couldn’t/didn’t happen earlier.
There’s no hidden meaning here, and if the commenter intended a hidden meaning he’s a dufus too, because there’s no unicode character for those. U+F089A6CDCE ZERO WIDTH SARCASTIC FUCK YOU MARK notwithstanding.
At some point we all need to stop insisting that we have near-telepathic powers, especially outside of meatspace.
So, what you’re saying is that I can write anything I want, and since you can’t see or hear other clues, there’s no way you can downvote (in good faith) this comment as trolling?
Not sure text works that way…
are we really giving free advertising to a company that offers large sums of money to anyone that introduces vulnerabilities to open source OSes?
Financial opportunities for FOSS hackers is how I read it. They could even hit a rival BSD or Linux to make money to pay for developer time, features and/or security reviews, on their own project.
I at least considered doing something like that at one point. Although I didn’t, I wouldn’t be surprised if someone rationalized it away: greater good of what money bought; fact that vulnerabilities were already there waiting to be found in product that will be hacked anyway; blame demand side where FOSS and commercial users willingly use buggy/risky software for perceived benefits instead of security-focused alternatives.
The amount of trust people need to put in others for a functioning FOSS world is very high. Groups that have a strong financial incentive to betray their surroundings have to be treated in an extremely paranoid way, and it’s far easier to introduce vulnerabilities in your own project than to find a vulnerability in another.
Suppose I find a vulnerability, I report it to security-officer@somebsd.org, and they haven’t fixed it yet. What am I supposed to understand? That they are behind on handling tickets (it happens), or that the security officer had 500,000 reasons to stay quiet?
What about the person who is creating the release - he can do the build with an extra change. Are all builds that aren’t reproducible suspicious now?
Suppose you do find a vulnerability in your own project. You can see who introduced this. Are you kicking them out of your project or assuming it’s a mistake?
Yes, I should review the work of others and I do, but there’s a limit for how much one person can check.
re vulnerability brokers in general
You’re giving me a lot of examples but missing or disagreeing with a fundamental point. I’m going straight for it instead. It’s been bugging me a lot in the past few years. It’s that most users and developers want their product to be vulnerable to achieve other goals. They willingly choose against security in a lot of ways. Users will go with a product that has lots of failures or hacks even when safer ones are available, because it has X, Y, or Z traits that they think are worth that. The companies usually go with profit and/or feature maximization even when they can afford to boost QA or simplify. Both commercial and FOSS developers often use unsafe languages (or runtimes), limited security tooling, or small amounts of code review. These behaviors damn-near guarantee a lot of this software is going to be hacked. They do it anyway.
So, the market is pro-getting hacked to the point they almost exclusively use things with lots of prior CVE’s. The network effects and oligopolistic tactics of companies mean there’s usually just a few things in each category. Black hats and exploit vendors are putting lots of time and money into the bug hunting that those suppliers aren’t doing and customers are voting for with their wallet. There’s going to be 0-days found in them. If there’s damage to be done, it will be done just as each party decided with their priorities. With that backdrop, will your bug in a Linux or BSD make a difference to whether folks buying from Zerodium will hack that platform? Probably not. Will it make a difference as to who gets paid and how much if you choose responsible disclosure over them? Probably so.
To drive that home, Microsoft, IBM, Google, and Apple all have both the brains and the money to make their TCBs about as bug-proof as they can get. If they care about security, then that’s a good thing to do. If their paying users care, then it’s even more of a good thing to do. They spend almost nothing on preventative security compared to what they make on their products and services. They don’t care. They’ll put the vulnerabilities in themselves just to squeeze more profit out of customers. Letting a broker have them before someone else isn’t making much difference. That’s at least the damage-assessment angle.
I think about it differently if the customer is paying a lot extra for what’s supposed to be good security. I think the supplier should be punished in courts or something for lying with the cost high enough that they start doing security or stop lying about what they’re not doing. Also, I think suppliers who have put good effort in shouldn’t be punished over a little slip or a new class of attack. I’d rather people finding those get paid so well by the companies and/or a government fund that they don’t go to vulnerability brokers most of the time. I’m just not having much sympathy for either users or suppliers griping about vulnerability brokers if they both favor products they know will get hacked because they accepted the tradeoffs. Whereas, projects that focus on balance of features and security with strong review often languish with low revenues or (for FOSS) hardly any financial contributions.
re suspicious builds
All software is insecure and suspicious until proven otherwise by strong review. That’s straight-up what security takes. Since you mentioned it, the guy (Paul Karger) that invented the compiler-compiler attack that Thompson demoed laid out some requirements for dealing with threats like that. Reproducible builds don’t begin to cover it, especially malicious developers. For the app, you need precise requirements, design, security policy, and proof that they’re all consistent with nothing bad added or good subtracted. Then, a secure repo like described here. Object code validation like in the DO-178C regulations if worried about compilers. Manual per app, or use a certifying compiler like CompCert after it is validated. Then, Karger et al recommended all of that be sent via a protected channel to customers so they can re-run the analyses/tests and build from source locally. All of that is what would be required to stop people like him from doing a subversion attack. Those were 1970’s-to-early-1990’s-era requirements they used in military and commercial products.
re someone introduces vulnerability
I’d correct the vulnerability. I’d ask them if it was a slip up or they’d like to learn more about preventing that. I’d give them some resources. I’d review their submissions more carefully throwing some extra tooling at them, too. Anyone that keeps screwing up will be out of that project. People who improve will get a bit less review. However, as you saw above, my standard for secure software would already include some strong review plus techniques for blocking root causes of code injection and (if needed) covert channels. Half-ass code passing such a standard should usually not lead to big problems. If it is, they or their backers are so clever you aren’t going to beat them by ejecting them anyway. Ejection is symbolic. Just fix the problem. Add preventative measures for it if possible.
Notice what I’m doing focuses on the project deliverables and their traits instead of the person. That’s intentional. If I have to trust them, my process is doing it wrong. At the least, I need more peer review and/or machine checks in it. As Roger Schell used to say, software built with the right methods is trustworthy enough that you can “buy it from your worst enemy.” He oversold it but it seems mostly true on low-to-mid-hanging fruit.
free advertising to a company
or a heads up for people running those systems that a vendor is actually restocking exploits targeting those platforms. Which implies that either the exploits they had for the platform were recently patched, or they were actually approached by a customer for targeted exploitation.
Funny to mention nested functions. GCC does allow them, and people passionately hate their existence because taking a nested function’s address requires making the stack executable (GCC builds a “trampoline” there), which is highly detrimental to security.
I know where the author is heading, but some browsers building with one compiler doesn’t strike me as a monoculture. Not too long ago “everyone” (with few exceptions) using or programming for Linux was using GCC and glibc. Now people use clang, gcc and probably others (icc, etc.). So it’s more like things became a lot less of a monoculture, and probably mostly thanks to the effort of BSD and MacOS users and developers making sure that software doesn’t only work with GCC.
Yes, it’s at least Mozilla and Chrome now using clang, but these are neither the only browsers, nor is it very uncommon for big projects to focus mostly on a defined set of tools.
It’s just a guess, but I also think that it will not suddenly become a huge undertaking to try to compile Firefox with another compiler. For the Rust parts maybe, but it’s already like that.
Not to say it’s a good thing, but there are of course up- and downsides. Especially for such a big project, and especially for a project already using said implementation, helping to develop it makes a lot more sense than in various other cases where you often only have one supported version of GCC. People using source-based approaches to install packages probably know this: compiling some specific version of some compiler, maybe taking hours, just to build a little piece of software that absolutely requires it.
Other than that, even if Mozilla now uses one compiler across different platforms, I hope they won’t start “ruling out” compilation with other compilers or rejecting a few lines of code meant to keep or establish compatibility. At least from the article it sounds like that could happen.
It makes me really appreciate the projects that require only a C89/C99-compliant compiler, like sqlite and lua. Admittedly their dependencies are also minimal, requiring only the C standard library iirc, but it sure is nice.
I know where the author is heading, but some browsers building with one compiler doesn’t strike me as a monoculture.
So, even for a “toy” project, we used to build against:
And we would’ve built on an Alpha if we had one lying around; it helps reveal the really thorny issues.
The thing is, not using multiple compilers (and architectures!) helps hide bugs.
Completely agree, but it’s still not unusual for projects to use one compiler for their official releases.
It’s just a guess, but I also think that it will not suddenly become a huge undertaking to try to compile Firefox with another compiler.
Suddenly? No, but I’m afraid that sooner or later having both clang and rust will become required for any platform that wants to ship Firefox. Which is a shame, since Mozilla’s mission is:
Our mission is to ensure the Internet is a global public resource, open and accessible to all.
Suddenly? No, but I’m afraid that sooner or later having both clang and rust will become required for any platform that wants to ship Firefox. Which is a shame, since Mozilla’s mission is:
That’s already the case. Stylo needs clang to build. You can build some parts with GCC though.
I agree. Sadly, the browser already has very big differences in platform support, even without that: for example WebRTC (the multimedia part), sandboxing capabilities, etc. But then, of course, supporting all of that on many platforms isn’t easy. It would be great if that mission led to a focus on supporting more than only Windows, Linux and MacOS.
Maybe someone has more insights, but something that makes me wonder a lot about how things work internally at Mozilla is that there are quite a few bug reports with ready-to-integrate patches remaining unanswered for often years, yet there are often changes that completely surprise users, some of them very far away from Mozilla’s stated mission.
While I get that not all the people working for Mozilla work in all areas, it seems a bit like there is a problem on the “accepting and integrating contributions” side of things. For a foundation asking for monetary contributions, it’s often a bad sign when contributions in the form of work don’t get taken care of. I hope Mozilla can fix this, so contributors don’t get too frustrated.
So it’s more like things became a lot less of a monoculture, and probably mostly thanks to the effort of BSD and MacOS users and developers making sure that software doesn’t only work with GCC.
Not to belittle the work of BSD people, but a lot of Clang portability work was done by Debian before the BSDs decided on Clang. https://clang.debian.net/ goes back to Clang 2.9.
FreeBSD initially imported Clang at revision r72732 into the tree June 2nd 2009:
https://svnweb.freebsd.org/base?view=revision&revision=193323 https://llvm.org/viewvc/llvm-project/?pathrev=72732
This was long before FreeBSD 9.0-RELEASE (January 2012).
The public documentation of the effort starts back in February of 2009:
https://wiki.freebsd.org/action/recall/BuildingFreeBSDWithClang?action=recall&rev=2
As of June 2009, Clang was at version 2.5. Version 2.6 didn’t happen until October 2009.
http://lists.llvm.org/pipermail/llvm-announce/2009-March/000031.html http://lists.llvm.org/pipermail/llvm-announce/2009-October/000033.html
So this means the devs were working with the devel/llvm-devel FreeBSD port, which would have been based on HEAD or slightly newer than Clang 2.4.
So I’m not sure that I believe the story that Debian was that invested in LLVM/Clang before FreeBSD was. There was no reason to be; the Linux kernel had so many GCC-isms to overcome, what would be the gain? (Other than somewhat faster compiling of packages, but poorer-performing binaries.)
edit: FreeBSD was trying to build all of the ports collection with Clang around May 2010. This still predates Debian by over a year
https://wiki.freebsd.org/action/recall/PortsAndClang?action=recall&rev=1
Okay, wrong perspective then. From my angle I saw how tons of projects got pull requests, patches, etc. so they’d work with clang.
Do you have any background on why the Debian clang community even popped up early? I’d have considered them to be philosophically closer to sticking with GCC (other than where it’s necessary).
Also saw that Wikipedia actually does have a nice timeline. However it doesn’t mention where Debian starts only where it “finishes”: https://en.wikipedia.org/wiki/Clang#Status_history
Do you have any background on why the Debian clang community even popped up early?
Debian is so large that it has a lot of (pardon me) crazy people. As evidence, I submit the existence of Debian GNU/kFreeBSD.
Unimportant passwords I generate for each place and keep in a single unencrypted file without any extra frills. The few important ones I remember. I use uuidgen to generate passwords, mostly because it’s short to type and already available on a NetBSD install.
My phone stays logged in to the few unimportant places that require a password, I don’t sync anything to it.
Capitalism is killing us in a very literal sense by destroying our habitat at an ever accelerating rate. The fundamental idea of needing growth and having to constantly invent new things to peddle leads to ever more disposable products that are replaced for the sake of being replaced. There’s been very little actual innovation happening in the phone space. The vendors are intentionally building devices on the planned-obsolescence model to force the upgrade cycle.
The cancer of consumerism affects pretty much every aspect of society, we’ve clear cut unique rain forests and destroyed millions of species we haven’t even documented so that we can make palm oil. A product that causes cancer, but that’s fractionally cheaper than other kinds of oil. We’ve created a garbage patch the size of a continent in the ocean. We’re poisoning the land with fracking. The list is endless, and it all comes down to the American ethos that making money is a sacred right that trumps all other concerns.
Capitalism is killing us in a very literal sense by destroying our habitat at an ever accelerating rate.
The cancer of consumerism affects pretty much every aspect of society, we’ve clear cut unique rain forests and destroyed millions of species we haven’t even documented so that we can make palm oil.
One can get into a big debate about this, but the concept of externalities has existed for a long time and specifically addresses these concerns. Products do not cost what they should when their less tangible environmental impact is taken into account. It’s somewhat up to the reader to decide if the inability of society to take those into account is capitalism’s fault, or just human nature, or something else. I live in a country that leans much more socialist than the US but is unequivocally a capitalist country, and they do a better job of managing these externalities. And China is not really capitalistic in the same way the US is, but is a pretty significant polluter.
Indeed, it’s not the fault of the economic system (if you think Capitalistic societies are wasteful, take a look at the waste and inefficiency of industry under the USSR). If externalities are correctly accounted for, or to be safe, even over-accounted for by means of taxation or otherwise, the market will work itself out. If the environmental cost means the new iPhone costs $2000 in real costs, Apple will work to reduce environmental cost in order to make an affordable phone again and everyone wins. And if they don’t, another company will figure it out instead and Apple will lose.
Currently, there is basically no accounting for these externalities, and in some cases (although afaik not related to smartphones), there are subsidies and price-ceiling regulations that actually decrease the cost of some externalities artificially and are worse for the environment than no government intervention at all.
The easy example of this is California State water subsidies for farmers. Artificially cheap water for farmers means they grow water-guzzling crops that are not otherwise efficient to grow in arid parts of the state, and cause environmental damage and water shortage to normal consumers. Can you imagine your local government asking you to take shorter showers and not wash your car, when farmers are paying 94% less than you to grow crops that could much more efficiently be grown in other parts of the country? That’s what happens in California.
Step 1 and 2 are to get rid of the current subsidies and regulations that aggravate externalities and impose new regulation/taxes that help account for externalities.
I have talked to a factory owner in china. He said China is more capitalist than the USA. He said China prioritizes capital over social concerns.
It’s just impressive that a capitalist would say that. If China were even remotely communist, don’t you find it interesting that most capitalists who made deals with China seem OK helping ‘the enemy’ become the second-largest economy in the world? I prefer to believe the simpler possibility that China is pretty darn capitalist itself.
I did not say China was not capitalist, I said it’s not in the same way as the US. There is a lot more state involvement in China.
Is your claim then that state involvement means you have more pollution? Maybe I’m confused by what you were trying to get at, sorry :-/
No, I was pointing out that different countries are doing capitalism differently and some of them are better at dealing with externalities and some of them are worse. With the overall point being that capitalism might be the wrong scapegoat.
I think the consumer could be blamed more than capitalism: the companies make what sells, and the consumers are individuals who buy products that hurt the environment. I think that it is changing, though; as people become more aware of these issues, they buy more environmentally friendly products.
You’re blaming the consumer? I’d really recommend watching Century of the Self. Advertising has a massive impact and the mass of humans are being fed this desire for all the things we consume.
I mean, this really delves into the deeper question of self-awareness, agency and free will, but I really don’t think most human beings are even remotely aware.
Engineers, people on Lobsters, et al. do really want standard devices. Fuck ARM. Give me a god damn mobile platform. Microsoft, for the love of god, just publish your unlock key for your dead phone line so we can have at least one line of devices with UEFI+ARM. Device tree can go die in a fire.
The Linux-style revolution of the 2000s (among developers) isn’t happening on mobile because every device is just too damn different. The average consumer couldn’t care less. Most people like to buy new things, and we’ve been indoctrinated to that point. Retailers and manufacturers have focus groups geared right at delivering the dopamine rush.
I personally hate buying things. When my mobile stopped charging yesterday and the back broke again, I thought about changing it out. I’ve replaced the back twice already and the camera has spots on the sensor under the lenses.
I was able to get it charging when I got home on a high amp USB port, so instead I just ordered yet another back and a new camera (I thought it’d be a bitch to get out, but a few YouTube videos show I was looking at the ribbon wrong and it’s actually pretty easy to replace).
I feel bad when I buy things, but it took a lot of work to get to that point. I’ve sold or given away most of my things multiple times to go backpacking, I run ad block .. I mean, if everyone did what I did, my life wouldn’t be sustainable. :-P
We are in a really solidly locked paradigm and I don’t think it can simply shift. If you believe the authors of The Dictator’s Handbook, we literally have to run out of resources before the general public will really push for dramatically different changes.
We really need more commitment to open standards mobile devices. The Ubuntu Edge could have been a game changer, or even the Fairphone. The Edge never got funded and the Fairphone can’t even keep parts sourced for their older models.
We need a combination of people’s attitudes + engineers working on OSS alternatives, and I don’t see either happening any time soon.
Edit: I forgot to mention, Postmarket OS is making huge strides into making older cellphones useful and I hope we see more of that too.
I second the recommendation for The Century of the Self. That movie offers a life-changing change of perspective. The other documentaries by Curtis are also great and well worth the time.
Century of the Self was a real eye opener. Curtis’s latest documentary, HyperNormalisation, also offers very interesting perspectives.
Capitalism, by its very nature, drives companies not to be satisfied with what already sells. Companies are constantly looking to create new markets and products, and that includes creating demand.
IOW, consumers aren’t fixed actors who buy what they need; they are acted upon to create an ever increasing number of needs.
There are too many examples of this dynamic to bother listing.
It’s also very difficult for the consumer to tell exactly how destructive a particular product is. The only price we pay is the sticker price. Unless you really want to put a lot of time into research it is hard to tell which product is better for the environment.
It’s ridiculous to expect everyone to be an expert on every supply chain in the world, starting right from the mines and energy production all the way to the store shelf. That’s effectively what you are requiring.
I’m saying this as a very conscious consumer. I care about my carbon footprint, I don’t buy palm oil, I limit plastic consumption, I limit my consumption overall, but it’s all a drop in the ocean and changes nothing. There are still hundreds of compounds in the everyday items I buy whose provenance I know nothing about and which could be even more destructive. Not to mention that manufacturers really don’t want you to know, it’s simply not in their interest.
You’re creating an impossible task and setting people up to fail. It is not the answer.
“It’s ridiculous to expect everyone to be an expert on every supply chain in the world, starting right from the mines and energy production all the way to the store shelf. That’s effectively what you are requiring.”
I don’t think it is what they’re requiring and it’s much easier than you describe. Here’s a few options:
1. People who are really concerned about this at a level demanding much sacrifice to avoid damaging the environment should automatically avoid buying anything they can’t provably trust by default. The Amish are a decent example: they avoid a lot of modern stuff due to commitment to beliefs.
2. There are groups that try to keep track of corporate abuse, environmental actions, and so on at various companies. They maintain good and bad lists. More people who supposedly care can both use them and join in maintaining that data. The work would be split among many people to lessen each one’s burden. Again, avoid things by default until they get on the good lists; ditch them if they land on the bad ones.
3. Collectively push their politicians for laws giving proper labels, auditing, etc. that help with No. 2. Also, push for externalities to be charged back to the companies somehow to incentivize less-damaging behavior.
4. Start their own businesses that practice what they preach. Build the principles into their charters, contracts, and so on. Niche businesses doing a better job create more options on the good lists in No. 2. There are entrepreneurs doing this.
So, not all-knowing consumers as you indicated. Quite a few strategies that are less impossible.
@ac specifically suggested consumer choice as the solution to environmental issues, and that’s what I disagreed with.
Your point number 3 is quite different from the other three, and it’s what I would suggest as a far more effective strategy than consumer choice (along with putting pressure on various corporations). As an aside, I still wouldn’t call it easy - it’s always a hard slog.
Your points 1, 2 and 4 still rely on consumer choice, and effectively boil down to: either remove yourself from modern civilisation, or understand every supply chain in the world. I think it’s obvious that the first choice is neither desirable nor “much easier” for the vast majority of people (and I don’t think it’s the best possible solution). The second is impossible, as I said before.
“consumer choice as the solution to environmental issues”
edit to add: consumer choice eliminated entire industries’ worth of companies because consumers wanted something else. It’s only worsened environmental issues. That’s probably not an argument against consumer choice so much as evidence of their willingness to sacrifice the environment overall to get the immediate things they want.
“either remove yourself from modern civilisation, or understand every supply chain in the world”
This is another false dichotomy. I know lots of people who are highly connected with other people but don’t own lots of tech or follow lots of fads. In many cases, they seem to know enough about those things to have good conversations with people. They follow what’s going on, or are just good listeners. Buying tons of gadgets or harmful things isn’t necessary for participation. You can get by with a lot less than the average middle- or upper-class person.
What you said is better understood as a spectrum to be in like most things. Lots of positions in it.
I think we might actually be mostly in agreement, but we’re talking past each other a bit.
That’s probably not an argument against consumer choice so much as evidence of their willingness to sacrifice the environment overall to get the immediate things they want.
I agree with this. But even when consumer choice is applied with environmental goals in mind, I believe its effect is very limited, simply because most people won’t participate.
This is another false dichotomy.
Yeah, but it was derived from your points :) I was just trying to hammer the point that consumer choice isn’t an effective solution.
You can get by with a lot less than the average middle- or upper-class person.
Totally. I’ve been doing that for a long time: avoiding gadgets and keeping the stuff I need (eg a laptop) as long as I can.
“But even when consumer choice is applied with environmental goals in mind, I believe its effect is very limited, simply because most people won’t participate.”
Oh OK. Yeah, I share that depressing view. Evidence is overwhelmingly in our favor on it. It’s even made me wonder if I should even be doing the things I’m doing if so few are doing their part.
The blame rests on the producers, not on the consumers.
Consumers are only able to select off of the menu of available products, so to speak. Most of the choices everyday consumers face are dictated by their employers and whatever is currently available to make it through their day.
No person can reasonably trace the entire supply chain for every item they purchase; it would likely be impossible even with generous time windows. Nor would I want every single consumer to spend their non-working time tracing these chains.
Additionally, shifting this blame to the consumer creates conditions where producers can charge a premium on ‘green’ and ‘sustainable’ products. Only consumers with the means to consume ‘ethically’ are able to do so, and thus shame people with less money for being the problem.
The blame falls squarely on the entities producing these products and the states tasked with regulating production. There will be no market-based solution to get us out of the climate catastrophe, and we certainly can’t vote for a green future with our dollars.
Consumers are only able to select off of the menu of available products, so to speak. Most of the choices everyday consumers face are dictated by their employers and whatever is currently available to make it through their day.
That’s not true even though it seems it is. The consumers’ past behavior and present statements play a major role in what suppliers will produce. Most of what you see today didn’t happen overnight. There were battles fought where quite a few companies were out there doing more ethical things on supply side. They ended up bankrupt or with less marketshare while the unethical companies got way ahead through better marketing of their products. With enough wealth accumulated, they continued buying the brands of the better companies remaking them into scumbag companies, too, in many cases.
For instance, I strongly advise against companies developing privacy- or security-oriented versions of software products that actually mitigate risks. They’ll go bankrupt, like such companies usually did. The companies that actually make lots of money apply the buzzwords customers are looking for, integrate into their existing tooling (often insecure), have features customers demand that are too complex to secure, and in some cases are so cheap the QA couldn’t have possibly been done right. That is rarely going to be private or secure for real against smart black hats.
So, I instead tell people to bake cost-effective security enhancements and good service into an otherwise good product advertised for mostly non-security benefits. Why? Because that’s what the demand side responds to almost every time. So, the supply side must provide it if it hopes to make waves. Turns out, there’s also an upper limit to what one can achieve in that way, too. The crowds’ demands will keep creating obstacles to reliability, security, workers’ quality of life, supplier choice, environment… you name it. They mostly don’t care, either; suppliers being honest about costs will be abandoned for those delivering what the demand side wants. In the face of that, most suppliers will focus on what they think is in demand across as many proven dimensions as possible.
Demand and supply side are both guilty here in a way that’s closely intertwined. It’s mostly demand side, though, as quite a few suppliers in each segment will give them whatever they’re willing to pay for at a profit.
I agree with a lot of your above point, but want to unpack some of this.
Software security is a strange case to turn to since it has less direct implications on the climate crisis (sure anything that relies on a datacenter is probably using too much energy) compared to the production of disposable, resource-intensive goods.
Demand and supply side are both guilty here in a way that’s closely intertwined. It’s mostly demand side, though, as quite a few suppliers in each segment will give them whatever they’re willing to pay for at a profit.
I parse this paragraph to read: we should blame consumers for buying what’s available and affordable, because suppliers are incapable of acting ethically (due to competition).
So should we blame the end consumer for buying a phone every two years and not the phone manufacturers/retailers for creating rackets of planned obsolescence?
And additionally, most suppliers are consumers of something else upstream. Virtually everything that reaches an end consumer has been consumed and processed several times over by suppliers above. The suppliers are guilty on both counts by our separate reasoning.
Blaming individuals for structural problems simply lets suppliers shirk any responsibility they should have to society. After all, suppliers have no responsibility other than to create profits. Suppliers’ bad behavior must be curtailed either through regulation, public education campaigns to affect consumption habits, or organizing within workplaces.
(As an aside, I appreciate your response and it’s both useful and stimulating to hear your points)
“I parse this paragraph to read: we should blame consumers for buying what’s available and affordable, because suppliers are incapable of acting ethically (due to competition).”
You added two words, available and affordable, to what I said. I left affordable off because many products that are more ethical are still affordable. Most don’t buy them anyway. I left availability off since there’s products appearing all the time in this space that mostly get ignored. The demand side not buying enough of what was and currently is available in a segment sends a message to suppliers about what they should produce. Especially if it’s consistent. Under vote with your wallet, we should give consumers their share of credit or blame for anything their purchasing decisions as a whole are supporting or destroying. That most won’t deliberately try to obtain an ethical supplier of… anything… supports my notion demand side has a lot to do with unethical activities of financially-successful suppliers.
For a quick example, there are often coops and farmers markets in lots of rural areas or suburban towns. There’s usually a segment of people who buy from them to support their style of operation and/or jobs. There’s usually enough to keep them in business. You might count Costco in that, too, where a fixed-cost membership fee gets the customers a pile of stuff at a promised low markup and great service. There are people who use credit unions, especially in their industry, instead of banks. There are people who try to buy from nonprofits, public benefit companies, companies with good track records, and so on. There’s both a demand side (tiny) and suppliers responding to it that show this could become a widespread thing.
Most consumers on demand side don’t do that stuff, though. They buy a mix of necessities and arbitrary stuff from whatever supplier is lowest cost, cheapest, most variety, promoting certain image, or other arbitrary reasons. They do this so much that most suppliers, esp market leaders, optimize their marketing for that stuff. They also make more money off these people that let them put lots of ethical, niche players out of business over time. So, yeah, I’d say consumer demand being apathetic to ethics or long-term thinking is a huge part of the problem given it puts tens of billions into hands of unethical parties. Then, some of that money goes into politicians’ campaign funds so they make things even more difficult for those companies’ opponents.
“Blaming individuals for structural problems simply lets suppliers shirk any responsibility they should have to society.”
Or the individuals can buy from different suppliers highlighting why they’re doing it. Other individuals can start companies responding to that massive stated demand. The existing vendors will pivot their operations. Things start shifting. It won’t happen without people willing to buy it. Alternatively, using regulation as you mentioned. I don’t know how well public education can help vs all the money put into advertising. The latter seems more powerful.
“(As an aside, I appreciate your response and it’s both useful and stimulating to hear your points)”
Thanks. Appreciate you challenging it so I think harder on and improve it. :)
Only consumers with the means to consume ‘ethically’ are able to do so, and thus shame people with less money for being the problem.
This is ignoring reality: removing cheaper options does not make the other options cheaper to manufacture. It is not shaming people.
You are also ignoring the fact that in a free country the consumers and producers are the same people. A dissatisfied consumer can become a producer of a new alternative if they see it as possible.
Exactly. The consumers could be doing more on issues like this. They’re complicit or actively contribute to the problems.
For example, I use old devices for as long as I can on purpose to reduce waste. I try to also buy things that last as long as possible. That’s a bit harder in some markets than others. For appliances, I just buy things that are 20 years old. They do the job and usually last 10 more years since planned obsolescence had fewer tricks at the time. ;) My smartphone is finally getting unreliable on essential functions, though. Bout to replace it. I’ll donate, reuse, or recycle it when I get new one.
On the PC side, I’m using a backup box with a Celeron, whose age I can’t recall, after my Ubuntu Dell with a Core 2 Duo died. It was eight years old. I’m attempting to revive it soon in case it’s just the HD or something simple. It’s acting weird, though, so it might just become a box for VM experiments, fuzzing, opening highly-untrustworthy URLs or files, etc. :)
Capitalism is killing us in a very literal sense by destroying our habitat at an ever accelerating rate
Which alternatives would make people happier to consume less – drive older cars, wear rattier clothing, and demand fewer exotic vacations? Because, really, that’s the solution to excessive use of the environment: Be happier with less.
Unfortunately, greed has been a constant of human nature far too long for capitalism to take the blame there.
Which alternatives would make people happier to consume less – drive older cars, wear rattier clothing, and demand fewer exotic vacations?
Why do people want new cars, the latest fashions, and exotic vacations in the first place? If it’s all about status and bragging rights, then it’s going to take a massive cultural shift that goes against at least two generations’ worth of cultural programming by advertisers on behalf of the auto, fashion and travel industries.
I don’t think consumerism kicked into high gear until after the end of World War II when modern advertising and television became ubiquitous, so perhaps the answer is to paraphrase Shakespeare:
The first thing we do, let’s kill all the advertisers.
OK, maybe killing them (or encouraging them to off themselves in the tradition of Bill Hicks) is overkill. Regardless, we should consider the possibility that advertising is nothing but private sector psyops on behalf of corporations, and should not be protected as “free speech”.
If there was an advertising exception for free speech, people would use it as an unprincipled excuse to ban whatever speech they didn’t like, by convincing the authorities to classify it as a type of advertising. After all, most unpopular speech is trying to convince someone of something, right? That’s what advertising fundamentally is, right?
Remember that the thing that Oliver Wendell Holmes called “falsely shouting fire in a crowded theater” wasn’t actually shouting “fire” in an actual crowded theater - it was a metaphor he used to describe protesting the military draft.
I agree: there shouldn’t be an advertising exception on free speech. However, the First Amendment should only apply to homo sapiens or to organisms we might eventually recognize as sufficiently human to possess human rights. Corporations are not people, and should not have rights.
They might have certain powers defined by law, but “freedom of speech” shouldn’t be one of them.
It would be a start if we designed cities with walking and public transportation in mind, not cars.
My neighborhood is old and walkable. I do shopping on foot (I have a bicycle but don’t bother with it). For school/work, I take a single bus and walk a few minutes. Getting a car would be a hassle, I don’t have a place to park it, and I’d have to pay large annual fees for rare use.
Newer neighborhoods appear to be planned with the idea that you’ll need a car for every single task. “Residential part” with no shops at all, but lots of room for parking. A large grocery store with a parking lot. Even train stations with a large parking lot, but no safe path for pedestrians/cyclists from the nearby neighborhoods.
The new features on phones are so fucking stupid as well. People are buying new phones to get animated emojis and more round corners. It’s made much worse with phone OEMs actively making old phones work worse by slowing them down.
There has been no evidence to my knowledge that anyone is slowing old phones down. This continues to be an unfounded rumor.
There’s also several Lobsters that have said Android smartphones get slower over time at a much greater rate than iPhones. I know my Galaxy S4 did. This might be hardware, software bloat, or whatever. There’s phones it’s happening on and those it isn’t in a market where users definitely don’t want their phones slowing down. So, my theory on Android side is it’s a problem they’re ignoring on purpose or even contributing to due to incentives. They could be investing money into making the platform much more efficient across devices, removing bloat, etc. They ain’t gonna do that.
Android smartphones get slower over time at a much greater rate than iPhones.
In my experience, this tends to be 3rd party apps that start at boot and run all the time. Factory reset fixes it. Android system updates also make phones faster most of the time.
I’m still using a Nexus 6 I got ~2.5 years ago. I keep my phone pretty light. No Facebook or games. Yet, my phone was getting very laggy. I wiped the cache (Settings -> Storage -> Cached data) and that seemed to help a bit, but overall, my phone was still laggy. It seemed to get really bad in my text messaging app (I use whatever the stock version is). I realized that I had amassed a lot of text messages over the years, which includes quite a lot of gifs. I decided to wipe my messages. I did that by installing “SMS Backup & Restore” and telling it to delete all of my text messages, since apparently the stock app doesn’t have a way to do this in bulk. It took at least an hour for the deletion to complete. Once it was done, my phone feels almost as good as new, which makes me really happy, because I really was not looking forward to shelling out $1K for a Pixel.
My working theory is that there is some sub-optimal strategy in how text messages are cached. Since I switch in and out of the text messaging app very frequently, it wouldn’t surprise me if I was somehow frequently evicting things from memory and causing disk reads, which would explain why the lag impacted my entire phone and not just text messages. But, this is just speculation. And a factory reset would have accomplished the same thing (I think?), so it’s consistent with the “factory reset fixes things” theory too.
My wife is still on a Nexus 5 (great phone) and she has a similar usage pattern as me. Our plan is to delete her text messages too and see if that helps things.
Anyway… I realize this basically boils down to folk remedies at this point, but I’m just going through this process now, so it’s top of mind and figured I’d share.
I’ll be damned. I backed up and wiped the SMS, nothing else. The phone seems like it’s moving a lot snappier. Literally a second or two of delay off some things. Some things are still slow, but maybe the app just is. YouTube always has a long loading time. The individual videos load faster now, though.
Folk remedy is working. Appreciate the tip! :)
w00t! Also, it’s worth mentioning that I was experiencing much worse delay than a second or two. Google Nav would sometimes lock up for many seconds.
Maps seems OK. I probably should’ve been straight-up timing this stuff for better quality of evidence. Regardless, it’s moving a lot faster. Yours did, too. Two strong anecdotes so far, on top of factory resets. For all we know, even the factory-reset speed gains might have come mostly from the SMS clearing that the reset performed. Or other stuff.
So, I think I’m going to use it as is for a week or two to assess this change plus get a feel for a new baseline. Then, I’ll factory reset it, reinstall some apps from scratch, and see if that makes a difference.
I’ll try to remember to. I’m just still stunned it wasn’t 20 Chrome tabs or all the PDF’s I download during the day. Instead, text messages I wasn’t even using. Of all things that could drag a whole platform down…
I thought the contacts were but messages were on phone. I’m not sure. The contacts being on there could have an effect. I’d have hoped they cached a copy of SIM contents onto in-phone memory. Yeah, SIM access could be involved.
Now, that’s fascinating. I don’t go in and out of text a lot but do have a lot of text messages. Many have GIF’s. There’s also at least two other apps that accumulate a lot of stuff. I might try wiping them. Btw, folk remedies feel kind of justified when we’re facing a complex, black-box system with nothing else to go on. ;)
Official from apple: https://www.apple.com/au/iphone-battery-and-performance/
They slow phones with older batteries but don’t show the user any indication that it can be fixed very cheaply by replacing the battery (Until after the recent outrage) and many of them will just buy a new phone and see it’s much faster.
Wow, so much to unpack here.
You said they slow old phones down. That is patently false. New versions of iOS are not made to run slowly on older model hardware.
Apple did not slow phones down with old batteries. They throttled the CPU of phones with failing batteries (even brand new ones!) to prevent the phone from crashing due to voltage drops. This ensured the phone was still functional even if you needed your phone in an emergency. Yes it was stupid there was no notification to the user. This is no longer relevant because they now provide notifications to the user. This behavior existed for a short period of time in the lifespan of the iPhone: less than 90 days between introduction of release with throttling and release with controls to disable and notifications to users.
Please take your fake outrage somewhere else.
Apple did not slow phones down with old batteries. They throttled the CPU of phones with failing batteries (even brand new ones!) to prevent the phone from crashing due to voltage drops.
In theory this affects new phones as well, but we know that as batteries grow older, they break down, hold less charge, and have a harder time achieving their design voltage. So in practice, this safety mechanism for the most part slows down older phones.
You claim @user545 is unfairly representing the facts by making Apple look like this is some evil ploy to increase turnover for their mobile phones.
However, given the fact that in reality this does mostly make older phones seem slower, and the fact that they put this in without ever telling anyone outside Apple and not allowing the user to check their battery health and how it affected the performance of their device, I feel like it requires a lot more effort not to make it look like an intentional decision on their part.
Sure, but if you have an old phone with an OK battery, then their code did not slow it down. So I think it is still more correct to say they slowed down phones with bad batteries than old ones, even if most phones with bad batteries were also old, which really depended on how the phone was used.
The difference is not just academic. For example I have “inherited” iPhone6 from my wife that still has a good battery after more than 2 years and performs fine.
the fact that they put this in without ever telling anyone outside Apple
It was in the release notes of that iOS release…
edit: additionally it was known during the beta period in December. This wasn’t a surprise.
Again, untrue. The 11.2 release notes make no mention of batteries, throttling, or power management. (This was the release where Apple extended the throttling to the 7 series of phones.) The 10.2.1 release notes, in their entirety, read thus:
iOS 10.2.1 includes bug fixes and improves the security of your iPhone or iPad. It also improves power management during peak workloads to avoid unexpected shutdowns on iPhone.
That does not tell a reader that long-term CPU throttling is taking place, that it’s restricted to older-model iPhones only, that it’s based on battery health and fixable with a new battery (not a new phone), etc. It provides no useful or actionable information whatsoever. It’s opaque and frankly deceptive.
You’re right, because I was mistaken and the change was added in iOS 10.2.1, 1/23/2017
https://support.apple.com/kb/DL1893?locale=en_US
It also improves power management during peak workloads to avoid unexpected shutdowns on iPhone.
A user on the day of release:
Hopefully it fixes the random battery shutoff bug.
additionally in a press release:
In February 2017, we updated our iOS 10.2.1 Read Me notes to let customers know the update ‘improves power management during peak workloads to avoid unexpected shutdowns.’ We also provided a statement to several press outlets and said that we were seeing positive results from the software update.
Please stop trolling. It was absent from the release notes for a short period of time. It was fixing a known issue affecting users. Go away.
Did you even read the comment you are responding to? I quoted the 10.2.1 release notes in full–the updated version–and linked them too. Your response is abusive and in bad faith, your accusations of trolling specious.
[Comment removed by moderator pushcx: We've never had cause to write a rule about doxxing, but pulling someone's personal info into a discussion like this to discredit them is inappropriate.]
I don’t hate Apple. I’m not going to sell my phone because I like it. The battery is even still in good shape! I wish they’d been a little more honest about their CPU throttling. I don’t know why this provokes such rage from you. Did you go through all my old comments to try to figure out what kind of phone I have? Little creepy.
I’m not angry about anything here. It’s just silly that such false claims continue to be thrown around about old phones intentionally being throttled to sell new phones. Apple hasn’t done that. Maybe someone else has.
edit: it took about 30 seconds to follow your profile link to your website -> to Flickr -> to snag image metadata and see what phone you own.
They throttled the CPU of phones with failing batteries (even brand new ones!)
This is untrue. They specifically singled out only older-model phones for this treatment. From the Apple link:
About a year ago in iOS 10.2.1, we delivered a software update that improves power management during peak workloads to avoid unexpected shutdowns on iPhone 6, iPhone 6 Plus, iPhone 6s, iPhone 6s Plus and iPhone SE. [snip] We recently extended the same support to iPhone 7 and iPhone 7 Plus in iOS 11.2.
In other words, if you buy an iPhone 8 or X, no matter what condition the battery is in, Apple will not throttle the CPU. (In harsh environments–for example, with lots of exposure to cold temperatures–it’s very plausible that an 8 or X purchased new might by now have a degraded battery.)
You are making a claim without any data to back it up.
Can you prove that the batteries in the new iPhones suffer voltage drops when they are degraded? If they use a different design with more/smaller cells then AIUI they would be significantly less likely to have voltage drops when overall capacity is degraded.
But no, instead you continue to troll because you have a grudge against Apple. Take your crap elsewhere. It’s not welcome here.
You’re moving the goalposts. You claimed Apple is throttling the CPU of brand new phones. You were shown this to be incorrect, and have not brought any new info to the table. Your claim that the newer phones might be designed so as to not require throttling is irrelevant.
Please don’t accuse (multiple) people of trolling. It reflects poorly on yourself. All are welcome here.
You can buy a brand new phone directly from Apple (iPhone 6S) with a faulty battery and experience the throttling. I had this happen.
Google services update in the background even when other updates are disabled. Even if services updates are not intended to slow down the phone, they still do.
The new features on phones are so fucking stupid as well.
I think the consumer who pays for it is stupid.
It’s both. The user wants something new every year and OEMs don’t have anything worthwhile each year so they change things for the sake of change like adding rounded corners on the LCD or cutting a chunk out of the top. It makes it seem like something is new and worth buying when not much worthwhile has actually changed.
I think companies would always take the path of least resistance that works. If consumers didn’t fall for such stupid tricks the companies that did them would die off.
Yep. I guess humanity’s biggest achievement will be to terraform itself out of existence.
This planet neither bargains with nor cares about this civilization’s decision-making processes. It will keep flying around the sun for a while, with or without humans on it.
I’m amazed by the optimism people display in response to pointing out that the current trajectory of climate change makes it highly unlikely that our great-grandchildren will ever be born.
The list is endless, and it all comes down to the American ethos that making money is a sacred right that trumps all other concerns.
s/American/human
You can’t fix a problem if you misunderstand what causes it.
Ideology matters, and America has been aggressively promoting toxic capitalist ideology for many decades around the world. Humans aren’t perfect, but we can recognize our problems and create systems around us to help mitigate them. Capitalism is equivalent of giving a flamethrower to a pyromaniac.
If you want to hash out how “toxic capitalism” is ruining everything, that’s fine–I’m just observing that many other countries (China, Germany, India, Mozambique, Russia, etc.) have done things that, to me at least, dispel the notion of toxic capitalism as purely being American in origin.
And to avoid accusations of whataboutism, the reason I point those other countries out is that if a solution is put forth assuming that America is the problem–and hence itself probably grounded in approaches unique to an American context–it probably will not be workable in other places.
Nobody is saying that capitalism alone is the problem or that it’s unique to America. I was saying that capitalism is clearly responsible for a lot of harm, and that America promotes it aggressively.
Don’t backpedal. You wrote:
The list is endless, and it all comes down to the American ethos that making money is a sacred right that trumps all other concerns.
As to whether or not capitalism is clearly responsible for a lot of harm, it’s worth considering what the alternatives have accomplished.
Nobody is backpedaling here, and pointing at other failed systems saying they did terrible things too isn’t much of an argument.
When people tell me to not use the AGPL because Google doesn’t like it, I think “working as intended”.
I think the EUPL is probably the nicest license that Google doesn’t like and compatible with a lot of other licenses.
Seems like EUPL allows conversion of code into MPL and LGPL, which makes the anti-google point moot.
Google has banned the EUPL on their premises so it doesn’t make it moot at all…
It seems that the EUPL (v1.1) ban has existed for a longer amount of time ( https://web.archive.org/web/20170329041441/https://opensource.google.com/docs/thirdparty/licenses/ ), at least since before last May, when EUPL v1.2 was released. And v1.2 seems to allow conversion to permissive licences that Google doesn’t mind.
The EUPL still only allows conversion when being part of a larger project, not simply copy-pasting into another license, IIRC, so I’m not sure how google thinks about that. But atm EUPL is banned at Google.
https://opensource.google.com/docs/thirdparty/licenses/ (I think that’s been posted here some time ago?)
Google is allergic to WTFPL and Beerware, extremely cautious about “just public domain”, and accepts Unlicense and CC0.
Thanks for the link. I generally never cared about “fun” licenses, and chose to use either public domain or ISC, but the fact that Google is allergic to WTFPL makes a pretty strong case for it, in my book. I might choose to use it in the future.
I knew Google didn’t like AGPL, but I don’t like AGPL either, so that was not an option for me.
The WTFPL can expose the developer to liability. See Dan Berlin’s comment about the WTFPL on HN. He is a lawyer.
Interestingly, fish seems to be unaffected by this exact attack because it doesn’t interpret newlines as “end of command”.
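For context, here is a minimal sketch of the kind of clipboard payload this class of attack relies on (the commands and the “malicious” second line are made up for illustration). The whole trick is an embedded newline: shells like bash and zsh without bracketed paste execute the first line the instant the text is pasted, while fish simply inserts the paste into its editable command line and waits for Enter.

```shell
# Hypothetical pastejacking payload (illustrative only): a web page shows
# the victim a harmless-looking first line; the newline and second line
# are hidden with CSS before the text reaches the clipboard.
payload='echo hello
echo this-would-be-the-malicious-part'

# The embedded newline is the whole trick: on paste, bash/zsh execute
# line one immediately, while fish buffers the entire paste for review.
lines=$(printf '%s' "$payload" | wc -l | tr -d ' ')   # counts embedded newlines
echo "payload contains $lines embedded newline(s)"
```

Modern bash and zsh can mitigate this with bracketed-paste mode, which is presumably why fish, enabling equivalent behavior by default, is unaffected.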
Security questions are silly and should definitely not be relied upon.
My only malicious hacking event was accessing someone’s account after receiving their username and password. When they changed their password, I still wanted to access their things. I got access to their email by answering a secret question to access their email, which was something like “what’s your favourite sports team”. I’m from the other side of the world, but Wikipedia had a list of teams for their country and I just tried a few of them until one worked.
The Montréal metro system uses RFID cards to pay, but as I understand it, without centrally tracking the buyer. Instead, the card itself records the number and type of fares bought.
This has an inconvenient downside: the only way to recharge the card is at kiosks in the metro. If you want to do it online, you have to buy a USB card reader that the Montréal transport society will sell you, so you can recharge your RFID card online.
I like this, but a lot of people are unhappy about the inconvenience of not being able to recharge the card online. So I think we’re going to be moving into a system where the cards are centrally managed, along with everyone’s purchase history of them.
It’s always so convenient to allow surveillance on ourselves.
The OPUS cards themselves are anonymous, but purchase info could be tracked if you don’t pay with cash. Cards can also be registered with STM at a service centre. This kills the anonymity factor but is useful if you lose it. I’ve gotten a free replacement this way without having to pay for a full fare again.
I found the USB card reader setup the STM came up with to be kinda lame overall. The last time I tried it (admittedly a couple of years ago), it required some deprecated NPAPI plugins that were no longer supported by their vendor and I had to whitelist them in my web browser, following instructions that would probably scare an average end user. The browser plugin mechanism they used has since been removed by the major browsers. The plugin also only worked on Windows and Mac when I tried it. The next time I tried to set it up, there were a lot of dead links on their website.
However, I get around the renewal hassle by signing up online for a yearly subscription. In this case, they send you a new OPUS smart card, which comes with some benefits (like only paying 11 of 12 months each year and getting a decent discount off of the Bixi bike sharing and/or Communauto car sharing programs, one free guest on evenings and weekends, and free rides on RTC in Quebec City after your first year).
This card is auto-renewed and you can access your account online, so you avoid waiting at the kiosk, and it saves you from having to buy the $16.66 USB card reader. Of course, it only works if you’re a frequent enough STM user to justify a yearly subscription. The yearly subscription cards are also automatically registered with the STM. If you want to take advantage of some of the benefits (free rides on RTC), you have to have your picture taken and stored on the back of the card. Before I did this, I would lend my yearly subscription out to my friends to use when I was travelling out of town but now I can’t anymore.
Since OPUS cards have been hacked several times, an artificial life span of 3 years is imposed so they can push out new revisions using different encryption methods.
I bought the USB card reader, anyway, because I like to collect gadgets. It was cheap and I wanted to mess around with OpenSC in Linux. It’s a Watchdata W1981-Plus and I believe it is the same device used by STIB/MIVB (Brussels) and RATP (Paris).
I had originally thought OPUS was a province-wide smartcard system but STO in Gatineau uses a different card, MULTI. To make things even worse, Ottawa’s OC Transpo, which overlaps some services with STO, uses yet another competing card: Presto, which is also used in the Greater Toronto Area. I’m really disappointed that a country with a population the size of Canada’s can’t get its smart card act together to standardize on one system. In the Netherlands, you use one card for all transit systems and it seemed to work beautifully.
Did you know “carte OPUS” is a pun on “carte à puce”?
(Not really, but it’s too good of a factoid to not tell it.)
we have a similar system locally which allows recharging on the buses themselves (smaller buses let the driver access it, bigger buses have a vending machine) and in train stations, so you don’t have to go out of your way. It might be more convenient than online payments.
The problem with the Montréal system is that for whatever reason the fares are tied to calendar dates. If you want to buy a monthly pass, it can only start at the first day of the calendar month and ends at the last one. Weekly passes can only be bought from Monday to Sunday. This creates long lines at the start of the month, hence the desire to buy online.
It also makes it easier for people to hack their own cards in their possession to give themselves free rides. There’s possibly a cryptocurrency-like solution to this problem, that would make it possible for the transit system to centrally store the amount of money a given patron has loaded onto their card and used for farepaying, without tracking exactly where they go within the system, but I don’t think it’s a straightforward problem at all. Unfortunately, centralized tracking of where and when people get on and off the system is actually a very natural fit to the problem at hand of letting people pay for use of a public transit system.
Besides, public transit cars generally have security cameras, right? You can get tracked that way too.
It also makes it easier for people to hack their own cards in their possession to give themselves free rides.
At least for the Montréal situation, it’s probably far easier to just jump the turnstiles than to attempt any sophisticated trickery. I see people jumping turnstiles frequently enough.
I think if you have a system that most people will not abuse, it can all work out. No need to make it absolutely draconian and tamper-proof unless it’s an actual problem.
That was the main risk critics raised about the Mondex card, from what I read. Too bad, since it was one of the only high-assurance security developments in the commercial sector.
Japan has a similar cash-card system (Suica, among others) that you can buy using cash and recharge online (although I think online recharging needs it to be tied to a bank account/mobile account, or to own a special, if common, card-reader/writer for your computer). I don’t see why the Montréal system wouldn’t be able to do the same, other than perhaps the slow-moving nature of the STM and the relatively small (compared to Japan) usage.
It is a pretty heavily used cash card though, so perhaps all the vendors (other than just transit) accepting it helps things like that along. Probably not as decentralized as I think it is, either, now that I’ve spent some time puzzling it out.
I’d love to know how many Seamonkey users there were, in the shallow hope of beating the Opera users.
Is @liwakura == nero?
Seamonkey 4, Opera 7.
Yes. I checked the box that I’m the author of the submitted story, so my nick should be light-blue.
Used to use Seamonkey, but the latest Firefox was just too damn fast so I switched. When Seamonkey gets the latest engine, maybe I’ll switch back.
I don’t know if that will ever happen. I’m not sure there is the manpower.
Seamonkey has always been “Firefox but more sane”. Whilst it’s slipping, I think there’s still a need for a project that does this (but uses the Quantum code).
I’d really like to use anything that isn’t Firefox, but addons seem to be a problem with Seamonkey - how do people get around that?
There’s an extension that adds an ‘addon history’ thingamabob to the addons site, so you can select older versions of addons:
https://github.com/lemon-juice/AMO-Browsing-for-SeaMonkey
It’s really imperfect and I have older addons breaking. My heart may soon follow.
I think it’s no coincidence this story dropped just as Intel announced it would ramp up production in Israel.
I could imagine these “security experts” (all with strong ties to Israel) were thrown some bone (the chipset vuln) by an Israel/Intel contact and filled the report up with other bogus vulns to try to make some Hedge-fund bucks with it.
Without doubt, these guys are no researchers in the traditional sense (24h disclosure time, what?) and it all smells very fishy.
what is the significance of pointing out them being Israeli? shouldn’t American researchers be questioned for their affiliation more harshly, as Intel is an American company?
should my own work be questioned because I am also Israeli?
It’s not about Intel being an American company, but the fact they chose Israel for manufacturing. I don’t want to spread false facts or anything, and I only stated it as something I could imagine having happened.
Don’t be offended I pointed out the Israeli connection. I work with Israelis on a daily basis and as with any country, there are good and foul apples found within.
affects any package manager that runs on tags. Simply tag malicious changes beyond the current release and it would be deployed to many users likely with little actual review.
We checksum the sources we use for packages at commit time for this reason. Actually GitHub tags frequently do change and have been a pain, sometimes unintentionally. Which is also why we have a distfile mirror, which avoids the original issue.
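The mitigation described above can be sketched in a few lines of shell (the file name is made up, and real ports trees store the recorded checksum in a separate distinfo-style file): record the sha256 of the release tarball at commit time, then verify it on every later fetch, so a silently re-pointed git tag can no longer change what actually gets built.

```shell
# Stand-in for a fetched release tarball (illustrative only).
DISTFILE=project-1.2.3.tar.gz
printf 'pretend this is a release tarball\n' > "$DISTFILE"

# At commit time: record the checksum in the tree.
EXPECTED=$(sha256sum "$DISTFILE" | awk '{print $1}')

# At build time: refuse the download if it no longer matches.
ACTUAL=$(sha256sum "$DISTFILE" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK"
else
    echo "checksum mismatch for $DISTFILE" >&2
    exit 1
fi
rm -f "$DISTFILE"
```

Pinning an exact commit hash instead of a tag gives a similar guarantee on the git side, since tags are mutable references while commit hashes are content-addressed.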
Are there other kernels that have a better culture, and are easier to approach if one would like a chance to work at this level? One of the BSDs? Something springing from L4 somewhere I don’t know about? Haiku? ReactOS? Something more esoteric? That Rust one whose name escapes me right now?
Linux is so unique in its development style and pace that I think most of those comparisons are misleading. There are probably fewer than 5 or 10 projects you can compare Linux to, and even then it’s hard to.
Something like L4 is done by a small group of people, who probably have physical contact with each other. It’s more like Google’s Fuchsia than Linux. Likewise with Minix – it’s a relatively small group of people and thus cohesive, at least compared to Linux.
As far as I understand, FreeBSD is the “biggest” of the BSDs. Not sure if that means it has the most developers. But I would guess that Linux has 10x the developers of the next biggest one. And the developers come from 10x more diverse institutions (corporations, etc.)
This is my informal sense of things; I’m sure there are numbers, and if anyone has them I’d appreciate it. But I’d be surprised if the pace / #developers isn’t at least 10x, even 100x.
Redox OS is the Rust one. I only know a little bit about it, but it also has the property of being nascent, which will make for a very different culture than Linux. There is a lot “at stake” in Linux, hence the disagreements.
Also, for better or worse, almost all the projects you mention are kernel + user space, not just a kernel.
I don’t mean “I want to make value judgements about which communities are better because they deal with more stuff”, I mean “I would love to hack on kernels but want no part in this kind of environment, where should I go?”
I haven’t hung out here myself, but I’ve heard good things about: https://wiki.osdev.org/Expanded_Main_Page
This is a good time to shill my favourite project (It’s NetBSD, and has great culture), but to be honest - I don’t know many projects with governance as insane as Linux. Even in the article, Daniel Vetter refers to group maintainership and handing out commit access as “more like a standard project”.
I’ve seen people who had potential to be toxic maintainers that drive away contributions but their impact was limited by them not having absolute power.
Also, by having a very weak hierarchy, nobody is immune from being kicked from the project. Toxicity will get you pulled aside and have someone ask you to stop/apologize, continuing will result in being kicked from the project, and I’ve seen it happen in practice.
For anyone who avoids HN, here’s cperciva’s response and my follow-up:
“If you read my 2005 paper, you’ll see that I devoted a section to providing the background on covert channels, dating back to Lampson’s 1973 paper on the topic. I was very much aware of that earlier work. My paper was the first to demonstrate that microarchitectural side channels could be used to steal cryptologically significant information from another process, as opposed to using a covert channel to deliberately transmit information.” (cperciva)
My response:
Hmm. It’s possible you made a previously-unknown distinction, but I’m not sure. The Ware Report that started the INFOSEC field in 1970 put vulnerabilities in three categories: “accidental disclosures, deliberate penetration, and physical attack.” The diagram in Figure 3 (p. 6), with its radiation and crosstalk risks, shows they were definitely considering hardware problems and side channels, at least for EMSEC. When talking of that stuff, they usually treat it as a side effect of program design rather than deliberate.
Prior and current work usually models secure operation as a superset of safe/correct operation. Schell, Karger, and others prioritized defeating deliberate penetration with their mechanisms since (a) you had to design for malice from the beginning and (b) defeating one takes care of the other as a side effect. They’d consider the ability for any Sender to leak to any Receiver to be a vulnerability if that flow violates the security policy. That’s something they might not have spelled out since they habitually avoided accidental leaks with mechanisms. Then again, you might be right where they never thought of it while working on the superset model. It’s possible. I’m leaning toward they already considered side channels to be covert channels given descriptions from the time:
“A covert channel is typically a side effect of the proper functioning of software in the trusted computing base (TCB) of a multilevel system… Also, as we explain later, malicious users can exploit some special kinds of covert channels directly without using any Trojan horse at all.”
“Avoiding all covert channels in multilevel processors would require static, delayed, or manual allocation of all the following resources: processor time, space in physical memory, service time from the memory bus, kernel service time, service time from all multilevel processes, and all storage within the address spaces of the kernel and the multilevel processes. We doubt that this can be achieved in a practical, general purpose processor.”
The description is that it’s an incidental problem arising from normal software functioning, one that can be maliciously exploited with or without a Trojan horse. They focused on penetration attempts since that was the culture of the time (rightly so!) but knew it could be incidental. The second quote also shows they knew just how bad the problem was, with later work finding covert channels in all of those resources. Hu did the timing channels in caches that same year. Wray built an SRM replacement for timing channels the year before. They were all over this area, but without a clear solution that wouldn’t kill the performance or pricing. We may never find one where timing channels, or just secure sharing of physical resources, are concerned.
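To make that Sender/Receiver flow concrete, here’s a toy prime+probe-style simulation. Everything here is my own illustrative assumption (a tiny direct-mapped cache with fake hit/miss costs, nothing like real microarchitectural timing): the point is just that the sender never communicates directly, yet the receiver recovers its bits by timing accesses to a shared cache set.

```python
class ToyCache:
    """Direct-mapped cache: one tag per set; hits are cheap, misses expensive."""
    NUM_SETS = 8
    HIT_COST = 1
    MISS_COST = 100

    def __init__(self):
        self.sets = [None] * self.NUM_SETS  # tag currently cached in each set

    def access(self, addr):
        """Access an address; return the (simulated) time it took."""
        idx, tag = addr % self.NUM_SETS, addr // self.NUM_SETS
        if self.sets[idx] == tag:
            return self.HIT_COST        # hit: line already cached
        self.sets[idx] = tag            # miss: evict old line, load new one
        return self.MISS_COST

def transmit(bits):
    """Sender leaks `bits` to the receiver purely via cache eviction timing."""
    cache = ToyCache()
    received = []
    sender_addr = ToyCache.NUM_SETS     # maps to set 0, but a different tag than addr 0
    for bit in bits:
        cache.access(0)                 # PRIME: receiver loads its line into set 0
        if bit:                         # SENDER: touches set 0 only to send a 1,
            cache.access(sender_addr)   #   evicting the receiver's line
        cost = cache.access(0)          # PROBE: slow re-access => sender was here
        received.append(1 if cost == ToyCache.MISS_COST else 0)
    return received
```

For example, `transmit([1, 0, 1, 1, 0])` returns `[1, 0, 1, 1, 0]`: no byte of data ever crosses between sender and receiver, only contention for a shared resource, which is exactly why the quote above doubts you can close all such channels in a practical general-purpose processor.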
Now, as far as your work, I just read it for a refresher. It seems to assume, not prove, that the prior research never considered incidental disclosure. Past that, you do a great job identifying and demonstrating the problem. I want to be extra clear here that I’m not claiming you didn’t independently discover this or do something of value: I give researchers like you plenty of credit elsewhere for researching practical problems, identifying solutions, and sharing them. I’m also grateful for those like you who deploy alternatives to common tech, like scrypt and tarsnap. Much respect.
My counter is directed at the misinformation rather than at you personally. My usual activity. I’m showing that this was a well-known problem with potential mitigations presented at security conferences; that one product was actually built to avoid it; that it was highly cited, with subsequent work in high-security imitating some of its ideas; that this prior work and research isn’t reaching newer people concerned about similar problems; that some people in the security field are also discouraging or misrepresenting it on top of that; and that I’m giving the forerunners their due credit while raising awareness of that research to potentially speed up development of the next, new ideas. My theory is that people like you might build even greater things if you know about prior discoveries in problems and solutions, especially the root causes behind multiple problems. That I keep seeing prior problems re-identified makes me think it’s true.
So, I just wanted to make that clear, as I was mainly debunking this recent myth of cache-based timing channels being a 2005 problem. It was rediscovered in 2005, perhaps under a new focus on incidental leaks, in a field where the majority of breakers or professionals either didn’t read much prior work or went out of their way to avoid it, depending on who they are. Others and I studying such work have also posted that specific project in many forums for around a decade. You’d think people would’ve checked out or tried to imitate something in the early secure VMM’s or OS’s by now when trying to figure out how to secure VMM’s or OS’s. For some reason, the majority of industry and FOSS don’t. Your own conclusion echoes that problem of apathy:
“Sadly, in the six months since this work was first quietly circulated within the operating system security community, and the four months since it was first publicly disclosed, some vendors failed to provide any response.”
In case you wondered, that was also true in the past. Only the vendors intending to certify under the higher levels of TCSEC looked for or mitigated covert channels. The general market didn’t care. There’s a reason: the acquisition regulations said vendors wouldn’t get paid their five- to six-digit licensing fees unless they proved to evaluators that they applied the security techniques (e.g. covert-channel analysis). They also knew the evaluators would re-run what they could of the analyses and tests to look for bullshit. It’s why I’m in favor of security regulations and certifications, since they worked under TCSEC. Just gotta keep what worked while ditching the bullshit, like excess paperwork, overly prescriptive requirements, and so on. DO-178B/DO-178C has been really good, too.
Whereas understanding why FOSS doesn’t give a shit, I’m not sure on. My hypothesis is cultural attitudes, how security knowledge disseminates in the groups, and rigorous analysis of simplified software not being fun to most developers versus the piles of features they can quickly throw together in a favorite language. Curious what your thoughts are on the FOSS side of it, given that the FOSS model always had the highest potential for high-security given its labor advantage. As far as high-security goes, FOSS never delivered it even once, with all the strong FOSS made by private parties (especially in academia) or by companies that open-sourced it after the fact. Proprietary has them beat, from kernels to usable languages, several to nothing.
I think at least in FOSS people were discussing how timing attacks make certain protection models unfeasible, e.g. KASLR can’t do anything to protect against local users who can already run code, because the hardware is leaking timing information, and JavaScript is an inherently bad idea. Not desperately trying to combat the leaks.
Very surprising that the BSDs weren’t given a heads-up by the researchers. It feels like there would be a list at this point of people who could rely on this kind of heads-up.
The more information and statements that come out, the more it looks like Intel gave the details to nobody beyond Apple, Microsoft and the Linux Foundation.
Admittedly, macOS, Windows, and Linux covers almost all of the user and server space. Still a bit of a dick move; this is what CERT is for.
Plus, the various BSD projects have security officers and secure, confidential ways to communicate. It’s not significantly more effort.
Right.
And it’s worse than that when looking at the bigger picture: it seems the exploits and their details were released publicly before most server farms were given any heads-up. You simply can’t reboot whole datacenters overnight, even if the patches are available and you completely skip over the vetting part. Unfortunately, Meltdown is significant enough that it might be necessary, which is just brutal; there have to be a lot of pissed ops out there, not just OS devs.
To add insult to injury, you can see Intel PR trying to spin Meltdown as some minor thing. They seem to be trying to conflate Meltdown (the most impactful Intel bug ever, well beyond f00f) with Spectre (a new category of vulnerability) so they can say that everybody else has the same problem. Even their docs say everything is working as designed, which is totally missing the point…
Wasn’t there a post on here not long ago about Theo breaking embargos?
Note that I wrote and included a suggested diff for OpenBSD already, and that at the time the tentative disclosure deadline was around the end of August. As a compromise, I allowed them to silently patch the vulnerability.
He agreed to the patch on an already extended embargo date. He may regret that but there was no embargo date actually broken.
@stsp explained that in detail here on lobste.rs.
So I assume Linux developers will no longer receive any advance notice since they were posting patches before the meltdown embargo was over?
I expect there’s some kind of risk/benefit assessment. Linux has lots of users so I suspect it would take some pretty overt embargo breaking to harm their access to this kind of information.
OpenBSD has (relatively) few users and a history of disrespect for embargoes. One might imagine that Intel et al thought that the risk to the majority of their users (not on OpenBSD) of OpenBSD leaking such a vulnerability wasn’t worth it.
Even if, institutionally, Linux were not being included in embargos, I imagine they’d have been included here: this was discovered by Google Project Zero, and Google has a large investment in Linux.
Actually, it looks like FreeBSD was notified last year: https://www.freebsd.org/news/newsflash.html#event20180104:01
By late last year you mean “late December 2017” - I’m going to guess this is much later than the other parties were notified.
macOS 10.13.2 had some fixes related to Meltdown and was released on December 6th. My guess is vendors with tighter business relationships with Intel (Apple, Microsoft) started getting info on it around October or November. Possibly earlier, considering the bug was initially found by Google back in the summer.
Windows had a fix for it in November according to this: https://twitter.com/aionescu/status/930412525111296000
A sincere but hopefully not too rude question: Are there any large-scale non-hobbyist uses of the BSDs that are impacted by these bugs? The immediate concern is for situations where an attacker can run untrusted code like in an end user’s web browser or in a shared hosting service that hosts custom applications. Are any of the BSDs widely deployed like that?
Of course given application bugs these attacks could be used to escalate privileges, but that’s less of a sudden shock.
There are/were some large-scale deployments of BSDs/derived code: Apple AirPort Extreme, Dell Force10, Junos, etc.
People don’t always keep track of them, but sometimes a company shows up and then uses it for a very large number of devices.
Presumably these don’t all have a cron job doing cvsup; make world; reboot against upstream *BSD. I think I understand how the Linux kernel updates end up on customer devices but I guess I don’t know how a patch in the FreeBSD or OpenBSD kernel would make it to customers with derived products. As a (sophisticated) customer I can update the Linux kernel on my OpenWRT based wireless router but I imagine Apple doesn’t distribute the Airport Extreme firmware under a BSD license.
This advice goes for everything, except “use your distro version”.
The practice of “distros” maintaining the version they had at release and backporting anything that sounds security-related is detrimental to security, IMHO. Security problems very often aren’t reported as such, because the people writing software aren’t necessarily good at writing exploits.