1.  

    APFS was publicly released slightly less than a year ago. The entire timeline of the project seems extraordinarily, even foolishly rushed. Is it really such a surprise that there are bugs like this that suggest Apple cannot deliver reliability and stability for something as important as a filesystem in such a short time?

    This is a genuine question; I think the answer is “it’s not” but I’m not sure.

    1.  

      A common adage is that it takes a decade to mature a file system. And my experience with ZFS more or less confirms this. Apple will be delivering APFS broadly with 3-4 years of development, so it will need to accelerate quickly to maturity.

      It seems like they were acutely aware of the truncated timeline (source).

    1. 4

      Does anyone know why previous efforts weren’t merged? Is upstream interested in these kind of changes?

      1. 3

        Interesting to see some backlash over this, Dave Winer’s objections have caught my eye in particular.

        On the one hand I’m not sure Google should be punishing sites for being HTTP-only.

        On the other hand, what is the open web if your ISP can inject ads into a page where there are none?

        1. 2

          I didn’t see a link to Winer’s objection in the linked article. Do you have a reference?

            1. 4

              He sounds a bit, well

              HTTPS is going to burn huge portions of the open web

              His entire shtick seems to be that he thinks HTTPS is a conspiracy by Google to control the web, somehow.

              1. 3

                He seems to be conflating Google’s motives, which in fairness are probably not altruistic, with the technology itself, which is obviously pretty sound.

                1. 2

                  I’ve literally never seen so much FUD in my life. He must have some fundamental misconception about how HTTPS works. I just don’t see how he could be arguing these points otherwise.

                  I mean, I would be mad if Google really was doing what he thinks they’re doing. But they’re not. He’s also totally missing (ignoring?) the fact that Mozilla is also taking steps matching Google’s.

                  1. 4

                    I hate to say it because I have a lot of respect for his work, but I think basically he’s got a lot of domains and can’t be bothered converting them. I totally get the objections to the way Google is approaching this, but going after HTTPS itself is dumb.

                    Why would you think it’s a bad thing that you can guarantee that the site you are viewing has not been tampered with?

                    I’ve seen him call out Mozilla too in fairness.

                    1. 1

                      Meh. Honestly I have no issues with the way Google is approaching this. They (and Mozilla) give plenty of time before making even the tiniest changes, and in the end really all they’re doing is changing the UI to reflect reality.

                      And without them doing that, people exactly like Winer just wouldn’t care.

                    2. 3

                      I’m skimming through, trying to understand it, and he never really states an objection anywhere that I can see. I am familiar with several reasonable objections to the concentration of power created by the CA system and to the burden it imposes on content creators; I just don’t see Winer actually expressing any of them.

                2. 1

                  On the other hand, what is the open web if your ISP can inject ads into a page where there are none?

                  Maybe this is better served by adding signatures to basic HTTP rather than forcing HTTPS everywhere?

                  1. 2

                    Wouldn’t that involve the same trust infrastructure but without actually encrypting the traffic?

                    1. 4

                      Not completely. The benefit is that intermediaries can cache it if required, and clients can verify the signature only when needed. With HTTPS forced everywhere, a lot of caching infrastructure that existed previously has become useless, without any alternatives. This matters especially in low-bandwidth countries or communities relying on low-bandwidth gateways.
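
                      Something like the following sketch illustrates the idea: the origin signs the body once, a caching intermediary can serve body and signature unchanged, and the client verifies against the origin’s public key. (Ed25519 via the `cryptography` package is just one possible choice here; how the public key is distributed and trusted is hand-waved.)

                      ```python
                      from cryptography.hazmat.primitives.asymmetric import ed25519

                      # Origin server: sign the document once with its private key.
                      private_key = ed25519.Ed25519PrivateKey.generate()
                      body = b"<html>...the cacheable document...</html>"
                      signature = private_key.sign(body)

                      # Client: verify against the origin's public key (obtained out of band,
                      # e.g. through the same CA-style infrastructure HTTPS already relies on).
                      # A caching intermediary can store and serve body + signature untouched.
                      public_key = private_key.public_key()
                      public_key.verify(signature, body)  # raises InvalidSignature if tampered with
                      ```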

                  1. 1

                    Yikes. Meant to put that into the title, obviously, not the URL. Apparently I can’t edit the URL though.

                  1. 12

                    I’ve changed my tune on Bitcoin recently for two reasons, despite still liking its ideals:

                    1. The government intervening in the economy is sometimes a feature, not a bug. In times of economic crisis, for example, the government has unique powers to help. Sometimes it is a bug, but Bitcoin seems to assume that any intervention by any centralized entity, at ALL, is malicious. In fact I intend to take an economics class to be better informed on this very issue.

                    2. The energy use is unconscionable. We’re already destroying the environment at a ridiculous pace and the Bitcoin space (to me, at least, bearing in mind that I don’t REALLY pay attention) seems to be full of anarchists who are determined to have their uncontrollable system at any cost, with absolutely no regard to seemingly unrelated consequences.

                    1. 13

                      The government intervening in the economy is sometimes a feature, not a bug.

                      If by “sometimes a feature” you mean “the only thing that prevents repeated economic collapse” then yes.

                      If you’re interested at all then definitely take a macroeconomics class. And history while you’re at it, especially pre-industrial and early industrial America.

                      1. 5

                        Sometimes == every time bitcoiners fall for a scam and lose money (and suddenly drop all the libertarian stuff and start crying for government help).

                        Look at /r/Buttcoin, the amount of fraud in the cryptocurrency space is beyond ridiculous.

                        1. 1

                          I agree with your observation, but I think understanding the cause is more useful than poking fun at it. I’ve gotten the sense that falling for scams is an expected cost to a certain constituency, specifically the people who are using cryptocurrency as a medium of exchange for things the governments they live under don’t approve of. I don’t expect the prevalence of scams to scare that group away. People who don’t share that driving concern should take note and understand that it’s always likely to be high-risk.

                        2. 1

                          Not that I’m in favor of Bitcoin at all (and I seriously agree with your first point) but I’ve also seen arguments that Bitcoin is used in some places (perhaps it was China?) to help mop up excess energy from renewable sources when they’re at peak output hours. I think the argument went that when the sun is high in the sky on a clear day, or when the wind is really blowing, energy companies will often turn off windmills or solar panels to avoid producing too much energy. In this case, Bitcoin can help use up that excess energy, and by turning it into cash, become a sort of renewable subsidy that makes it more attractive to build more renewable energy sources. I do know there are definitely places where a renewables-powered grid overproduces so much that energy prices become negative.

                          Perhaps this isn’t true, but I think it illustrates that maybe the energy problem is a more complex issue than it appears?

                          1. 7

                            Sounds like some fairy tale told by miners, implying they’re not mining 24/7.

                            1. 3

                              Mm, that matches my understanding of how energy production works, but it’s also the case that that energy could go into other things. I think it was actually here on lobste.rs that I learned about kinetic energy storage (roll a ball up a hill, to roll it back down later… that sort of thing) and how it’s used to smooth out energy demand.

                              There’s no way that Bitcoin miners aren’t making things difficult for grid operators. I agree with @isra17 that it’s an extremely self-serving claim.

                            2. -1

                              The energy seems like a fairly trivial cost to me. It’s a fraction of a percent. I’m willing to pay that price, and I’m also optimistic about the future of renewable energy.

                              1. 13

                                The per-transaction electricity cost was 215 kWh back in November - that’s not trivial in the slightest. At market rates where I live it’s $7 or so.

                                Credit card processors use several orders of magnitude less per payment made.
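
                                As a back-of-the-envelope check (the 215 kWh figure is from the comment above; the electricity price is an assumption inferred from the “$7 or so” claim):

                                ```latex
                                % Per-transaction cost at the cited energy figure and an assumed rate:
                                215\,\mathrm{kWh} \times \$0.033/\mathrm{kWh} \approx \$7
                                % At a more typical retail rate of ~$0.12/kWh it would be closer to $26.
                                ```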

                                1. 1

                                  Well, in dollar terms it either is worth it or it’s not. I’m not particularly concerned about the environmental impact.

                                  1. 9

                                    And whom do you expect to deal with the environmental consequences?

                                    1. 2

                                      whoever’s dealing with it for the other 99.9% of the environmental impact from non-renewable energy sources

                                      1. 7

                                        That would be your descendants.

                                        1. 1

                                          o/ yo

                                          1. 0

                                            if their solution ends up involving defining standards for sufficiently useful computations, well, uh, godspeed

                                  2. 9

                                    A fraction of a percent of what? Energy use? Today Bitcoin is estimated to use as much energy as the country of Denmark. By 2020 it’s estimated it’ll use literally as much energy as the entire planet uses today. I don’t particularly see how that’s trivial. Source: https://arstechnica.com/tech-policy/2017/12/bitcoins-insane-energy-consumption-explained/

                                    1. 6

                                      Today Bitcoin is estimated to use as much energy as the country of Denmark

                                      That’s far out of date. Denmark consumes approximately 3.5GW; bitcoin is now at about 5GW, somewhere between Hong Kong and Bangladesh.

                                      https://digiconomist.net/bitcoin-energy-consumption

                                      By 2020 is estimated it’ll use literally as much energy as we use in the entire planet today.

                                      No credible extrapolation is possible, obviously. Energy usage will drop fast when the bubble bursts.

                                      1. 0

                                        Because Denmark has like 5 million people? I’m about as worried about Bitcoin as I am about another Denmark popping up (the world gains like 12x the population of Denmark every year)

                                        edit: re 2020: https://xkcd.com/605/

                                      2. 1

                                        I know next to nothing about cryptocurrencies, but my understanding is that Proof of Stake means we don’t need to use this energy. Many coins don’t use this because they weren’t sure whether it was secure. But recently the IOHK team has proven a secure Proof of Stake algorithm for Cardano.
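
                                        For intuition, the energy saving comes from replacing the hashing race with stake-weighted selection of the next block producer. A toy sketch (validator names and stakes are invented; real protocols such as Ouroboros use a verifiable random function rather than plain `random` so the election can be publicly audited):

                                        ```python
                                        import random

                                        # Toy proof-of-stake leader election: the chance of producing the next
                                        # block is proportional to stake, so no energy-hungry hash race is needed.
                                        stakes = {"alice": 40, "bob": 35, "carol": 25}  # hypothetical validators

                                        leader = random.choices(list(stakes), weights=list(stakes.values()), k=1)[0]
                                        print("Next block produced by:", leader)
                                        ```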

                                        Is there a downside to this approach?

                                        1. 4

                                          The “Criticism” section on the Wikipedia article on Proof of Stake lists a few:

                                          https://en.wikipedia.org/wiki/Proof-of-stake#Criticism

                                          Note that Wikipedia is an ideological battleground when it comes to cryptocurrencies, so make sure to check the citations for a more comprehensive view.

                                          1. 2

                                            I can’t find the source for this despite having seen it just last night (sigh) but IOHK apparently makes you generate your own seed, which has resulted in lots of people using web-based generators that then steal your money. This is a really bad idea and it’s not that hard to read from /dev/urandom and then say “here write this thing down.”

                                            So I wouldn’t really trust them to have done stuff correctly, including Proof of Stake. Obviously that doesn’t mean it can’t be done or even that they haven’t done it - just that I would like to see a lot of scrutiny from experts.

                                            1. 2

                                              So I wouldn’t really trust them to have done stuff correctly, including Proof of Stake.

                                              The point is you don’t have to, they have proofs.

                                      1. 4

                                        Oh, the author was thinking of dead trees when he said ‘immutable’. I thought he was going somewhere else with this, like IPFS.

                                        1. 1

                                          Heh. This was my immediate thought, and I was waiting for them to bring it up basically the entire time.

                                          1. 1

                                            Would using IPFS remedy the issue of temporal permanence raised in the first paragraphs of the linked post?

                                            I’m asking because I don’t really know enough about IPFS to compare to “normal” content storage.

                                            1. 1

                                              IIUC, addressing the impermanence problem was one of the design goals of IPFS. They tout it as a solution that provides permanence. I can’t personally attest to their claim. Here are some marketing materials: https://ipfs.io/#why

                                              1. 1

                                                Thanks for this link! For some reason I thought IPFS was just the latest iteration of the FreeNet idea. It looks as if there’s a bit more meat in it than that.

                                                1. 2

                                                  I don’t see where the meat is. It always looked like a worse freenet to me. Especially with regard to retention (which is a real issue on freenet), I don’t see any solution. “Each network node stores only content it is interested in” sounds like a real problem. In practice, you’d have to always run your server to ensure the content you want available is available.

                                          1. 14

                                            Why just blogs? I believe everything should be kept as light as possible. And as static as possible. And as clean as possible. Etc.

                                            Sadly, the web has “too much” power that people abuse “too easily”, wanting to create a new, never-before-seen design and impose their imagined conceptions on a website without regard for the actual medium. Then again, there are developers too lazy to think about how to implement something intelligently or properly, who just grab the newest multi-megabyte framework regardless of the task. I wonder how long this mentality will go on?

                                            And on a related note, Neocities is also highly recommended as a free host for static content.

                                            1. 6

                                              I wonder how much better the web would be if browsers artificially slowed down all network requests when the devtools were open and didn’t provide a way to disable that functionality.

                                              1. 2

                                                I’m naively guessing that people would use different browsers.

                                                If, on the other hand, it were the ISPs doing the throttling, that would be an entirely different thing.

                                                1. 1

                                                  Right, I was assuming a magical state of the world in which it would be coordinated across vendors.

                                                  Also ISPs getting to know when I’m messing around with the devtools panel sounds beyond disastrous.

                                                2. 1

                                                  They are already slowed down by ads, but people get a faster connection.

                                                  Faster connections enable (some of) us to adapt to the bloatedness of the web instead of transferring more content, it seems.

                                              1. 2

                                                https://blogs.msdn.microsoft.com/philipsu/2006/06/14/broken-windows-theory/

                                                Windows code is too complicated. It’s not the components themselves, it’s their interdependencies. An architectural diagram of Windows would suggest there are more than 50 dependency layers (never mind that there also exist circular dependencies). After working in Windows for five years, you understand only, say, two of them. Add to this the fact that building Windows on a dual-proc dev box takes nearly 24 hours, and you’ll be slow enough to drive Miss Daisy.

                                                I haven’t been around in the industry too long, i was in school when this blog entry was posted. But I’ve seen a few projects struggle and fail because of bad architecture and increasing technical debt. The OPs article definitely reflects the struggle between new features, legacy support, and paying down the technical debt (improving security, etc.).

                                                1. 2

                                                  The microservices that are all the rage these days add a whole new layer of challenge to understanding dependencies. While monoliths have their own challenges, at least all of the information is there to understand what is connected. I’m still not sure this has been adequately solved.

                                                  1. 2

                                                    Arguably microservices can simplify this dependency tree tremendously. In the world to date, it has been essentially impossible to compile many differently versioned libraries together into one monolithic application, which is what generally happens when you have a large number of teams doing separate development.

                                                    With microservices, again arguably, encapsulation happens at the whole-service layer, so each team is free to develop using whatever versions they like, and just provide HTTP (or whatever) as their high level API.

                                                    Where this tends to break down, in my experience, is (a) where true shared dependencies exist, which can happen if you either were bad at data modeling to begin with or your needs organically grew away from your original design, and (b) operationally, in a world of incredibly broken and insecure software, processors, etc., resulting from C (and now JS) and the shared memory model, where it is no longer possible to understand what in the opaque blobs needs patching.
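
                                                    To make the encapsulation point concrete: a team’s entire public surface can be one small HTTP service, pinned internally to whatever library versions it likes. A sketch using only the standard library (the endpoint and payload are invented for illustration):

                                                    ```python
                                                    from http.server import BaseHTTPRequestHandler, HTTPServer
                                                    import json

                                                    # The whole contract consumers see: GET /<sku> returns a JSON price.
                                                    # Everything behind this handler is the owning team's private business.
                                                    class PriceHandler(BaseHTTPRequestHandler):
                                                        def do_GET(self):
                                                            body = json.dumps({"sku": self.path.strip("/"), "price_cents": 499})
                                                            self.send_response(200)
                                                            self.send_header("Content-Type", "application/json")
                                                            self.end_headers()
                                                            self.wfile.write(body.encode())

                                                    HTTPServer(("", 8080), PriceHandler).serve_forever()
                                                    ```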

                                                    1. 1

                                                      C obviously has memory bugs but I’m curious what insecurity you see stemming from JS. Is it the automatic type casting? (I write JavaScript every day and think a good portion of the new parts of the language are good, but I will fully admit it spent its formative years on crack.)

                                                      1. 1

                                                        I don’t see how adding more dependencies simplifies anything, that can only make it more complicated. It may be convenient, but it’s not simpler. And in order to have that architecture one needs to have network protocols and serialization going on which has a performance and cognitive cost. There certainly are reasons to have a microservice architecture but I have a hard time seeing simplification as one of them.

                                                      2. 1

                                                        Microservices exist mostly to facilitate development by many teams on a large system. They are one of the best examples of Conway’s Law.

                                                        You are correct that they add complexity, and they tend to be adopted regardless of whether they solve a real problem.

                                                    1. 1

                                                      The author should file an issue at https://github.com/WICG/interventions.

                                                      1. 2

                                                        Is this the Internet equivalent of a bomb threat?

                                                        1. 7

                                                          I don’t think so. In my reading, it is that Let’s Encrypt basically did the right thing: mitigate (by shutting down) and inform quickly, provide a detailed writeup ASAP (which they did, 7 hours later).

                                                          1. 1

                                                            Heh, I agree. I was just making a (bad) joke :P

                                                            1. 1

                                                              Too close to home… too close to home ;).

                                                        1. 6

                                                          Man I would much prefer a separation between the doc web and the app web. I’d be interested to see a secure browser that can be composed with small protocol downloaders and document viewers.

                                                          1. 5

                                                            But where do you draw that line? There will always be things that are a mix of both. For example, a public read-only Google Doc, or an article on Medium. What about a YouTube video? You could argue that the video’s actually a document because it’s mostly static, but what about the comments?

                                                            Even search engines are a mix; clearly they’re not documents but they’re a critical part of the “documents web”. In fact, they don’t really work so well with the app web.

                                                            I think this is a strength of the web, not a detriment.

                                                            1. 5

                                                              But where do you draw that line?

                                                              One obvious line could be JavaScript. If it works without JS it is part of the “documents web”.

                                                              1. 5

                                                                Gmail runs without JavaScript. And a lot of people sure spend a lot of time complaining about apps that don’t gracefully degrade in the same way.

                                                                1. 3

                                                                  Meaningful URLs also seems like a prerequisite.

                                                                  I could imagine a web mail client that used URLs correctly and presented itself in terms of each message or thread as its own document; it would be a big improvement over any web mail client I’ve actually used.

                                                              2. 2

                                                                I don’t think one should try to concive of sucb a separation, while at the same time expecting that everything would stay the same, and no tradeoffs would be payed. And ultimately, “app web” would be just what we have today, so if you were to insist on using GDoc, Medium or YouTube, all products of the current way of things, they could still exist. Doc web could in that case just as well be Gopher+Markdown or something, and taking this example, there shouldn’t be any issue with search engines either, “app web” (or maybe a third system, so that we were to have a holy Trinity of the web) would host a search engine, with a few http links and a few gopher links. Since URIs exist, this really shouldn’t be too much of an issue.

                                                                1. 1

                                                                  I guess I still don’t understand what the purported benefit is here. A “secure” browser as OP mentioned? Would Firefox with NoScript meet that criteria? Or is the argument that HTML in general is too complicated to parse and as such represents a security risk?

                                                                  1. 2

                                                                    If you’d ask me, simplicity is preferable to complexity. When something is simple, it’s easier to implement (hence multiple implementation can compete), easier to maintain and certainly, as you mention, it has a higher chance of being more secure. Nowadays, a browser nearly fulfills the function of a virtual machine, of sorts. It’s implementation is, partially for historical reasons, is so difficult and unhandy, that for practical purposes, 3 or 4 web engines predominate, and none of them are satisfactory: Memory leaks, security holes, incompatibility with standards, slow, etc. And it’s not like someone could just fix the issue by implementing a new engine - the problem is the situation itself, that necessitates browsers.

                                                                    Splitting the task up into separate frameworks appropriate to each job could help remedy these problems - and no, this isn’t solvable by installing a plugin.

                                                                    1. 1

                                                                      I guess I was talking more about the app part mostly abandoning HTML documents. You could keep using HTML for the document part (and other formats), while apps would be completely JavaScript (or other languages). So the secure browser’s role would be mostly about making this seamless and safe to use.

                                                                      For instance, a blog post could be a Markdown document, and its comments could be a separate app that just has a doc URL, an auth system, comments, and comment editing/managing features. Just use a window manager to have them both side by side.

                                                                      But I’m mostly spitballing here

                                                                      Edit: I mean, this is already a bit similar to how I use Tor for static websites and Firefox for web apps, but I’d like more flexibility in terms of protocol use, file formats and VMs

                                                              1. 3

                                                                I think the answer is simple, and most people in this thread don’t want to admit it or are too idealistic: People create GitHub accounts en masse, and use these to artificially boost their own (or someone else’s) repositories’ reputation. After all, who hasn’t had to decide between two similar GitHub projects, and chosen one based on the number of stars or forks they had?

                                                                1. 2

                                                                  That’s it. I hadn’t thought of that but it makes perfect sense.

                                                                  I also think I chose a bad example here, because this person could definitely just be saving a whole bunch of repos for later.

                                                                  1. 1

                                                                    1.3k repositories for later? Maybe, why not? But the person certainly seems to have quite a lot planned.

                                                                    I just took a look at your example again, and the only profile they are following is one by “Brainlabs Digital”: https://github.com/BrainlabsDigital - they seem to be some data analysis organisation, so either the profile you posted is just an account they use to gather data from GitHub, or “Brainlabs” is just an elaborate scheme to make people believe exactly that, and prevent them from thinking that it’s just a reputation bot. Both cases are equally likely, if you ask me.

                                                                    1. 1

                                                                      Well, at some point maybe they were meant for ‘later’ and then they moved on to other things :) I mean, I’m only a mild GitHub addict, and I have >500 repos starred (and for probably 90% of them I thought there was a chance I might use them at some point).

                                                                      The odd thing that strikes me is there is so much forkage going on but not a single contribution.

                                                                      1. 1

                                                                        I use stars extremely loosely. If I find a repository even remotely interesting, even if it’s e.g. in a language I don’t use, I star it; consequently I have 1.4k stars.

                                                                  1. 1

                                                                    Honestly I agree with most of what the author wrote, but I wish they hadn’t been so forceful. Lots of the points were stated like facts even when they weren’t (e.g. the various things the author called “dead ends”).

                                                                    1. 3

                                                                      It always bothers me when people just complain, loudly, when things like this happen. Shouldn’t developers of all people be sympathetic to a bug getting through and wreaking havoc? It probably wasn’t anyone’s fault and npm was probably working on it super hard so I just don’t see the point of being so negative. Like, being able to install JavaScript just isn’t that important, so just shut up and let them fix it, you know? #hugops

                                                                      1. 13

                                                                        Shouldn’t developers try to learn from mistakes and design systems that are more resilient? Or anti-fragile if I understand the term correctly.

                                                                        There was a thread about npm not that long ago comparing the OpenBSD ports tree (or Debian apt, if that’s your poison) to npm, and left-pad came up, and the response was “that’ll never happen again.” Here we are, it happened again, but somehow OpenBSD and Debian have maintained their disaster-free records. Were they just lucky? Or is it possible there’s something about their design that reduces risk? Is there anything we can learn besides “shit happens, better luck next time”?

                                                                        1. 3

                                                                          Debian’s package set is intended to work together. There are policies and maintainers and tools. Things like npm are more like a big heap of unvetted, unmaintained (except by upstream), packages that can change or disappear at any time

                                                                          1. 3

                                                                            What I am categorically not saying is “don’t criticize npm” (or anyone who has an outage). All I’m saying is that during the actual incident people shouldn’t berate ops teams since at best that does nothing and at worst adds to presumably already very high stress levels which may even impede progress towards restoring services. I’m not really sure that I have a super well-defined gripe other than “don’t you think they already know, duh??”

                                                                            Hope that clarifies what I meant since for whatever reason talking about this outage seems to make me phrase things in the most misleading possible way :P

                                                                            1. 2

                                                                              Ah, gotcha. Funny enough, I just watched the James Mickens talk where he uses the frenemy as a service monitoring service. Tell all your high school classmates about your cool startup to make them jealous, then they’ll always be the first to text you when something doesn’t work.

                                                                            2. 0

                                                                              I believe this issue doesn’t have the same culprit as left-pad, but we’ll see what the npm team figures out.

                                                                              Also, IIRC npm is running a much bigger operation than Debian/OpenBSD packages, so I wouldn’t try to compare them. npm just has a way bigger chance of screwing something up and making many more people upset, simply because of how much bigger they are than OpenBSD or Debian.

                                                                              1. 2

                                                                                So scale may have something to do with it. How large should a package manager be? One answer is infinite, but I’m not convinced that’s the case. If the answer is finite, what is it? Is there an upper bound on packages in a reliable ecosystem? Should we cap package managers at that number? Why or why not?

                                                                            3. 4

                                                                              How many times do you go back to the same restaurant after getting food poisoning? I realize these people might be good intentioned but at some point it’s time to cut your losses and find a better place to patronize. With better standards.

                                                                            1. 6

                                                                              Very surprising that the BSDs weren’t given a heads-up by the researchers. Feels like there would be a list at this point of people who could rely on this kind of heads-up.

                                                                              1. 13

                                                                                The more information and statements that come out, the more it looks like Intel gave the details to nobody beyond Apple, Microsoft and the Linux Foundation.

                                                                                  Admittedly, macOS, Windows, and Linux cover almost all of the user and server space. Still a bit of a dick move; this is what CERT is for.

                                                                                1. 5

                                                                                  Plus, the various BSD projects have security officers and secure, confidential ways to communicate. It’s not significantly more effort.

                                                                                  1. 7

                                                                                    Right.

                                                                                    And it’s worse than that when looking at the bigger picture: it seems the exploits and their details were released publicly before most server farms were given any head’s up. You simply can’t reboot whole datacenters overnight, even if the patches are available and you completely skip over the vetting part. Unfortunately, Meltdown is significant enough that it might be necessary, which is just brutal; there have to be a lot of pissed ops out there, not just OS devs.

                                                                                    To add insult to injury, you can see Intel PR trying to spin Meltdown as some minor thing. They seem to be trying to conflate Meltdown (the most impactful Intel bug ever, well beyond f00f) with Spectre (a new category of vulnerability) so they can say that everybody else has the same problem. Even their docs say everything is working as designed, which is totally missing the point…

                                                                                2. 7

                                                                                  Wasn’t there a post on here not long ago about Theo breaking embargos?

                                                                                  https://www.krackattacks.com/#openbsd

                                                                                  1. 12

                                                                                    Note that I wrote and included a suggested diff for OpenBSD already, and that at the time the tentative disclosure deadline was around the end of August. As a compromise, I allowed them to silently patch the vulnerability.

                                                                                    He agreed to the patch on an already extended embargo date. He may regret that but there was no embargo date actually broken.

                                                                                    @stsp explained that in detail here on lobste.rs.

                                                                                    1. 10

                                                                                      So I assume Linux developers will no longer receive any advance notice since they were posting patches before the meltdown embargo was over?

                                                                                      1. 3

                                                                                        I expect there’s some kind of risk/benefit assessment. Linux has lots of users so I suspect it would take some pretty overt embargo breaking to harm their access to this kind of information.

                                                                                        OpenBSD has (relatively) few users and a history of disrespect for embargoes. One might imagine that Intel et al thought that the risk to the majority of their users (not on OpenBSD) of OpenBSD leaking such a vulnerability wasn’t worth it.

                                                                                        1. 5

                                                                                          Even if, institutionally, Linux were not being included in embargos, I imagine they’d have been included here: this was discovered by Google Project Zero, and Google has a large investment in Linux.

                                                                                    2. 2

                                                                                      Actually, it looks like FreeBSD was notified last year: https://www.freebsd.org/news/newsflash.html#event20180104:01

                                                                                      1. 3

                                                                                        By late last year you mean “late December 2017” - I’m going to guess this is much later than the other parties were notified.

                                                                                        macOS 10.13.2 had some fixes related to Meltdown and was released on December 6th. My guess is vendors with tighter business relationships to Intel (Apple, MS) started getting info on it around October or November. Possibly earlier, considering the bug was initially found by Google back in the summer.

                                                                                        1. 2

                                                                                          Windows had a fix for it in November according to this: https://twitter.com/aionescu/status/930412525111296000

                                                                                      2. 1

                                                                                        A sincere but hopefully not too rude question: Are there any large-scale non-hobbyist uses of the BSDs that are impacted by these bugs? The immediate concern is for situations where an attacker can run untrusted code like in an end user’s web browser or in a shared hosting service that hosts custom applications. Are any of the BSDs widely deployed like that?

                                                                                        Of course given application bugs these attacks could be used to escalate privileges, but that’s less of a sudden shock.

                                                                                        1. 1

                                                                                          DigitalOcean and AWS both offer FreeBSD images.

                                                                                          1. 1

                                                                                            there are/were some large scale deployments of BSDs/derived code. apple airport extreme, dell force10, junos, etc.

                                                                                            people don’t always keep track of them but sometimes a company shows up then uses it for a very large number of devices.

                                                                                            1. 1

                                                                                              Presumably these don’t all have a cron job doing cvsup; make world; reboot against upstream *BSD. I think I understand how the Linux kernel updates end up on customer devices but I guess I don’t know how a patch in the FreeBSD or OpenBSD kernel would make it to customers with derived products. As a (sophisticated) customer I can update the Linux kernel on my OpenWRT based wireless router but I imagine Apple doesn’t distribute the Airport Extreme firmware under a BSD license.

                                                                                        1. 9

                                                                                          There’s an incredible lengthy reply in this thread, which is completely made up.

                                                                                          The thing is that to run those 1 & 0, it has to, technically, store them in a physical way so that it can be passed through to what’s next. As it’s 0 & 1, it’s not encrypted or protected. It’s pure raw data. The encryption and protection are usually done after the data has passed through the processor… by a task handled by the processor (ironically). Now, what they have “found” (which is false. it’s has been known since the 80’s) is that it’s possible to access this raw data by force feeding some 0 & 1 to the processor which can be hidden in anything and makes it start an hidden small software which, for example, could send a copy of the raw data through the web.

                                                                                          Fascinating.

                                                                                          1. 8

                                                                                            It’s not just completely made up, it’s gibberish.

                                                                                            1. 3

                                                                                              This almost sounds like it was written by some AI…

                                                                                              1. 3

                                                                                                Looks more like a markov chain to me.

                                                                                            2. 1

                                                                                              I saw hints of the truth in there which I thought were pretty funny. Like the bit about force feeding 1s and 0s I assumed was referring to specially crafted instructions to starve the CPU cache or trick the branch predictor or something. Hilarious.

                                                                                              Permalink for those who want it: https://www.epicgames.com/fortnite/forums/news/announcements/132642-epic-services-stability-update?p=132713#post132713

                                                                                            1. 1

                                                                                              I like this post for a lot of reasons but my absolute favorite is the way it’s phrased - not as something the author just really likes, but as a love letter. So wholesome. <3

                                                                                              1. 6

                                                                                                why would such a site have google analytics?

                                                                                                1. 1

                                                                                                  People often include Google Analytics without really thinking about the privacy implications, just because publishing blind is so annoying, I suppose. Is there a better alternative?

                                                                                                  1. 2

                                                                                                    Well, there’s Piwik. I find it quite nice, though I’ve heard Google Analytics is in a league of its own. Wouldn’t know since I don’t use it for these exact privacy concerns.

                                                                                                    1. 1

                                                                                                      You probably also punish yourself with google search ranking by not using google analytics too. bummer.

                                                                                                      1. 3

                                                                                                        Anecdotally, this seems to be the case, based on what I’ve seen playing with this on my own site.

                                                                                                        Currently, if you search for “Benjamin Pollack” on Google, my blog is (usually, because Google) about third on the page. About two years ago, I noticed that it had suddenly and without any warning plummeted to almost the bottom of page one. Sometimes, it wasn’t even on page one, which was even worse. While I generally don’t like doing SEO, I didn’t really like not having my blog rank highly, either, and the sudden drop didn’t make much sense to me. So, I spent some time poking.

                                                                                                        I knew I’d gotten some whining from Google about not looking good on mobile platforms and some other things, so I started there: gave the site a responsive design, turned on HTTPS, added a site map, improved favicon resolution, and some other stuff. But while those changes did help a bit on some other search engines, none of it really seemed to help much on Google. In frustration, I started looking through what I’d changed recently to see if I’d perhaps broken something that Google cared about.

                                                                                                        Turned out, I did: while I’d used Mint in practice to track my site’s usage, I’d accidentally left Google Analytics on as well for quite some time. I’d caught it shortly before the rankings drop, and removed it from my site. On a hunch, I added Google Analytics back in, and…presto, back up to roughly my old position.

                                                                                                        I don’t actually think this is malice. I think that Google absolutely factors in the traffic patterns they see when calculating search results. In the case of my blog, their being able to see people showing up there based on my name, and then staying on the site, probably helps, and likewise probably gave them insight they might otherwise lack that I tend to have a few key pages that get a lot of traffic.

                                                                                                        So, yeah: unfortunately, I do think you punish yourself with Google by not using analytics. For some, that might be okay; for others, perhaps not.

                                                                                                        1. 5

                                                                                                          I don’t actually think this is malice. I think that Google absolutely factors in the traffic patterns they see when calculating search results.

                                                                                                          Perhaps not active malice, but this is the exact sort of thing people mean when they say that algorithms encode values.

                                                                                                          It may not be active malice, but it still has malicious effect, and it’s still incumbent upon Google to clarify, fix, and/or restate their values accordingly.

                                                                                                          1. 1

                                                                                                            I knew I’d gotten some whining from Google about not looking good on mobile platforms and some other things, so I started there: gave the site a responsive design, turned on HTTPS, added a site map, improved favicon resolution, and some other stuff.

                                                                                                            in what form did you receive the “whining”? as someone with an irrational hatred of the web 2.0 “upgrades” that have been sweeping the web, making fonts huge, breaking sites under noscript or netsurf, etc., i have been wondering about the reasons for this. like is there some group of PR people going around making people feel bad about their “out-dated” websites, convincing them to use bootstrap?

                                                                                                            would motherfuckingwebsite.com live up to google’s standards of “responsiveness”?

                                                                                                          2. 3

                                                                                                            For what it’s worth: When I worked on Google Analytics a few years ago, that was definitely not true. And I’d bet that it’s still not true and will never be true. Search ranking is heavily silo’d from the rest of the company’s data, both due to regulatory reasons and out of principle. Just getting the Search Console data linked into GA was a big ordeal.

                                                                                                            Edit: Just did a quick search, here’s a more official statement from somebody more relevant: https://twitter.com/methode/status/598390635041673217, I’m pretty sure there were many other similar statements made by other people over the years too.

                                                                                                            1. 1

                                                                                                              thanks for that info.

                                                                                                              1. 1

                                                                                                                I understand if you can’t say anything but I’m wondering if there’s a different explanation for https://lobste.rs/s/3o3acu/decentralized_web#c_ltcs3n then?

                                                                                                                1. 2

                                                                                                                  I don’t work there anymore, so there’s no way for me to know for sure.

                                                                                                                  If I had to guess I’d say it’s a similar deal to the dozens/hundreds of “I spoke about X in private and now I’m seeing ads for X, so my phone/car/alexa/dishwasher is spying on me” stories. We’re really good at attributing things incorrectly.

                                                                                                                  The comment you link already mentioned various things that happened which likely ruined the ranking: Unresponsive design, no HTTPS, whatever else was wrong with it. The thing is, it takes time for ranking to get updated and propagate. Even if everything was fixed yesterday and the site got crawled today, it can take weeks or months for relative ranking in a specific keyword to improve. It’s very hard to attribute an improvement to any specific thing—all you can do is do your best across the board over the long term.

                                                                                                                  Some other possible things that might have gone wrong which the comment didn’t already mention: Maybe Mint was doing something bad, like loading slowly or insecurely or something else. Maybe some high-value incoming links disappeared. Maybe Google rolled out one of their big algorithm changes and the site was affected by some quirk of it (it happens fairly regularly, lots of rants about it out there).

                                                                                                                  1. 1

                                                                                                                    Hmm, thanks. That makes sense; I appreciate the explanation!

                                                                                                              2. [Comment removed by author]

                                                                                                                1. 2

                                                                                                                  they got rid of that along with the serifs on their logo

                                                                                                            2. 1

                                                                                                              what is so annoying about publishing blind? i am publishing this comment blind and it doesn’t bother me.

                                                                                                              isn’t it easier to do nothing than to do something and set up google analytics?

                                                                                                              1. 1

                                                                                                                Eh, well, there are actually up/down vote buttons on your comment, so the tracking was already there for you. Likes and claps and shit… people want to see who’s seeing them.

                                                                                                                1. 1

                                                                                                                  tracking is different from allowing voluntary participation.

                                                                                                          1. 2

                                                                                                            This was VERY interesting, but am I the only one who found it a little long and wordy? I think the ideas are fascinating but honestly I eventually bailed since it was too much to wade through.

                                                                                                            1. 12

                                                                                                              Docker has not been very good software for my team at all. We’ve managed to trigger non-stop kernel semaphore leaks as well as LVM filesystem bugs, some of them surviving multiple attempted fixes. And any attempt to figure it out yourself by reading their code is stymied by the weird Moby/Docker disconnect that seems to be there.

                                                                                                              If you are thinking about running Docker yourself rather than on someone else’s managed Docker solution, then beware: it’s very sensitive to the kernel you are running and the filesystem drivers you pair it with. As far as I can tell, if you aren’t running on Amazon’s or Google’s hosted Docker solutions you are in for a bad time. And only Amazon is actually running Docker; Google just sidestepped the whole issue by using their own container technology under the hood.

                                                                                                              The whole experience has soured me on Docker as a deployment solution. It’s wonderful for developers but a nightmare for whoever has to manage the Docker hosts.

                                                                                                              1. 11

                                                                                                                A few things that bit me:

                                                                                                                • containers don’t report real memory limits. Running top inside a container limited to 2GB will still report all 32GB of system memory. Scala/Java and other JVM apps aren’t aware of the limit, so you have to wrap the Java process with -Xmx memory-limit flags; otherwise your container just gets killed (you don’t even get an OutOfMemoryError) and marathon/k8s/whatever scheduler will start a new one. Eventually most runtimes (Python, Ruby, the JVM, etc.) will have built-in support for checking cgroup memory limits, but for now it’s a pain (see the sketch after this list).
                                                                                                                • Not enough tooling in the container. I don’t want to have to apt-get install netcat every time I rebuild a container just to see if my network connections work. I’ve heard good things about sysdig bridging this gap, though.
                                                                                                                • Tons of specific kernel flags (really only matters if you use Gentoo or compile your own kernel).
                                                                                                                • Weird network establishment issues. If you expose a port on the host, it becomes available there before it’s available to a linked container. So if you want to check whether something like a database is ready, you have to do the check from inside a container (sketched below).
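
                                                                                                                Since a couple of these are easier to show than tell, here are the sketches promised above. First the memory-limit workaround: a minimal sketch assuming cgroup v1 (the path differs under cgroup v2, and the heap split at the end is just an illustration):

                                                                                                                    import os

                                                                                                                    def container_memory_limit():
                                                                                                                        """Best-effort read of the cgroup (v1) memory limit, in bytes."""
                                                                                                                        total = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
                                                                                                                        try:
                                                                                                                            with open("/sys/fs/cgroup/memory/memory.limit_in_bytes") as f:
                                                                                                                                limit = int(f.read().strip())
                                                                                                                        except (OSError, ValueError):
                                                                                                                            return total  # no cgroup v1 memory controller visible
                                                                                                                        # An "unlimited" cgroup reports an absurdly large number; clamp to real RAM.
                                                                                                                        return min(limit, total)

                                                                                                                    heap_bytes = container_memory_limit() // 2  # e.g. feed this into -Xmx

                                                                                                                And the kind of in-container readiness check the network bullet forces on you (host and port are placeholders):

                                                                                                                    import socket
                                                                                                                    import time

                                                                                                                    def wait_for_port(host, port, timeout=60.0):
                                                                                                                        """Poll until a TCP connect succeeds (e.g. a linked database coming up)."""
                                                                                                                        deadline = time.monotonic() + timeout
                                                                                                                        while time.monotonic() < deadline:
                                                                                                                            try:
                                                                                                                                with socket.create_connection((host, port), timeout=2):
                                                                                                                                    return True
                                                                                                                            except OSError:
                                                                                                                                time.sleep(1)
                                                                                                                        return False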

                                                                                                                I’m sure there are more. Overall I actually do like Docker, despite some of the weirdness. However, I hate that we have k8s/marathon/nomad/swarm: there’s no single scheduler or scheduler format, and if you switch from one to another you’re redoing a lot of tooling, labels, and config to get all your services to connect together. Consul makes me want to stab myself. DC/OS uses up 2GB to 4GB of RAM just for the fucking scheduler on each node! k8s is a nightmare to configure without a team of at least three, and really ten. None of these solutions scales up from one node to a ton easily (minikube is a hack).

                                                                                                                Containers are nice. The scheduling systems around them can go die in a fire.

                                                                                                                1. 4
                                                                                                                  containers don’t report real memory limits

                                                                                                                  [X] we’ve been bitten by this. It also has implications for monitoring, so you get double the fun.

                                                                                                                  Not enough tooling in the container.

                                                                                                                  [X] we’ve established our own baseline container images with the tooling we need baked in.

                                                                                                                  Weird network establishment issues.

                                                                                                                  [X] container and k8s networking was, at least until a few months ago, a mess.

                                                                                                                  Consul makes me want to stab myself.

                                                                                                                  [X] we hacked our own replacement.

                                                                                                                  without a team of at least three and really ten.

                                                                                                                  [X] confirmed, we’re throwing money and people at it.

                                                                                                                  None of these solutions scale up from one node to a ton easily (minikube is a hack).

                                                                                                                  [X] I’ve given up on having a working developer environment without running it on a cloud provider. We can’t trust minikube to behave sufficiently like staging and production.

                                                                                                                  Containers are nice. The scheduling systems around them can go die in a fire.

                                                                                                                  I’m not even sure containers are that nice. The idea of containers is nice, but the execution is still half-baked.

                                                                                                                  1. 2

                                                                                                                    Why do you need so many people to operate Kubernetes well? And what does it enable that makes that kind of expenditure worth it?

                                                                                                                    1. 2

                                                                                                                      We’re developing a commercial turn-key, provider-independent platform based on it. Dog-fooding our own stuff has exposed many sharp bits and rough edges.

                                                                                                                      1. 1

                                                                                                                        Thanks.

                                                                                                                2. 7

                                                                                                                  I’ve had a positive experience with Triton. It doesn’t support all of Docker’s features, since, like Google, they opted to emulate Docker and apparently decided some features weren’t worth having; but the features Triton does support Just Work.

                                                                                                                  Of course, that means getting used to administering a different ecosystem.

                                                                                                                  1. 1

                                                                                                                    I love the idea of Triton, but having rolled it out at a previous job I can honestly say I would not recommend it. There is no high availability for many of the internal services by default (you need to roll your own replicas, etc.), and there is no routing across networks (static routes and additional interfaces in every instance are not a good solution). I love Joyent as a company, and their products have a great hypothetical appeal to me as a technologist, but there are just too many “buts” to justify spending the kind of money they charge for the solution they offer.

                                                                                                                    1. 2

                                                                                                                      I’m just curious how old the version of Triton was, because it has had software-defined networking for ~3 years or so. Was there a limitation with it?

                                                                                                                  2. 2

                                                                                                                    That stinks, but sounds more like a critique of the Linux kernel? Are you running anything custom?

                                                                                                                    Newer Docker defaults to overlayfs (no more aufs), and runs fine for us on stock Debian 9 kernels (without the extra modules package, or any dkms modules). This is both on bare metal and the AMIs Debian provides. Though we run on plain ext4, without LVM.
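
                                                                                                                If you’re not sure which storage driver a given host is actually using, the daemon will tell you. A quick sketch (assuming the docker CLI is installed and the daemon is running):

                                                                                                                    import subprocess

                                                                                                                    # Ask the local Docker daemon which storage driver it is using
                                                                                                                    # (e.g. "overlay2" vs the older "aufs" or "devicemapper").
                                                                                                                    driver = subprocess.run(
                                                                                                                        ["docker", "info", "--format", "{{.Driver}}"],
                                                                                                                        capture_output=True, text=True, check=True,
                                                                                                                    ).stdout.strip()
                                                                                                                    print(driver)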

                                                                                                                    1. 4

                                                                                                                      My experience is purely anecdotal so shouldn’t be taken as more than that.

                                                                                                                      However, we aren’t on anything custom: we run the latest CentOS kernels everywhere and keep them patched. The bugs aren’t in the Linux kernel; it’s the way Docker sets up and manages the cgroups. My early experimentation with other container runtimes suggests they don’t have the same problems.

                                                                                                                      Just searching for the word “hang” in the moby project shows 171 open bugs and 521 closed, and from a cursory examination most of them look very similar to our issues. For us they tend to manifest as a deadlock in the Docker engine, which then causes the managed containers to go unhealthy and start a reboot loop. In the past we’ve had to run cronjobs that periodically kill the Docker daemons to keep things up and running (roughly the watchdog sketched below).
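
                                                                                                                      For the curious, the watchdog was nothing clever; roughly this shape (a sketch: the timeout is arbitrary and the restart command assumes systemd):

                                                                                                                          import subprocess

                                                                                                                          def docker_responsive(timeout=30):
                                                                                                                              """A hung `docker info` is how the engine deadlock shows up for us."""
                                                                                                                              try:
                                                                                                                                  subprocess.run(["docker", "info"], capture_output=True,
                                                                                                                                                 timeout=timeout, check=True)
                                                                                                                                  return True
                                                                                                                              except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
                                                                                                                                  return False

                                                                                                                          # Run from cron: bounce the daemon when it stops answering.
                                                                                                                          if not docker_responsive():
                                                                                                                              subprocess.run(["systemctl", "restart", "docker"], check=False)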

                                                                                                                      1. 2

                                                                                                                        Maybe there are bugs in the way Docker sets up cgroups too, but you mentioned kernel semaphore leaks and LVM bugs, which seem to be squarely in the kernel? That tracks for me: when systemd started exercising all this Linux-kernel-specific machinery, it was the first really big consumer, so it also exposed lots of kernel bugs.