1. 1

    We’re back to the state of affairs before the Apple / Google collaboration on WebKit fell apart. Same number of web engines under development.

    1. 5

      No, it’s less, right? I count WebKit (Apple/Google), Gecko (Mozilla), EdgeHTML (Microsoft) and Presto (Opera). Presto was technically switched out for WebKit before the Blink fork but really they happened at the same time - within a month or two IIRC. Close enough that Opera announced they would switch to Blink instead before almost any sort of work had been done on the switch.

      Now all we’ve got is Gecko, WebKit and Blink. And it’s worse than just those numbers would imply because market share these days is more imbalanced in favor of a Blink monopoly ([citation needed]).

      1. 4

        Same number of engines but with fewer origins - all except Gecko now share the WebKit origin and inherit the architecture choices made there.

      1. 4

        I would be less sad if Microsoft had chosen Gecko/Servo here but I’m not too sad all the same. I don’t (yet) understand what rendering engine/JavaScript VM diversity really gave web developers. I can get behind browser diversity but it seems like what’s beneath the surface doesn’t matter anymore. I’d point to iOS as an example of this—Safari vs. Chrome is a worthwhile debate but it’s all WKWebView under the hood, and because of that iOS users can all benefit from the performance/battery life and site compatibility.

        1. 4

          What plurality amongst engines gives is insurance that the web will be developed against actual standardized behavior rather than just the implemented behavior of the majority engine.

          There are lots of examples of, e.g., optimizations that assume all browsers work like the WebKit-derived ones do, but such optimizations may not help at all in, say, Gecko, or may even make things worse there.

          1. 2

            There are 2 ways to address this: having even more browsers with substantial market share, or having just one open source rendering engine that is used by all.

          2. 2

            And all sites running anywhere on iOS as a consequence suffer from WebKit’s poor and generally laggard support for newer standards.

          1. 10

            One major problem with C (and I like C) is that undefined behavior can come from just about any direction. I just recently learned (on a mailing list not normally associated with linking) that linking (you know, with the ld program) can invoke undefined behavior, even if the compilation phase did not invoke any undefined behavior. Even the C standard is ambiguous here, with different people interpreting the text differently (with respect to the linking issue).
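
            I don’t know if it was this exact case, but one classic way to get there is giving the same object incompatible types in two translation units. Each file compiles cleanly on its own; it is only the linked program whose behavior is undefined (C11 6.2.7p2), and no diagnostic is required. A minimal sketch:

              /* file1.c -- compiles without complaint */
              extern int counter;                   /* declared as int here */
              int next(void) { return ++counter; }

              /* file2.c -- also compiles without complaint */
              #include <stdio.h>
              long counter = 0;                     /* but defined as long here */
              int next(void);
              int main(void) { printf("%d\n", next()); return 0; }

              /* cc file1.c file2.c && ./a.out
                 Neither the compiler nor the linker is required to say a word,
                 yet the behavior of the resulting program is undefined; next()
                 may read only part of the object, or misbehave in other ways. */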

            1. 9

              This is a good point. I’m new to C and while I’ve picked up the basics of the language, I feel like actually learning to write good C is impossible. For example, minefields like the str* functions. I know there are safe alternatives to those, but what other dangerous stuff is left over from that era that I might accidentally use? The advice in this article is great - avoid undefined behavior at all cost - but I have no idea how to actually follow through with it, especially when I had no idea that e.g. int promotion was even a thing. I feel like I have to be an absolute expert in obscure semantics of the language in order to write even vaguely safe C because undefined behavior can pop out of virtually any situation.
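
              For instance, as far as I understand it, something as innocent-looking as this is undefined on the usual platform where short is 16 bits and int is 32 bits:

                #include <stdio.h>

                int main(void) {
                    unsigned short a = 65535, b = 65535;
                    /* Both operands are promoted to (signed) int before the multiply,
                       so 65535 * 65535 overflows int: undefined behavior, even though
                       every type in sight is unsigned. */
                    unsigned int c = a * b;
                    printf("%u\n", c);
                    return 0;
                }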

              Has anyone found a good solution to this problem?

              1. 7

                The definitive list is Annex J of the C99/C11 standard, but fair warning—the language is tortuous. For instance, the very first bullet point under Annex J, which lists all the undefined behaviors:

                | A ‘‘shall’’ or ‘‘shall not’’ requirement that appears outside of a constraint is violated (clause 4).

                where “clause 4” seems to actually reference section 4, paragraph 2:

                | If a ‘‘shall’’ or ‘‘shall not’’ requirement that appears outside of a constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this International Standard by the words ‘‘undefined behavior’’ or by the omission of any explicit definition of behavior. There is no difference in emphasis among these three; they all describe ‘‘behavior that is undefined’’.

                But even though English is my first (and tragically, only) language, I’m still not sure how to interpret “a ‘shall’ or ‘shall not’ requirement that appears outside of a constraint is violated.” What is that even saying? And yes, any situation not covered in the Standard becomes undefined behavior, pretty much by definition.

                And a word of warning—Annex J takes 13 pages to list undefined behavior.

                Unfortunately, I don’t know of any real advice. 30 years of C programming and I’m still surprised at what is and isn’t technically allowed.

                1. 5

                  At some point I’d suggest just reading the C standard. For some reason this never seems to occur to many people. The parts describing the language aren’t that many pages, and then you’d know about integer conversions and promotions, etc. I mean, I can understand not immediately memorizing every nuance, but I think a lot of “nobody ever told me that!” could be avoided by simply reading the original source.

                  I guess the other approach is just to pay more attention to what you’re doing. You write a line of code. You think it does something. What is the basis for that belief?

                  1. 4

                    You are quite possibly further on the road to understanding UB in C/C++ than something like 97% of people who code in those languages for a living. I personally started to realize there’s a problem only after 10 years writing C++ (5 of them professionally). (Fortunately for me, I moved to Go soon afterwards.) Sadly, there are tons of people who don’t have the slightest idea about UB, and casually dismiss, ridicule, or even aggressively reject any explanations (maybe strengthened by a subconscious fear of having their life’s work undermined). Hmm; I just now thought that it may in some ways resemble the situation with small-particle pollution (and global warming), in that it has a problem of visibility.

                    1. 2

                      I second tedu’s suggestion to read the standard.

                      As for the unsafe functions, I would highly recommend you get in the habit of using OpenBSD man pages. They do a decent job of pointing out caveats and antipatterns while often referring to a better solution. mdoc.su/o/ is a handy shortcut to get there if you’re not sitting on an OpenBSD shell. For example: mdoc.su/o/malloc

                      Honestly, C is not that big of a language, and the minefield you mention as an example is but a handful of functions.
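
                      To make it concrete, the usual trap and the idiom those man pages steer you toward look roughly like this (strlcpy is an OpenBSD extension, so it is commented out here; on Linux it lives in libbsd):

                        #include <stdio.h>
                        #include <string.h>

                        int main(void) {
                            char dst[8];
                            const char *src = "definitely longer than eight bytes";

                            /* strcpy(dst, src);  classic overflow: writes past the end of dst */

                            /* strncpy never overflows, but it silently leaves dst unterminated
                               when src is too long, so you must terminate it yourself: */
                            strncpy(dst, src, sizeof dst - 1);
                            dst[sizeof dst - 1] = '\0';

                            /* strlcpy always terminates and returns the length it tried to
                               copy, so truncation is trivial to detect:
                            if (strlcpy(dst, src, sizeof dst) >= sizeof dst)
                                puts("truncated");
                            */

                            printf("%s\n", dst);   /* prints the truncated "definit" */
                            return 0;
                        }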

                      If books are your thing, I’d also recommend TAoSSA ch6 which was published as a freebie. URLs change but remember the filename and you’ll find it on various hosts: dowd_ch06.pdf.

                      You might learn a fact or three from Expert C Programming, although if you’ve been reading the standard, there’s not going to be that much to pick up from it.

                  1. 1

                    I don’t understand why the author suggests that distributions often don’t update ImageMagick. Do they not know that security fixes get backported to stable repositories? I find that surprising given that they’re clearly quite competent.

                    I’m also curious if GraphicsMagick is affected. I looked on their news page, but the most recent release doesn’t mention the CVE number (I just did a Find). But maybe it’s just that no one looked.

                    1. 4

                      I don’t know if the author is being sarcastic, but it fits COBOL to a T including the fixed point calculations, and it is even listed. Can OP confirm?

                      1. 3

                        The buttons at the bottom are clickable.

                        1. 4

                          The post is an extreme case of burying the lede.

                          1. 1

                            Ah, missed that :(

                        1. 2

                            Hopefully this breaks every “security” middlebox. I wonder what other kinds of implications this has; I’ve heard that ISPs do funny stuff with UDP packets because they are low priority.

                          1. 7

                            It won’t break middleboxes. Middleboxes were one of the driving forces behind building on top of UDP; I don’t know the details but TCP has some problems that should be fixed but can’t be because any change to TCP would break middleboxes too badly, so TCP is stuck in the past. Ditto to any (new) protocol directly on top of IP that isn’t TCP or UDP.

                            On the bright side, QUIC fixes these issues. QUIC has had a crypto layer since day 1 with the sole, explicit purpose of preventing middleboxes from working.

                          1. 2

                            When I first started not just tinkering with computers but actually coding (around middle school, which for context was ~2010 or so for me) I was definitely an enthusiast. And because I always wanted to build cathedrals of code and have grand fancy architectures that I didn’t need, just for fun (and use new technology to boot), I never actually got anything done.

                            It’s not the whole story but a big reason that I’m not stuck there anymore is because over time my definition of quality software changed. I believe in and more importantly understand the Unix philosophy now in a way I didn’t then. So even though I think I still have a big element of enthusiast, I’m enthusiastic for code that is really, really aggressively simple. And that’s much easier to build. The article frames the enthusiast as writing code for the sake of the art and being beautiful, and I love that metaphor, because beauty is in the eye of the beholder.

                            1. 3

                              Funny it doesn’t mention anything about the security / risk of having HT enabled, only performance.

                              1. 3

                                I thought the same thing, but it’s from 2017, before we were all collectively freaked out about this stuff. The title should be edited.

                              1. 2

                                https://www.eff.org/deeplinks/2018/10/new-exemptions-dmca-section-1201-are-welcome-dont-go-far-enough has a lot more details on the restrictions that still exist under the new Copyright Office rules.

                                1. 3

                                  Just want to point out (since I couldn’t find it on the site at first) “the license is in development” while it’s in beta, but the plan is not to make this free software. AFAICT the community edition will be Creative Commons BY-NC-SA. So that’s a bummer.

                                  1. 1

                                    What exactly does it mean that the community edition is BY-NC-SA? That you have to attribute and publicly share any code that you write with Alan?

                                    1. 1

                                          The BY bit means you have to provide attribution if you make a derivative work. The SA means that derivative works have to be under the same license, similar to the GPL. NC means that commercial use is forbidden (the NC stands for non-commercial), which violates freedom 0 and makes this not free software. They don’t specify what version they’d use but presumably it would be the latest, which would make https://creativecommons.org/licenses/by-nc-sa/4.0/ the license in question.

                                      1. 1

                                        What does derivative work mean here? An app you build using their framework, or just making changes to the framework itself?

                                        1. 1

                                          I don’t know. There may be an answer out there, but Creative Commons is not designed for software. If we were discussing, say, the GPL, there might be a clearer answer. I have a very hazy guess, but it’s based on no research and overall I’m so uninformed about your question that I don’t want to speculate in public :P

                                          I don’t know how prevalent software using CC licenses is, but it’s possible no one would know and we’d have to wait for a court to decide.

                                  1. 20

                                    “(For the record, I’m pretty close to just biting the bullet and dropping $1800 on a Purism laptop, which meets all my requirements except the fact that I’m a frugal guy…)”

                                      One more thing to consider: vote with your wallet for ethical companies. One of the reasons all the laptop manufacturers are scheming companies pulling all kinds of bloatware, quality, and security crap is that most people buy their stuff. I try where possible to buy from suppliers that act ethically toward customers and/or employees even if it costs a reasonable premium. A recent example was getting a good printer at Costco instead of Amazon where the price was similar. I only know of two suppliers of laptops that try to ensure user freedom and/or security: MiniFree and Purism. For desktops, there’s Raptor but that’s not x86.

                                    Just tossing the philosophy angle out there in case anyone forgets we as consumers contribute a bit to what kind of hardware and practices we’ll see in the future every time we buy things. The user-controllable and privacy-focused suppliers often disappear without enough buyers.

                                    1. 10

                                      One more thing to consider: vote with your wallet for ethical companies

                                      Don’t forget the ethics of the manufacturing and supply chain of the hardware itself. I would imagine that the less well-known a Chinese-manufactured brand is the more likely it is to be a complete black box/hole in terms of the working conditions of the people who put the thing together, who made the parts that got assembled, back to the people who dug the original minerals out of the ground.

                                      I honestly don’t know who (if anyone) is doing well here - or even if there’s enough information to make a judgement or comparison. I think a while back there was some attention to Apple’s supply chain, I think mostly in the context of the iPhone and suicides at Foxconn, but I don’t know where that stands now - no idea if it got better, or worse.

                                      1. 6

                                        Apple has been doing a lot of work lately on supplier transparency and working conditions, including this year publishing a complete list of their suppliers, which is pretty unusual. https://www.apple.com/supplier-responsibility/

                                        1. 1

                                          Technically their list of suppliers covers the top 98% of their suppliers, so not a complete list, but still a very good thing to have.

                                          1. 1

                                            Most other large public companies do that too, just not getting the pat on the back as much as Apple.

                                            http://h20195.www2.hp.com/v2/getpdf.aspx/c03728062.pdf

                                          2. 2

                                            You both brought up a good concern and followed up with the reason I didn’t include it. I have no idea who would be doing well on those metrics. I think cheap, non-CPU components, boards, assembly and so on are typically done in factories of low-wage workers in China, Malaysia, Singapore, etc. When looking at this, the advice I gave was to just move more stuff to Singapore or Malaysia to counter the Chinese threat. Then, just make the wages and working conditions a bit better than they are. If both are already minimal, the workers would probably appreciate their job if they got a little more money, air conditioning, some ergonomic stuff, breaks, vacations, etc. At their wages and high volume, I doubt it would add a lot of cost to the parts.

                                          3. 9

                                            Funnily enough

                                            The Libreboot project recommends avoiding all hardware sold by Purism.

                                            1. 5

                                              Yeah, that is funny. I can’t knock them for not supporting backdoored hardware, though. Of the many principles, standing by that one makes more sense than most.

                                              1. 1

                                                Correct me if I’m wrong, but I thought purism figured out how to shut down ME with an exploit? Is that not in their production machines?

                                              2. 3

                                                I agree, which is why I bought a Purism laptop about a year ago. Unfortunately, it fell and the screen shattered about 5 months after I got it, in January of this year. Despite support (which was very friendly and responded quickly) saying they would look into it and have an answer soon several times, Purism was unable to tell me if it was possible for them to replace my laptop screen, even for a price, in 6 months. (This while all the time they were posting about progress on their phone project.) Eventually I simply gave up and bought from System76, which I’ve been very satisfied with. I know they’re not perfect, but at least I didn’t pay for a Windows license. In addition my System76 laptop just feels higher quality - my Librem 15 always felt like it wasn’t held together SUPER well, though I can’t place why, and in particular the keyboard was highly affected by how tight the bottom panel screws were (to the point where I carried screwdrivers with me so I could adjust them if need be).

                                                If you want to buy from Purism, I really do wish you the best. I truly hope they succeed. I’m not saying “don’t buy from Purism”; depending on your use case you may not find these issues to be a big deal. But I want to make sure you know what you’re getting into when buying from a very new company like Purism.

                                                1. 1

                                                  Great points! That support sounds like it sucks, not even giving you a definitive answer. Also, thanks for telling me about System76. With what Wikipedia said, that looks like another good choice for voting with your wallet.

                                                2. 2

                                                  Raptor but that’s not x86

                                                  Looks like it uses POWER, which surprised me because I thought that people generally agreed that x86 was better. (Consoles don’t use it anymore, Apple doesn’t use it, etc)

                                                    Are the CPUs that Raptor is shipping even viable? They seem to not have any information other than “2x 14nm 4 core processors” listed on their site.

                                                  1. 4

                                                    The FAQ will answer your questions. The POWER9 CPUs they use are badass compared to what’s in consoles, the PPCs Apple had, and so on. They go head to head with top stuff from Intel in the enterprise market, mainly sold for outrageous prices. Raptor is the first time they’re available in $5000 or below desktops. Main goal is hardware that reduces risk of attack while still performing well.

                                                1. 4

                                                  Go to System Preferences > Network > Advanced > DNS, add two entries to DNS Servers for 1.1.1.1 and 1.0.0.1 and remove any other server

                                                  Try doing this on any network that I maintain and you’ll find your DNS queries are being dropped. Allowing outbound traffic to any DNS server is not recommended. Well, allowing unrestricted outbound traffic is not recommended. It’s 2018. Don’t trust anyone or any device. Only allow out the traffic you need out.

                                                  1. 2

                                                    Honestly I think it’s bad advice just to tell people to “hey use this DNS server instead” anyway. It actually doesn’t protect your privacy by doing so, because anyone with tcpdump on a host between you and that DNS server can still record what you are looking up.

                                                    1. 2

                                                      If you want privacy you should probably be using a VPN on foreign networks.

                                                      Restrictive networks need to become the new norm now. Allowing strangers on your network to spew DNS is asking for problems because this is the type of crap that infected machines do. I don’t need to permit infected gear on my networks sending thousands of pps of DNS traffic all because some people might have taken bad advice and hardcoded DNS servers on their workstations/laptops. Catering to people taking bad advice on the internet should no longer be acceptable.

                                                      Sane traffic allowed out:

                                                      • HTTP
                                                      • HTTPS
                                                      • IPSEC
                                                      • OpenVPN

                                                      Nothing else. You use the internal NTP, DNS servers (which do use dnscrypt for their upstream), etc.

                                                      1. 4

                                                        If you want privacy you should probably be using a VPN on foreign networks.

                                                        This is also advice we need to be careful with, because it’s usually really difficult to tell whether public VPN services are run by bad actors or not. You can never remove the need to trust a network altogether with a VPN, you just shift that need onto a different network. The average VPN user likely does not realise that.

                                                        Restrictive networks need to become the new norm now.

                                                        There is a time and a place for restrictive networks.

                                                        1. 2

                                                          Nothing else? What about SSH? SMTPS? IMAPS and POP3S? Are you suggesting that checking your email should be disallowed on most networks?

                                                          1. 1

                                                            Yes. None of those legacy mail protocols support 2FA, and they are frequently attacked by botnets because it helps them evade IP rate limits while still executing their dictionary attacks.

                                                            End users don’t need SSH. Those that do should be smart enough to have a VPN.

                                                          2. 1

                                                            So infected machines tunnel over HTTP(S). Now you are relying on an HTTP specific firewall?

                                                            1. 1

                                                              That’s fine. They can be infected and backdoored, but they won’t be spewing thousands of PPS of UDP and it’s very easy to deal with bad actors attempting to spam SYNs. It’s rather hard to DDOS TCP in comparison

                                                              1. 1

                                                                Setup two DNS servers; one inside the firewall the other outside. Firewall rules only permit DNS traffic between inside and outside DNS server. Intranet nodes can only query the inside DNS. Internet nodes can only spam the outside DNS.

                                                                Blacklist IPs that spam the outside DNS. If DDoS is active, only serve requests from the Intranet, rely on the cache. Alternatively, only accept requests/responses from whitelisted DNS servers.

                                                      1. 2

                                                        Can someone ELI5 why Firefox is not to be trusted anymore?

                                                        1. 4

                                                          They’ve done some questionable things. They did this weird tie-in with Mr. Robot or some TV show, where they auto-installed a plugin (but disabled, thankfully) to, like, everyone as part of an update. It wasn’t enabled by default if I remember right, but it got installed everywhere.

                                                          Their income stream, according to Wikipedia, is donations and “search royalties”. But really their entire revenue stream comes directly from Google. Also, in 2012 they failed an IRS audit and had to pay 1.5 million dollars. Hopefully they learned their lesson; time will tell.

                                                          They bought pocket and said it would be open sourced, but it’s been over a year now, and so far only the FF plugin is OSS.

                                                          1. 4

                                                            Some of this isn’t true.

                                                            1. Mr. Robot was like a promotion, but not a paid thing, like an ad. Someone thought this was a good idea and managed to bypass code review. This won’t happen again.
                                                            2. Money comes from a variety of search providers, depending on locale. Money goes directly into the people, the engineers, the product. There are no stakeholders we need to make happy. No corporations we’ve got to answer to. Search providers come to us to get our users.
                                                            3. Pocket. Still not everything, but much more than the add-on: https://github.com/Pocket?tab=repositories
                                                            1. 3
                                                              1. OK, fair enough, but I never used the word “ad”. Glad it won’t happen again.

                                                              2. When like 80 or 90% of their funding is directly from Google… It at the very least raises questions. So I wouldn’t say not true, perhaps I over-simplified, and fair enough.

                                                              3. YAY! Good to know. I hadn’t checked in a while, happy to be wrong here. Hopefully this will continue.

                                                              But overall thank you for elaborating. I was trying to keep it simple, but I don’t disagree with anything you said here. Also, I still use FF as my default browser. It’s the best of the options.

                                                            2. 4

                                                              But really their entire revenue stream comes directly from Google.

                                                                To put this part another way: the majority of their income comes from auctioning off being the default search bar target. That happens to be worth somewhere in the hundreds of millions of dollars to Google, but Microsoft also bid (as did other search engines in other parts of the world; IIRC the choice is localised) - Google just bid higher. There’s a meta-level criticism where Mozilla can’t afford to challenge /all/ the possible corporate bidders for that search placement, but they aren’t directly beholden to Google in the way the previous poster suggests.

                                                              1. 1

                                                                  Agreed, except it’s well over half of their income; I think something like 80 or 90% of their funding comes from Google.

                                                                1. 2

                                                                  And if they diversify and, say, sell out tiles on the new tab screen? Or integrate read-it-later services? That also doesn’t fly as recent history has shown.

                                                                    People ask Mozilla to not sell ads, not take money for search engine integration, not partner with media properties, and still keep up their investment in development of the platform.

                                                                  People don’t leave any explanation of how they can do that while also rejecting all their means of making money.

                                                                  1. 2

                                                                      Agreed. I assume this wasn’t an attack on me personally, just a comment on the sad state of FF’s diversification woes. They definitely need diversification. I don’t have any awesome suggestions here, except I think they need to diversify. Having all your income controlled by one source is almost always a terrible idea long-term.

                                                                    I don’t have problems, personally, with their selling of search integration, I have problems with Google essentially being their only income stream. I think it’s great they are trying to diversify, and I like that they do search integration by region/area, so at least it’s not 100% Google. I hope they continue testing the waters and finding new ways to diversify. I’m sure some will be mistakes, but hopefully with time, they can get Google(or anyone else) down around the 40-50% range.

                                                                  2. 1

                                                                    That’s what “majority of their income” means. Or at least that’s what I intended it to mean!

                                                              2. 2

                                                                There’s also the fact that they are based in the USA, which means following American laws. Those laws are not very protective of personal data, and even less so if you are not an American citizen.

                                                                Moreover, they are testing in Nightly using Cloudflare DNS as the resolver even if the operating system is configured to use another. A DNS resolver sees every domain name you resolve, which means it knows which websites you visit. You should be able to disable it in about:config, but putting it there rather than in the Firefox preferences menu is a clear indication that it is not meant to be easily done.

                                                                You can also add the fact that it is not easy to self-host the data stored by your browser. And can I be sure that data is not sold, when their first financial supporter is Google, which bases its revenue on data?

                                                                1. 3

                                                                  Mozilla does not have your personal data. Whatever they have for sync is encrypted in such a way that it cannot be tied to an account or decrypted.

                                                                  1. 1

                                                                    They have my sync data; sync data is personal data, so they have my personal data. How do they encrypt it? Do you have any links about how they manage it? In which country is it stored? What is the law about it?

                                                                    1. 4

                                                                      Mozilla has your encrypted sync data. They do not have the key to decrypt that data. Your key never leaves your computer. All data is encrypted and decrypted locally in Firefox with a key that only you have.

                                                                      Your data is encrypted with very strong crypto and the encryption key is derived from your password with a very strong key derivation algorithm. All locally.

                                                                      The encrypted data is copied to and from Mozilla’s servers. The servers are dumb and do not actually know or do crypto. They just store blobs. The servers are in the USA and on AWS.

                                                                      The worst that can happen is that Mozilla has to hand over data to some three letter organization, which can then run their supercomputer for a 1000 years to brute force the decryption of your data. Firefox Sync is designed with this scenario in mind.

                                                                      This of course assuming that your password is not ‘hunter2’.
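
                                                                      To make the “derived locally” part concrete: this is not Mozilla’s actual Sync code (Sync uses its own key derivation and key-wrapping scheme), but the general shape is the standard pattern below, where the password is turned into a key on the client so that only ciphertext ever reaches the server (sketch using OpenSSL, link with -lcrypto):

                                                                        #include <openssl/evp.h>
                                                                        #include <stdio.h>
                                                                        #include <string.h>

                                                                        int main(void) {
                                                                            const char *password = "correct horse battery staple"; /* never leaves this machine */
                                                                            unsigned char salt[16] = {0};  /* in practice: random, stored next to the blobs */
                                                                            unsigned char key[32];         /* 256-bit key, derived locally */

                                                                            /* Expensive derivation is what makes brute-forcing the stored blobs costly. */
                                                                            if (!PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                                                                                                   salt, sizeof salt, 100000,
                                                                                                   EVP_sha256(), sizeof key, key))
                                                                                return 1;

                                                                            /* The key stays here; only data encrypted with it (e.g. AES-GCM) would be
                                                                               uploaded, and the server just stores blobs it cannot read. */
                                                                            printf("derived a %zu-byte key locally\n", sizeof key);
                                                                            return 0;
                                                                        }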

                                                                      It is starting to sound like you went through this effort because you don’t trust Mozilla with your data. That is totally fair, but I think that if you had understood the architecture a bit better, you may actually not have decided to self host. This is all put together really well, and with privacy and data breaches in mind. IMO there is very little reason to self host.

                                                                      1. 1

                                                                        “The worst that can happen is that Mozilla has to hand over data to some three letter organization, which can then run their supercomputer for a 1000 years to brute force the decryption of your data. Firefox Sync is designed with this scenario in mind.”

                                                                        That’s not the worst by far. The Core Secrets leak indicated that suppliers were being compelled, via the FBI, to put in backdoors. So, they’d either pay/force a developer to insert a weakness that looks accidental, push malware in during an update, or (most likely) just use a browser sploit on the target.

                                                                        1. 1

                                                                          In all of those cases, it’s game over for your browser data regardless of whether you use Firefox Sync, Mozilla-hosted or otherwise.

                                                                          1. 1

                                                                            That’s true! Unless they rewrite it all in Rust with overflow checking on. And in a form that an info-flow analyzer can check for leaks. ;)

                                                                        2. 1

                                                                          As you said, it’s totally fair to not trust Mozilla with data. As part of that, it should always be possible/supported to “self-host”, as a means to keep that as an option. Enough said to that point.

                                                                          As to “understanding the architecture”, it also comes with appreciating the business practices, ethics, and means to work to the privacy laws of a given jurisdiction. This isn’t being conveyed well by any of the major players, so with the minor ones having to cater to those “big guys”, it’s no surprise that mistrust would be present here.

                                                                        3. 2

                                                                          How do they encrypt it?

                                                                          On the client, of course. (Even Chrome does this the same way.) Firefox is open source, you can find out yourself how exactly everything is done. I found this keys module, if you really care, you can find where the encrypt operation is invoked and what data is there, etc.

                                                                          1. 2

                                                                            You don’t have to give it to them. Firefox sync is totally optional, I for one don’t use it.

                                                                            Country: almost certainly the USA. Encryption: looks like this is what they use: https://wiki.mozilla.org/Labs/Weave/Developer/Crypto

                                                                        4. 2

                                                                          The move to Cloudflare as the DNS-over-HTTPS resolver is annoying enough to make me consider other browsers.

                                                                          You can also add the fact that it is not easy to self-host the data stored by your browser. And can I be sure that data is not sold, when their first financial supporter is Google, which bases its revenue on data?

                                                                          Please, no FUD. :)

                                                                          1. 3

                                                                            move to Cloudflare

                                                                            It’s an experiment, not a permanent “move”. Right now you can manually set your own resolver and enable-disable DoH in about:config.

                                                                      1. 37

                                                                        What about dependencies? If you use python or ruby you’re going to have to install them on the server.

                                                                        How much of the appeal of containerization can be boiled directly down to Python/Ruby being catastrophically bad at handling deploying an application and all its dependencies together?

                                                                        1. 6

                                                                          I feel like this is an underrated point: compiling something down to a static binary and just plopping it on a server seems pretty straightforward. The arguments about upgrades and security and whatnot fail for source-based packages anyway (looking at you, npm).

                                                                          1. 10

                                                                            It doesn’t really need to be a static binary; if you have a self-contained tarball the extra step of tar xzf really isn’t so bad. It just needs to not be the mess of bundler/virtualenv/whatever.

                                                                            1. 1

                                                                              mess of bundler/virtualenv/whatever

                                                                              virtualenv though is all about producing a self-contained directory that you can make a tarball of??

                                                                              1. 4

                                                                                Kind of. It has to be untarred to a directory with precisely the same name or it won’t work. And hilariously enough, the --relocatable flag just plain doesn’t work.

                                                                                1. 2

                                                                                  The thing that trips me up is that it requires a shell to work. I end up fighting with systemd to “activate” the VirtualEnv because I can’t make source bin/activate work inside a bash -c invocation, or I can’t figure out if it’s in the right working directory, or something seemingly mundane like that.

                                                                                  And god forbid I should ever forget to activate it and Pip spews stuff all over my system. Then I have no idea what I can clean up and what’s depended on by something else/managed by dpkg/etc.

                                                                                  1. 4

                                                                                    No, you don’t need to activate the environment, this is a misconception I also had before. Instead, you can simply call venv/bin/python script.py or venv/bin/pip install foo which is what I’m doing now.

                                                                                  2. 1

                                                                                    This is only half of the story because you still need a recent/compatible python interpreter on the target server.

                                                                                2. 8

                                                                                  This is 90% of what I like about working with golang.

                                                                                  1. 1

                                                                                    Sorry, I’m a little lost on what you’re saying about source-based packages. Can you expand?

                                                                                    1. 2

                                                                                       The arguments I’ve seen against static linking are things like: you’ll get security updates etc. through shared dynamic libs, or the size will be gigantic because you’re including all your dependencies in the binary. But with node_modules or bundler etc. you end up with the exact same thing anyway.

                                                                                      Not digging on that mode, just that it has the same downsides of static linking, without the ease of deployment upsides.

                                                                                      EDIT: full disclosure I’m a devops newb, and would much prefer software never left my development machine :D

                                                                                      1. 3

                                                                                        and would much prefer software never left my development machine

                                                                                        Oh god that would be great.

                                                                                  2. 2

                                                                                    It was most of the reason we started using containers at work a couple of years back.

                                                                                    1. 2

                                                                                      Working with large C++ services (for example in image processing with OpenCV/FFmpeg/…) is also a pain in the ass for dynamic libraries dependencies. Then you start to fight with packages versions and each time you want to upgrade anything you’re in a constant struggle.

                                                                                      1. 1

                                                                                        FFmpeg

                                                                                        And if you’re unlucky and your distro is affected by the libav fiasco, good luck.

                                                                                      2. 2

                                                                                        Yeah, dependency locking hasn’t been a (popular) thing in the Python world until pipenv, but honestly I never had any problems with… any language package manager.

                                                                                        I guess some of the appeal can be boiled down to depending on system-level libraries like imagemagick and whatnot.

                                                                                        1. 3

                                                                                          Dependency locking really isn’t a sufficient solution. Firstly, you almost certainly don’t want your production machines all going out and grabbing their dependencies from the internet. And second, as soon as you use e.g. a python module with a C extension you need to pull in all sorts of development tooling that can’t even be expressed in the pipfile or whatever it is.

                                                                                        2. 1

                                                                                          you can add node.js to that list

                                                                                          1. 1

                                                                                            A Node.js app, including node_modules, can be tarred up locally, transferred to a server, and untarred, and it will generally work fine no matter where you put it (assuming the Node version on the server is close enough to what you’re using locally). Node/npm does what VirtualEnv does, but by default. (Note if you have native modules you’ll need to npm rebuild but that’s pretty easy too… usually.)

                                                                                            I will freely admit that npm has other problems, but I think this aspect is actually a strength. Personally I just npm install -g my deployments which is also pretty nice, everything is self-contained except for a symlink in /usr/bin. I can certainly understand not wanting to do that in a more formal production environment but for just my personal server it usually works great.

                                                                                          2. 1

                                                                                            Absolutely but it’s not just Ruby/Python. Custom RPM/DEB packages are ridiculously obtuse and difficult to build and distribute. fpm is the only tool that makes it possible. Dockerfiles and images are a breeze by comparison.

                                                                                          1. 39

                                                                                              This is the most sane article on modern tools I’ve read in ages. Neglecting complexity is the industry-wide disease, and the article explicitly talks about it.

                                                                                            1. 25

                                                                                              Our industry would be a lot better if there were more stories of people running into real issues and having to scale–not self-inflicted ones. I don’t see enough of “Here’s where our response times and query times blew up, here’s why we couldn’t just buy a bigger machine, here’s the service architecture we had and why that was unchangeable, here’s where our hand was forced.”

                                                                                              There’s not enough clean data and anecdata to properly educate the next generation, and instead we end up with weird stories like Hadoop being beaten by basic shell knowledge or the famous McIlroy/Knuth example.

                                                                                              Cynically, one might note that a common theme is that fewer engineers with more knowledge and better analysis outperform large teams of less experienced engineers with shinier-but-less-understood tooling and problems, but then we start running into the deep soul-searching of how our industry career paths work and how we’re compensated and how “good” we all really are at our jobs…and that just leads to madness.

                                                                                              1. 9

                                                                                                Knuth example:

                                                                                                “What people remember about his review is that McIlroy wrote a six-command shell pipeline that was a complete (and bug-free) replacement for Knuth’s 10+ pages of Pascal. Here’s the script, with each command given its own line:”

                                                                                                  Although I get and agree with the article’s gist, the second program is a super apples-to-oranges comparison. Knuth custom-made his functions, if I’m understanding the article. He did it in 10 pages of neat Pascal to ensure it’s all done correctly. The alternative was 6 lines of shell that were other programs. Their source is used in the solution but not counted. Problems in his source were counted, but problems in those dependencies’ source over time weren’t mentioned at all. Either the UNIX utilities were flawless, neat code or their flaws were ignored. An apples-to-apples comparison would be the total source, style, correctness, and effort of the full program (script plus source of dependencies) vs Knuth’s full program. Knuth might look better then.

                                                                                                1. 14

                                                                                                  I’ll disagree with you here–the shell was linking together programs (subroutines) the same as you would in a language–say, Pascal–with standard library routines and components. Saying that the shell script doesn’t count because it used other programs feels a little bit like saying the Pascal program doesn’t count because the developer isn’t manually pushing around stack frames and setting link registers.

                                                                                                  Anyways, your concern and my reply above is exactly what I’m talking about: we only have a handful of stories like the above, and they don’t even provide clear guidance for practices. Some questions:

                                                                                                  • Are we to conclude that McIlroy was correct for using smaller programs that functioned about as quickly and could be maintained by others?
                                                                                                  • Are we to conclude that Kunth’s rigor is better, even if his program took much longer to develop?
                                                                                                    • Does the fact that Knuth is clearly the superior computer scientist also give him superior engineering status?
                                                                                                  • Should all of us have the familiarity with basic tools that McIlroy has so we don’t have to reinvent bespoke wheels as Knuth did?

                                                                                                  All of those are valid questions and interpretations of the source story–which is why I classified it as “weird”.

                                                                                                  The story underscores our own profession’s lack of understanding.

                                                                                                  1. 3

                                                                                                      “Saying that the shell script doesn’t count because it used other programs feels a little bit like saying the Pascal program doesn’t count because the developer isn’t manually pushing around stack frames and setting link registers.”

                                                                                                      I think that takes it too far. Most developers don’t expect each other to write compilers or assembly. Experts usually do that in a way the average developer can use. If writing something custom and aiming for efficiency, they use the standard, low-level language of the platform. That’s C for UNIX. Pascal is a C alternative. So, I’d have compared the C implementation of the shell commands plus the shell script to the Pascal implementation. That’s pretty fair.

                                                                                                      “All of those are valid questions and interpretations of the source story–which is why I classified it as “weird”. The story underscores our own profession’s lack of understanding.”

                                                                                                    I agree with that and your questions being good examples of it.

                                                                                                    1. 2

                                                                                                      I think these are very interesting questions. I’d note that indeed you would be comparing apples to oranges, given that McIlroy’s shell commands are one abstraction level higher; indeed the shell is linking programs together like subroutines, but those subroutines/programs are themselves written using the standard library as well. If Knuth had used a Pascal helper library which included subroutines equivalent to each of the existing shell programs, that program would not have been (much) longer.

                                                                                                      So yes, most often it is the right approach to re-use existing libraries and components (otherwise, we’d all still be hand-crafting machine code, basically building skyscrapers out of toothpicks). The best engineers understand the tower of abstraction as far down as possible, which means they know where the fault lines are, so they realize when the standard tools suffice, and when it is necessary to write your own or dive into the source to make existing tools suitable. It’s a trade off that has to be made, depending on the project and how important the component is to it.

                                                                                                      I think you end up with even deeper and harder to understand abstractions that try to paper over the flaws in lower level tools if you avoid rewrites at all costs, which is (unfortunately) something of a broader problem in our profession. It doesn’t help that all the “good practice” guides tell you to re-use, re-use, re-use existing code and never write your own if something already exists if it’s even remotely similar to what you need. Of course, market pressure to perform and whip up stuff as quickly as you can in as low a budget and as little time as possible doesn’t help. I’m definitely guilty myself of using square pegs to fit round holes just to save time and costs, and I’m sure the vast majority of us are. On the other hand, without that pressure there would be so many examples of modern technology that we take for granted which would not exist.

                                                                                                      I guess this is why programming in practice is more of an art or craft than a science. It’s also what makes it challenging, and we’ll probably be arguing about all of this decades from now :)

                                                                                                    2. 2

                                                                                                      The counterexample to the oversimplified “what people remember” is also in Programming Pearls.

                                                                                                      Here’s the story in a nutshell. With a good idea, some powerful tools, and a free afternoon, Steve Johnson built a useful spelling checker in six lines of code. Assured that the project was worthy of substantial effort, a few years later Doug McIlroy spent several months engineering a great program.

                                                                                                      [http://dl.acm.org/ft_gateway.cfm?id=315102&type=pdf]

                                                                                                      1. 2

                                                                                                        That’s really only relevant from a security/correctness perspective.

                                                                                                        The important part in most environments is that the shell pipeline takes five minutes to write and another ten to debug, and most future modifications are going to take a similar amount of effort.

                                                                                                        Doing it Knuth’s way would take me all day, and I have plenty of other work to do.

                                                                                                        1. 1

                                                                                                          That’s true. It’s better to use the quick-and-dirty route if incorrect results are OK.

                                                                                                          1. 1

                                                                                                            Well, in this case it was Knuth’s program that had bugs. So, the slow-and-dirty route.

                                                                                                            1. 1

                                                                                                              The UNIX utilities have had plenty of bugs, and the ones in that pipeline may or may not have had some, too. The fair comparison would be the bugs and their severity in those tools up to that point versus Knuth’s clean-slate version in Pascal.

                                                                                                      2. 4

                                                                                                        and that just leads to madness.

                                                                                                        Or disenchantment. That’s where I am.

                                                                                                        1. 2

                                                                                                          I knew about the Knuth/McIlroy discussion but I never read any of McIlroy’s actual comments from it. They’re really good. Kudos for linking to that article.

                                                                                                          Makes me want to find a full copy of the journal in my university’s library.

                                                                                                      1. 1

                                                                                                        I remember looking into Shepherd a while ago when I was considering trying GuixSD. What turned me off was the fact that, IIUC, I was simply expected to know Guile in order to configure the thing. Is that seriously right?

                                                                                                        I have absolutely no problem with Guile generally speaking, but learning a new language - especially learning it well - is very non-trivial. I’ve been using Emacs for 5 years now with extremely mediocre ELisp skills, and that’s totally fine by me because it’s just a regular app. Would I like to be better at it? Sure. But it’s just not a priority. On the other hand, I would be deeply uncomfortable running PID 1 with the same blasé attitude.

                                                                                                        1. 3

                                                                                                              This looks really awesome, although it seems like most of the benefits over a regular Nitrokey only exist if you have a Purism laptop? Which is a bummer. I used a Purism laptop briefly and really liked it (mostly), but my screen broke and they were unable to replace it for 6 months, so eventually I ordered a System76.

                                                                                                          Nitrokey is really excellent though.

                                                                                                          1. 5

                                                                                                                I guess it would also work for any coreboot-able laptop (e.g. Thinkpad X220/230) that you install Heads on.

                                                                                                          1. 10

                                                                                                                  Why do people think MS is doing all this? Do people really think a company worth 860 billion dollars has anything to give away for free? I do not want to go into MS bashing, but believing that a big company like MS is now altruistic and believes in making the world a better place is just naive. MS wants to be seen as cool and hip with the dev crowd, esp. the young Silicon Valley crowd, so that they can sell more Azure. They do not care about software freedom or anything like that.

                                                                                                            1. 12

                                                                                                              Goals can align. Microsoft might care about software freedom because that improves their business in some way. In this case, their goal is obviously to collect metrics about users. Almost all of the code is open though.

                                                                                                              1. 3

                                                                                                                      I don’t think that’s an obvious goal at all - metrics about users. A perfectly acceptable goal is to regain mindshare among developers. vscode can be seen as a gateway drug to other Microsoft services, improving their reputation.

                                                                                                                1. 2

                                                                                                                  I wonder what metrics from a text editor would be useful to them?

                                                                                                                  1. 10

                                                                                                                    I want metrics from the compilers I work on. It’d be super useful to know what language extensions people have enabled, errors people hit, what they do to fix them, etc. Sounds mundane at first, but it’d allow me to focus on what needs work.

                                                                                                                    1. 8

                                                                                                                      Well, VS Code doesn’t choose your compilers :)

                                                                                                                            Either way, I don’t get the paranoia. Performance telemetry, automated crash reports, stats about used configurations – that’s not stuff that violates privacy in any meaningful way. It’s weird that this gets lumped together, in the general paranoia storm, with advertisers building a profile of you to sell more crap.

                                                                                                                      1. 8

                                                                                                                        Issue #49161 VSCode sends search keystrokes to Microsoft even with telemetry disabled

                                                                                                                        It’s not even paranoia so much as irritation at this point. I know my digital life is leaking like a sieve, and I’d like to plug the holes.

                                                                                                                        1. 3

                                                                                                                          Kinda clickbait issue title. Yeah, keystrokes are always a lot more worrying than metrics, but this is settings search. I guess you could Ctrl+F search for something secret (e.g. a password) in a text file, but not in the settings.

                                                                                                                          1. 12

                                                                                                                            You know, there was a time when it was big news if a commercial program was caught to “phone home” at all. It didn’t matter what the content was.

                                                                                                                            (Today, you’d call a ‘commercial program’ a ‘proprietary application’.)

                                                                                                                                  It’s still a big deal today if an open source/community-maintained/free software application ‘phones home’, for several reasons: untrusted individuals, the value of big data, and principles of privacy.

                                                                                                                            Now that M$ is in the game, let’s add ‘untrusted corporation’ to that last list.

                                                                                                                            I don’t care what the nature of the data is–I don’t want to be observed. Especially not as I ply my craft–few activities produce measurable signals from any deeper inside myself, and every one of those is definitely on my personal ‘no, you can’t watch!’ list.

                                                                                                                            1. 1

                                                                                                                              For me personally, I have no problem adding telemetry to apps I maintain. But I’m sure going to make sure users know about it and can disable it if they want. I think that’s the real issue - consent.
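
                                                                                                                                    As a minimal sketch of what that consent gate can look like (not how VS Code or any particular editor actually does it; the setting name and event name below are made up), telemetry stays off unless the user explicitly turns it on:

                                                                                                                                        #include <stdio.h>
                                                                                                                                        #include <stdlib.h>
                                                                                                                                        #include <string.h>

                                                                                                                                        /* Opt-in check: default is off, and nothing leaves the machine
                                                                                                                                         * without it. MYAPP_TELEMETRY is a hypothetical setting. */
                                                                                                                                        static int telemetry_enabled(void)
                                                                                                                                        {
                                                                                                                                            const char *v = getenv("MYAPP_TELEMETRY");
                                                                                                                                            return v != NULL && strcmp(v, "1") == 0;
                                                                                                                                        }

                                                                                                                                        static void report_event(const char *name)
                                                                                                                                        {
                                                                                                                                            if (!telemetry_enabled())
                                                                                                                                                return;                 /* no consent, no upload */
                                                                                                                                            /* A real implementation would queue and send the event here;
                                                                                                                                             * printing stands in for that. */
                                                                                                                                            printf("would send event: %s\n", name);
                                                                                                                                        }

                                                                                                                                        int main(void)
                                                                                                                                        {
                                                                                                                                            report_event("settings.search");   /* example event name, made up */
                                                                                                                                            return 0;
                                                                                                                                        }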

                                                                                                                            2. 5

                                                                                                                              That’s having to think way too hard about what they’re intercepting.

                                                                                                                      2. 4

                                                                                                                        Platform it’s running on, type of code being edited, frequency of use for a given feature. Heuristic data about how people interact with the UI. The list goes on. Note also that none of this need be evil. It could be seen as collecting data looking to improve user experience.

                                                                                                                    2. 3

                                                                                                                      I’d guess they’re after a platform. They want to build a base (using organic growth) that they might later on capitalize on, either by learning from it to invite people to use (proper) Visual Studio or by limiting VSCode’s openness.

                                                                                                                    1. 1

                                                                                                                      I’m a bit confused by the mention of putting parts of the heap in slower memory. Shouldn’t that be the kernel + swap’s job? Or does the kernel only page out entire process address spaces (which would clearly be too coarse for what the article’s discussing)?

                                                                                                                      1. 4

                                                                                                                        Linux on x86 and amd64 can page out individual 4kiB pages, so the granularity of that is fine.

                                                                                                                                  It’s plausible that they might be able to get much better behaviour by doing it themselves instead of letting the kernel do it. Two things spring to mind:

                                                                                                                        If they’re managing object presence in user space, they know which objects are in RAM so they can refrain from waking them up when they definitely haven’t changed. Swap is mostly transparent to user processes. You really don’t want to wake up a swapped out object during GC if you can avoid it, but you don’t know which objects are swapped out without calling mincore() for every page, which is not very fast.

                                                                                                                                  The other thing that springs to mind: AFAIK handling page faults is kinda slow, and an x86 running Linux will take something like a large fraction of a microsecond each time a fault occurs. AFAIK the fault mechanism in the CPU itself is quite expensive (it has to flush some pipelines, at least). So doing your paging-in in userspace, with just ordinary instructions that don’t invoke the OS or the slow bits of the CPU, may be a big win.
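
                                                                                                                                  For anyone curious what the mincore() check above looks like, here is a minimal sketch, assuming Linux: it walks one mapping page by page and counts how many pages are resident. Doing this per object, on every collection, is exactly the cost being described.

                                                                                                                                      #define _DEFAULT_SOURCE
                                                                                                                                      #include <stdio.h>
                                                                                                                                      #include <stdlib.h>
                                                                                                                                      #include <string.h>
                                                                                                                                      #include <sys/mman.h>
                                                                                                                                      #include <unistd.h>

                                                                                                                                      int main(void)
                                                                                                                                      {
                                                                                                                                          long page = sysconf(_SC_PAGESIZE);
                                                                                                                                          size_t len = 64 * 1024 * 1024;            /* stand-in for a heap region */
                                                                                                                                          size_t npages = (len + page - 1) / page;

                                                                                                                                          /* mmap gives page-aligned memory, which mincore() requires. */
                                                                                                                                          unsigned char *heap = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                                                                                                                                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                                                                                                                                          if (heap == MAP_FAILED) { perror("mmap"); return 1; }

                                                                                                                                          memset(heap, 0xAA, len / 2);              /* touch only half of it */

                                                                                                                                          unsigned char *vec = malloc(npages);
                                                                                                                                          if (!vec) { perror("malloc"); return 1; }

                                                                                                                                          if (mincore(heap, len, vec) != 0) { perror("mincore"); return 1; }

                                                                                                                                          size_t resident = 0;
                                                                                                                                          for (size_t i = 0; i < npages; i++)
                                                                                                                                              if (vec[i] & 1)                       /* low bit: page is in RAM */
                                                                                                                                                  resident++;

                                                                                                                                          printf("%zu of %zu pages resident\n", resident, npages);
                                                                                                                                          free(vec);
                                                                                                                                          munmap(heap, len);
                                                                                                                                          return 0;
                                                                                                                                      }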

                                                                                                                      1. 33

                                                                                                                                    I don’t really think that you should be allowed to ask the users to sign a new EULA for security patches. You fucked up. People are being damaged by your fuck up, and you should not use that as leverage to make the users do what you want so they can stop your fuck up from damaging them further.

                                                                                                                        Patches only count if they come with the same EULA as the original hardware/software/product.

                                                                                                                        1. 9

                                                                                                                          Sure - you’re welcome to refuse the EULA and take your processor back to the retailer, claiming it is faulty. When they refuse, file a claim in court.

                                                                                                                          Freedom!

                                                                                                                          1. 6

                                                                                                                            This suggestion reminds me of the historical floating point division bug. See https://en.m.wikipedia.org/wiki/Pentium_FDIV_bug

                                                                                                                                        There was a debate about the mishandling by Intel. There was also debate over the “real-world impact”; estimates were all over the charts.

                                                                                                                            Here, it seems that the impact is SO big, that almost any user of the chip can demonstrate significant performance loss. This might become even bigger than the FDIV bug.

                                                                                                                            1. 4

                                                                                                                              They are being sued by over 30 groups (find “Litigation related to Security Vulnerabilities”). It already is.

                                                                                                                              As of February 15, 2018, 30 customer class action lawsuits and two securities class action lawsuits have been filed. The customer class action plaintiffs, who purport to represent various classes of end users of our products, generally claim to have been harmed by Intel’s actions and/or omissions in connection with the security vulnerabilities and assert a variety of common law and statutory claims seeking monetary damages and equitable relief. The securities class action plaintiffs, who purport to represent classes of acquirers of Intel stock between July 27, 2017 and January 4, 2018, generally allege that Intel and certain officers violated securities laws by making statements about Intel’s products and internal controls that were revealed to be false or misleading by the disclosure of the security vulnerabilities […]

                                                                                                                              As for replacing defective processors, I’d be shocked. They can handwave enough away with their microcode updates because the source is not publicly auditable.

                                                                                                                              1. 1

                                                                                                                                The defense could try to get the people who are discovering these vulnerabilities in on the process to review the fixes. They’d probably have to do it under some kind of NDA which itself might be negotiable given a court is involved. Otherwise, someone who is not actively doing CPU breaks but did before can look at it. If it’s crap, they can say so citing independent evidence of why. If it’s not, they can say that, too. Best case is they even have an exploit for it to go with their claim.

                                                                                                                          2. 4

                                                                                                                                      I don’t really think that you should be allowed to ask the users to sign a new EULA for security patches.

                                                                                                                            A variation of this argument goes that security issues should be backported or patched without also including new features. It is not a new or resolved issue.

                                                                                                                            Patches only count if they come with the same EULA as the original hardware/software/product.

                                                                                                                                      What is different here is that this microcode update also requires operating system patches and possibly firmware updates. Further, not everyone considers the performance trade-off worth it: there is a class of users for whom this is not a security issue. Aggravating matters, there are OEMs that must be involved in order to patch, or explicitly decline to patch, this issue. Intel had to coordinate all of this, under embargo.

                                                                                                                            1. 2

                                                                                                                              This reminds me of HP issuing a “security” update for printers that actually caused the printer to reject any third-party ink. Disgusting.

                                                                                                                              1. 2

                                                                                                                                I had not considered the case where manufacturers and end-users have different and divergent security needs.

                                                                                                                                1. 2

                                                                                                                                  It’s worth thinking on more broadly since it’s the second-largest driver of insecurity. Demand being the first.

                                                                                                                                            The easiest example is mobile phones. The revenue stream almost entirely comes from sales of new phones. So, they want to put their value proposition and efforts into the newest phones. They also want to keep costs as low as they can legally get away with. Securing older phones, even patching them, is an extra expense, or at least an activity that doesn’t drive new phone sales. It might even slow them. So, they stop doing security updates on phones fairly quickly, as extra incentive for people to buy new phones, which helps CEOs hit their sales targets.

                                                                                                                                            The earliest form I know of was software companies intentionally making broken software when they could spend a little more to make it better. Although I thought CTOs were being suckers, Roger Schell (co-founder of INFOSEC) found out otherwise when meeting a diverse array of them through the Black Forest Group. When he evangelized high-assurance systems, the CTOs told him they believed they’d never be able to buy them from the private sector even though they were interested in them. They elaborated that they believed computer manufacturers and software suppliers were intentionally keeping quality low to force them to buy support and future product releases. Put/leave bugs in on purpose now, get paid again later to take them out, and force new features in for lock-in.

                                                                                                                                            They hit the nail on the head, the biggest examples being IBM, Microsoft, and Oracle. Companies are keeping defects in products in every unregulated sub-field of IT to this day. It should be the default assumption, with the default mitigation being open APIs and data formats so one can switch vendors after encountering a malicious one.

                                                                                                                                            EDIT: Come to think of it, the hosting industry does the same stuff. The sites, VPSes, and dedi’s cost money to operate in a highly-competitive space. Assuming they aren’t loss-leaders, I bet profitability on the $5-10 VMs might get down to nickels or quarters rather than dollars. There have been products on the market touting strong security, like LynxSecure with Linux VMs. The last time I saw pricing for separation kernels w/ networking and filesystems, it was maybe $50,000. Some supplier might take that a year per organization just to get more business. They all heavily promote the stuff. Yet, almost all hosts use KVM or Xen. Aside from features, I bet the fact that they’re free, with commoditized support and training, factors into that a lot. Every dollar in initial profit you make on your VMs or servers can further feed into the business’s growth or workers’ pay. Most hosts won’t pay even a few grand for a VMM with open solutions available, much less $50,000. They’ll also trade features against security, like management advantages and the ecosystem of popular solutions. I’m not saying any of these are bad choices given how the demand side works; just that the business model incentivizes against the security-focused solutions that currently exist.

                                                                                                                            2. 1

                                                                                                                                      I think you have to be presented with the EULA before purchase for it to be valid anyway.