1. 11

    This is shameful:

    To Kaminska’s point, in April it was announced that a once-shuttered coal power plant in Australia would reopen to provide electricity to a cryptocurrency miner. And just today, a senator from Montana warned that the closure of a coal power plant “could harm the booming bitcoin mining business in the state.”

    At a small scale, heavy residential electricity users in certain U.S. locations where marijuana remains illegal are sometimes investigated in case they are running a growing operation. I wonder whether this idea of scrutinizing grid usage could be applied to crypto miners at a large scale, or whether they are simply too big, coordinated, and powerful to be regulated through anything but national-scale action.

    1. 8

      Mining Bitcoin or other crypto is entirely legal. So it’s just a question of the miners signing a commercial power deal with whoever sells electricity. There’s no need for miners to use subterfuge like illegal growers.

      1. 10

        If anything, people who are illegally growing marijuana might want to disguise their suspicious power usage by pretending to be mining cryptocurrencies!

        1. 5

          There could be zoning restrictions, though I would guess you’d build the mine in a commercial area anyway.

        2. 8

          This is shameful:

          What is the problem?

          We already expend huge amounts of electricity on distributing cat videos and movies of men in capes flying around blowing stuff up. How is mining bitcoin any less ‘productive’ than beaming photons into people’s eyeballs?

          We already have huge established industries involving people betting on whether or not something will happen: sports betting, futures markets, roulette, etc. If you want to save on some carbon emissions, then turn off your computer and surrender your car to the nearest recycling plant. But you won’t, because you think those things are ‘worthwhile’ because you like them.

          Maybe bitcoin will be useless technically, maybe it won’t. This is just a decentralised R&D program and a gambling pool rolled into one.

          The problem isn’t bitcoin. The problem is clean energy scarcity.

          1. 5

            “This is just a decentralised R&D program and a gambling pool rolled into one.”

            Best, concise description of it I’ve ever seen. ;)

          2. 2

            There’s a pretty good study on the electricity/carbon burden of marijuana manufacturing in California.

            https://sites.google.com/site/millsenergyassociates/topics/energy-efficiency/energy-up-in-smoke

            1. 2

              It seems to me that electricity is hilariously underpriced, if the best usage anyone can think of for it is a sad desperate attempt to circumvent Chinese capital controls.

              1. 7

                Or… bitcoin is hilariously overpriced if it’s worth the electricity to make it?

            1. 4

              I’m guessing that Google kept most of those tools they built for making JS usable private, so they never ended up becoming what we all use today; instead, others have reinvented all of it.

              1. 2

                Closure Compiler is totally an alternate-history TypeScript. It’s a pleasure to use if you get it set up right.
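
                For anyone who hasn’t seen it: Closure’s types live in JSDoc comments that the compiler checks at build time, much like TypeScript annotations. A minimal sketch (the function is made up for illustration):

                    /**
                     * Closure Compiler type-checks these JSDoc annotations at build time.
                     * @param {string} name
                     * @param {number=} count  Optional, roughly `count?: number` in TypeScript.
                     * @return {!Array<string>}  A non-nullable array, in Closure's notation.
                     */
                    function repeatName(name, count) {
                      return new Array(count || 1).fill(name);
                    }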

                1. 2

                  They published a lot of them but at the time they were working with much bigger and weirder codebases than the rest of the industry.

                  1. 0

                    I wish they had done the same with MapReduce, Dart and Go …

                    1. 2

                      Why?

                      1. 1

                        Something with more technical merit could have taken their place.

                      2. 2

                        Better to help other tools than hate on ones you dislike.

                        1. 1

                          And that’s what I did in the past. But the thing is, if you write software to have some amount of users, adoption matters, and adoption rarely follows quality.

                          You could have written something 10 times better than what Google published, and everyone would still flock to Google’s software. Technical merit << Google name-drop.

                          1. 1

                            I tend to look at things like that as a competitive edge for myself.

                    1. 18

                      Why is an app needed? The website works perfectly on mobile.

                      1. 8

                        I’d hope that an app would (eventually) be able to support features not available on the website, such as APNS (push notifications) for messages and replies.

                        Edit: Also, the possibility to sync and browse offline.

                          1. 3

                            I’ve been thinking about pulling down the source and looking at adding at least push notifications to the web app. But then life etc.

                            1. 2

                              Good to know, as last I checked this was not available. I do see it is noted in the Push section that “The technology is still at a very early stage” — and I’ve not seen anyone try using it yet.

                              1. 2

                                I’ve not seen anyone try using it yet

                                Really? Every damn website these days asks for push notifications permission! Even random news websites and blogs that really shouldn’t do that.

                                1. 1

                                  Those aren’t the same notifications, I believe; they’re very different from what we are discussing - they don’t provide push notifications outside of the browser, like APNS does.

                                  Edit: Yes, they call them “Push services” vs. “Notifications”. Two separate things. When I speak of notifications I mean the APNS “Push” type notifications.

                                  1. 1

                                    Depends on how the browser implements them — mobile browsers do use APNS/GCM to deliver web push notifications. Desktop Safari and Edge probably do that kind of thing too. With desktop Firefox, sure, you need the browser to be running.
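
                                    For the curious, a minimal sketch of how a page hooks into that delivery path via the Push API; the service worker path and VAPID key are hypothetical placeholders, not Lobsters code:

                                        // Page script: register a service worker and subscribe to push.
                                        async function enablePush() {
                                          const registration = await navigator.serviceWorker.register('/sw.js');
                                          const subscription = await registration.pushManager.subscribe({
                                            userVisibleOnly: true,
                                            applicationServerKey: VAPID_PUBLIC_KEY, // hypothetical key
                                          });
                                          // The server sends to subscription.endpoint, which belongs to the
                                          // browser vendor's push service; that service relays to the device.
                                          return subscription;
                                        }

                                        // In sw.js: surface the push as a native notification.
                                        self.addEventListener('push', (event) => {
                                          event.waitUntil(
                                            self.registration.showNotification('New reply', {
                                              body: event.data ? event.data.text() : '',
                                            })
                                          );
                                        });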

                              2. 2

                                Or, currently, Pushover

                                1. 1

                                  I’ve been using Prowl for many, many years, and while I’ve thought of changing, I just haven’t found the need yet, and I’ve built way too much with Prowl.

                                  Also, there are other competing services - Pushbullet, Telegram bots, etc.

                                  Having a native app that integrates with your native notification system is convenient, especially for mobile.

                                  1. 1

                                    I mean — Lobsters supports Pushover specifically.

                            2. 7

                              Speed, less memory, security, better notifications, possibly better search, user-specific plugins, user-specific UIs, parallelizing any of that on multicore/NUMA/clusters, and so on. The usual reasons to replace a web interface with a native one.

                              I’ll go ahead and mention a UI problem I’ve had on Lobsters periodically: I couldn’t tell if a comment was actually being submitted or the site was doing nothing. There was no visual feedback; the screen just sat there for quite a while. When it was slow, that resulted in duplicates I had to remove. I’d rather have an instant change in the UI, even a small one, that tells me it’s actually sending the comment. Then it will either show the page or a failure. Also, I’m not sure if this still happens or someone has changed the code, since I haven’t seen it in a while; I think alynpost’s hardware upgrade and caching knocked out the lag that was causing it. The point is that a native app might allow such a UI change.
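
                              Even on the web, something as small as this sketch would provide that feedback; the form selector is hypothetical, not Lobsters’ actual markup:

                                  // Disable the button and show progress the moment the user submits,
                                  // so a slow server can't cause invisible double-posts.
                                  const form = document.querySelector('#comment-form'); // hypothetical id
                                  form.addEventListener('submit', async (event) => {
                                    event.preventDefault();
                                    const button = form.querySelector('button[type=submit]');
                                    button.disabled = true;          // blocks the duplicate-submit case
                                    button.textContent = 'Posting…'; // instant visual feedback
                                    try {
                                      const res = await fetch(form.action, { method: 'POST', body: new FormData(form) });
                                      if (!res.ok) throw new Error(`HTTP ${res.status}`);
                                      window.location.reload();
                                    } catch (err) {
                                      button.disabled = false;       // let the user retry on failure
                                      button.textContent = 'Post';
                                    }
                                  });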

                              1. 3

                                I couldn’t tell if a comment was actually being submitted or the site was doing nothing. There was no visual feedback; the screen just sat there for quite a while. When it was slow, that resulted in duplicates I had to remove.

                                I have noticed a duplicate comment from you at least once; thank you for reporting what that is like on your end.

                                One cause of site lag or slowness is the OOMkiller grabbing the Ruby/Unicorn worker that was servicing your request. This is not a normal operation: we add memory, reduce the queue size, or right-size the application when this starts happening. That said, we’re sitting at 7GB memory in-use, and when I checked, based on your comment here, the OOMkiller did take out a worker in the past ~24 hours.

                                This issue aside, your comment about UX feedback is solid. It’s not always the OOMkiller. If any of you have suggestions on collecting and summarizing timing data for requests in Ruby, or on intra-process performance metrics (like collectd), it’s plausibly time to get better data here: the last memory upgrade was less than two weeks ago.
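
                                Not Ruby, but the general shape of per-request timing is tiny; sketched here with a bare Node server, and the idea maps directly onto a Rack middleware:

                                    // Log method, path, status, and wall time for every request.
                                    // In practice you'd feed this into a histogram (statsd/collectd) instead.
                                    const http = require('http');

                                    http.createServer((req, res) => {
                                      const start = process.hrtime.bigint();
                                      res.on('finish', () => {
                                        const ms = Number(process.hrtime.bigint() - start) / 1e6;
                                        console.log(`${req.method} ${req.url} -> ${res.statusCode} in ${ms.toFixed(1)}ms`);
                                      });
                                      res.end('ok\n');
                                    }).listen(3000);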

                                1. 3

                                  That’s interesting. Thanks. The OOMkiller grabbing workers sounds like a way to get DoSes or heisenbugs on incoming requests. Maybe heisenbugs over time, too, on stateful systems. Just noticing the bug let me deal with it, though. So, I post. Then, I wait a few seconds, use another tab for other content, or something. I check on it in 30s-1m. That keeps me from doing doubles. The last few were when I was on mobile, in a hurry, in a weak-signal environment.

                                  Again, a native app could improve that use case, especially if combined with a custom, efficient relay at home. The app delivers the comment to the relay. Then I know it’s been sent to something that will attempt delivery, check within the wait period, repost if necessary, detect any duplicates, and delete them. Maybe it has my login credentials but my phone doesn’t. Various possibilities. I don’t know if it’s worth the time to devise such apps; I’ll probably just delete the duplicates. The relay for avoiding weak-signal issues just popped into my head as a possibility enabled by a custom client that’s all or partly native.

                                  1. 3

                                    As an outsider to the Ruby world, I’m curious why you choose to use Unicorn. IIUC, Unicorn only runs one request at a time in each worker. That seems to me like it would waste a lot of memory. Is real-world Rails still not ready for multi-threaded servers? I know they exist, e.g. Puma.

                                    1. 4

                                      The decision to use Unicorn was made before my time. I’m happy to revisit it with anyone who’d find that an interesting problem.

                                      1. 2

                                        The workers are all forks, so the memory overhead is minimal thanks to copy-on-write.

                                        Unicorn is also able to use shared sockets to let the kernel map requests to workers without an extra queueing layer.
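
                                        The same pre-fork, shared-socket pattern can be sketched with Node’s cluster module, for anyone who wants to see it outside Ruby:

                                            // Workers are forks: untouched memory stays shared via copy-on-write.
                                            const cluster = require('cluster');
                                            const http = require('http');
                                            const os = require('os');

                                            if (cluster.isPrimary) { // isMaster on older Node
                                              // SCHED_NONE lets workers accept on the shared socket themselves,
                                              // with the kernel handing out connections (Unicorn-style).
                                              cluster.schedulingPolicy = cluster.SCHED_NONE;
                                              for (let i = 0; i < os.cpus().length; i++) cluster.fork();
                                            } else {
                                              http.createServer((req, res) => {
                                                res.end(`handled by worker ${process.pid}\n`);
                                              }).listen(3000); // every worker shares this listening socket
                                            }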

                                  2. 4

                                    I’ve personally always struggled with Lobste.rs on mobile. On my iPhone in portrait mode, I’ve never been able to long press the comment count on the right side, in order to pop up the menu that allows me to open up the comments in a new tab. Lobste.rs seems to ignore my long press. I can, of course, just tap it, but then I lose my place on the main page.

                                    As a result, I always have to use Lobste.rs in landscape mode. So I wouldn’t say the website works perfectly on mobile…

                                    1. 3

                                      Last month an iOS user reported they had difficulty selecting the comment link at all. We confirmed the problem and got it fixed.

                                      Would you mind if I transcribed your comment here into a ticket? If you haven’t tried in the last month, it’s worth seeing if the above patch was sufficient. Otherwise we’ll confirm it and see what we can do.

                                      1. 4

                                        Thanks for your reply! I was able to confirm it still seems to be an issue. Long press does nothing until you release the long press; at that point, the Mobile Safari menu finally pops up, but the web page navigates into the comments (before I’ve selected how I want it to open).

                                        I created a ticket here:

                                        https://github.com/lobsters/lobsters/issues/540

                                        1. 3

                                          Interesting. Seems to work fine on Android. I wonder what the difference could be?

                                          1. 2

                                            Any browser on iOS uses the Safari engine, anything on Android does not.

                                            1. 2

                                              Yes. I was wondering why it would only show up in WebKit.

                                    2. 3

                                      A “native” app can be more responsive than a website, so I’m definitely going to check out the app.

                                      1. 1

                                        I know that has definitely been the case; animations especially can be choppy in browsers. There is another post on the front page right now showing that Mozilla’s Servo can now render things a whole lot faster without skipping frames or lag.

                                        It will be good to test things out and see what the state of animations on mobile is now, but the Lobsters website is pretty basic and fully responsive.

                                      2. 3

                                        Although progressive web apps can and do work very well, the effort required to make a good one is significantly higher than it is to make an app. Even then, it won’t feel anywhere near native (performance-wise) because the amount of JavaScript needed to make it happen will make the app slow down.

                                        Also, the app has a dark theme.

                                        1. 1

                                          I can’t use Lobsters at work because of the rs TLD. I actually wish someone would just give it another URL so I could hit it

                                          1. 3

                                            Do you have a server or a little board at home? You could set it up to proxy the site using an IP address instead of a name. It just relays packets from work to home to Lobsters and back.
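
                                            A minimal sketch of such a relay, assuming Node on the home box; the port is arbitrary and the headers pass through untouched:

                                                // Listen on the home box and forward every request to lobste.rs.
                                                const http = require('http');
                                                const https = require('https');

                                                const UPSTREAM = 'lobste.rs';

                                                http.createServer((clientReq, clientRes) => {
                                                  const upstreamReq = https.request({
                                                    hostname: UPSTREAM,
                                                    port: 443,
                                                    path: clientReq.url,
                                                    method: clientReq.method,
                                                    headers: { ...clientReq.headers, host: UPSTREAM },
                                                  }, (upstreamRes) => {
                                                    clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
                                                    upstreamRes.pipe(clientRes);
                                                  });
                                                  clientReq.pipe(upstreamReq);
                                                }).listen(8080); // then browse to http://<home-ip>:8080 from work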

                                            1. 2

                                              I have had a similar issue with config/color scheme generator websites being on .sexy domains. Just an example of how TLD level blocking is ridiculous.

                                              1. 1

                                                You could always use the gopher mirror, unless the protocol is blocked.

                                                1. 1

                                                  Do you know what product is being used to block the .rs ccTLD? Are you able to describe technically how the blocking is being accomplished?

                                                  EDIT: When you’re next logged in at work, I’d appreciate it if you could get a screenshot or error message of the site being blocked and email it to me.

                                                  1. 4

                                                    This was discussed on that other link aggregation site earlier. Blue Coat was mentioned in that thread, and that works by stripping SSL locally before sending it onto the internet. Basically, that should be impossible to get around.

                                                    Other web filters work by redirecting DNS to a block page or, if a custom DNS server is set, doing a reverse DNS lookup on the server IP.

                                                    1. 3

                                                      I don’t know, but I’ll check Tuesday if I remember. I work at Capital One fwiw

                                                  2. 1

                                                    I’m not a big mobile or app user, so I’m not directly answering your question, but one exciting thing about a lobste.rs app is that it exercises the API and possibly helps fix bugs in it or develop it further.

                                                  1. 4

                                                    Cool, they shipped with my Go port for SPARC. Unfortunately we didn’t have time to update it to the latest Go version and merge it upstream, so now the port is in limbo (more details of what happened for whoever is interested).

                                                    By the way, Solaris is not illumos.

                                                    1. 3

                                                      Yeah why the #illumos?

                                                      1. 0

                                                        because for some reason, the solaris/sunos tag was added as illumos

                                                    1. 8

                                                      Turn off JS then? Isn’t this what a modern browser is by definition? A tool that executes arbitrary code from URLs I throw at it?

                                                      1. 7

                                                        I am one of those developers who surfs the web with “javascript.options.wasm = false” and NoScript, blocking just about 99% of all websites from running any JavaScript on my home machine unless I explicitly turn it on. I’ve also worked on various networks where JavaScript is just plain turned off and can’t be turned on by regular users. I’ve heard some, sadly confidential, war stories that have led to these policies. They are similar in nature to what the author states in his Medium post.

                                                        If you want to run something, run it on your servers and get off my laptop, phone, TV, or even production machines. Those are mine, and if your website can’t handle that, then your website is simply terrible from a user-experience viewpoint, dreadfully inefficient, and doomed to come back to haunt you when you are already in a bind because of an entirely different customer or issue. As a consequence of this way of thinking, a few web-driven systems I wrote more than a decade ago are still live and going strong, without a single security incident and without any performance issues, while at the same time reaping the benefits of the better hardware they’ve been migrated to over the years.

                                                        Therefore it is still my firm belief that a browser is primarily a tool to display content from random URLs I throw at it and not an application platform which executes code from the URLs thrown at it.

                                                        1. 3

                                                          That’s a fine and valid viewpoint to have, and you are more than welcome to disable JS. But as a person who wants to use the web as an application platform, are you suggesting that browsers should neglect people like myself? I don’t really understand what your complaint is.

                                                          1. 2

                                                            But as a person who wants to use the web as an application platform, are you suggesting that browsers should neglect people like myself?

                                                            I don’t think so. But using Web Applications should be opt-in, not opt-out.

                                                            1. 3

                                                              Exactly.

                                                              There are just too many issues with JavaScript-based web applications. For example: performance (technical and non-technical). Accessibility (blind people perceive your site through a 1x40 or 2x80 Braille-character-display matrix, so essentially 1/2 or 2 lines on a terminal). Usability (see Gmail’s pop-out feature, which is missing from by far most modern web applications and which you get almost for free if you just see the web as a fancy document-delivery/viewing system). Our social status as developers as perceived by the masses: they think that everything is broken, slow, and unstable, not because they can make a logical argument, but because they “feel” (in multiple ways) that it is so. And many more.

                                                              However, the author’s focus is on security. I totally get where the author is coming from with his “The web is still a weapon” posts. If I take off my developer goggles and look through a user’s eyes, it sure feels like it is all designed to be used as one. He can definitely state his case in a better way, although I think that showing that you can interact with an intranet through third-party JavaScript makes the underlying problems, and therefore the message too, very clear.

                                                              It also aligns with the CIA’s Timeless tips for sabotage which you can read on that link.

                                                              We should think about this very carefully, despite the emotionally inflammatory speech which often accompanies these types of discussions.

                                                              1. 1

                                                                He can definitely state his case in a better way

                                                                I sincerely welcome suggestions.

                                                          2. 1

                                                            by the same stretch of logic you could claim any limited subset of functionality is the only thing computers should do in the name of varying forms of “security.”

                                                            perhaps something like: “The computer is a tool for doing computation not displaying things to me and potentially warping my view of reality with incorrect information or emotionally inflammatory speech. This is why I have removed any form of internet connectivity.”

                                                          3. 7

                                                            This is not a bug and it’s not RCE. JavaScript and headers are red herrings here. If you request some URL from a server, you’re going to receive what that server chooses to send you, with or without a browser. There’s a risk in that to be sure, but it’s true by design.

                                                            1. 3

                                                              Turn off your network and you should eliminate the threat. Turn your computer off completely for a safer mitigation.

                                                            1. 7

                                                              The points are good, but I certainly don’t want inotify features to be gating the VFS layer. IMO inotify is good at what it does. If you want to know about absolutely everything going on for a given filesystem, maybe you want to implement the filesystem itself (fuse, e.g.).
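
                                                              For a feel of the trade-off, this is inotify as most userspace sees it, via Node’s fs.watch (inotify-backed on Linux); note the API is explicitly best-effort:

                                                                  // Events may be coalesced or lack a filename; that looseness is
                                                                  // the price of not gating the VFS layer, as discussed above.
                                                                  const fs = require('fs');

                                                                  const watcher = fs.watch('/tmp', (eventType, filename) => {
                                                                    console.log(eventType, filename); // 'rename' or 'change'
                                                                  });
                                                                  // watcher.close() when done.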

                                                              1. 11

                                                                IIRC (and I was involved in higher-level filesystem libraries when this stuff was going into the kernel - but that was a long time ago), dnotify and inotify were designed with the constraint that they couldn’t impose a significant performance penalty, the logic being that the fs operations were more important than the change notification. If watching changes is as important as or more important than I/O performance, another mechanism like a FUSE proxy fs or strace/ptrace makes sense.

                                                                1. 3

                                                                  fuse is how tup keeps track of dependencies, although I think it will also attempt to use library injection when that’s not available.

                                                                  1. 1

                                                                    That’s awesome. I’ve tried experimenting with ptrace/strace to ensure correct dependency declaration and it’s a real pain to get right.

                                                                    1. 1

                                                                      I have yet to try it out, but I’m definitely using it in my next project.

                                                                2. 2

                                                                  Thing is, FUSE is slower, buggy (I’ve had kernel panics), and less flexible. A native way to track file system operations in a lossless manner would be really nice to have on Linux.

                                                                1. 12

                                                                  Kind of an aside, but I’m pleased by the lack of vitriol in this.

                                                                  1. 13

                                                                    Almost all of Theo’s communications are straightforward and polite. It’s just that people cherry-picked and publicized the few occasions where he really let loose, so he got an undeserved reputation for being vitriolic.

                                                                    1. 2

                                                                      Pleasantly surprised, even.

                                                                    1. 8

                                                                      To be fair, they should also mark as “Not Secure” any page running JavaScript.

                                                                        Also, pointless HTTPS adoption might reduce content accessibility without blocking censorship.
                                                                        (Disclaimer: this does not mean that you shouldn’t adopt HTTPS for sensitive content! It just means that using HTTPS should not be a matter of fashion: there are serious trade-offs to consider.)

                                                                      1. 11

                                                                        By adopting HTTPS you basically ensure that nasty ISPs and CDNs can’t insert garbage into your webpages.

                                                                        1. [Comment removed by author]

                                                                          1. 5

                                                                            Technically, you authorize them (you sign actual paperwork) to get/generate a certificate on your behalf (at least this is my experience with Akamai). You don’t upload your own ssl private key to them.

                                                                            1. 3

                                                                              Why on earth would I give anyone else my private certificate?

                                                                              1. 4

                                                                                Because it’s part of The Process. (Technical Dark Patterns, Opt-In without a clear way to Opt-Out, etc.)

                                                                                Because you’ll be laughed at if you don’t. (Social expectations, “received wisdom”, etc.)

                                                                                Because Do It Now. Do It Now. Do It Now. (Nagging emails. Nagging pings on social media. Nagging.)

                                                                                Lastly, of course, are Terms Of Service, different from the above by at least being above-board.

                                                                            2. 2

                                                                              No.

                                                                                It protects against cheap man-in-the-middle attacks (like the one an ISP could do), but it can do nothing against CDNs that can identify you, as CDNs serve you JavaScript over HTTPS.

                                                                              1. 11

                                                                                With Subresource Integrity (SRI) page authors can protect against CDNed resources changing out from beneath them.
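
                                                                                  The markup is a one-attribute change; the hash below is a placeholder you’d generate from the file you actually serve:

                                                                                      <!-- The browser refuses to execute the script if the fetched
                                                                                           bytes don't hash to the declared value. -->
                                                                                      <script src="https://cdn.example.com/library.min.js"
                                                                                              integrity="sha384-BASE64_HASH_OF_FILE"
                                                                                              crossorigin="anonymous"></script>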

                                                                                1. 1

                                                                                    Yes, SRI mitigates some of the JavaScript attacks that I describe in the article, in particular the nasty ones from CDNs exploiting your trust in a harmless-looking website.
                                                                                    Unfortunately several others remain possible (just think of JSONP, or simpler still, a website that itself colludes in the attack). Also, it needs widespread adoption to become a security feature: it should probably be mandatory, but for sure browsers should mark as “Not Secure” any page downloading programs from CDNs without it.

                                                                                    Where SRI could really help is with the accessibility issues described by Meyer: you can serve most page resources as cacheable HTTP resources if the content hash is declared in an HTTPS page!

                                                                                2. 3

                                                                                    With SRI you can block the CDNs you use to load external JS from manipulating the webpage.

                                                                                    I also don’t buy the claim that it reduces content accessibility; the link you provided above describes a problem that would be solved by simply using an HTTPS caching proxy (something a lot of corporate networks seem to have no problem operating, considering TLS 1.3 explicitly tries not to break those middleboxes).

                                                                                  1. 4

                                                                                    CDNs are man-in-the-middle attacks.

                                                                                3. 1

                                                                                    As much as I respect Meyer, his point is moot. MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. Some companies even made out-of-the-box HTTPS URL filtering their selling point. If people are ready or forced to trade security for accessibility, but don’t know how to set up an HTTPS MitM proxy, it’s their problem, not webmasters’. We should be ready to teach those in need how to set it up, of course, but that’s about it.

                                                                                  1. 0

                                                                                      MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. […] If people are ready or forced to trade security for accessibility, but don’t know how to set up an HTTPS MitM proxy, it’s their problem, not webmasters’.

                                                                                    Well… how can I say that… I don’t think so.

                                                                                      Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                                                                                      Beyond the obvious risk that the proxy is compromised (you should never assume it won’t be), which is pretty high in some places (not only in Africa… don’t be naive, a chain is only as strong as its weakest link), a transparent HTTPS proxy has an obvious UI issue: people do not realise that it’s unsafe.

                                                                                      If browsers don’t mark them as “Not Secure” (how could they?), users will overlook the MitM risks, turning a security feature against the users’ real security and safety.

                                                                                      Is this something webmasters should care about? I think so.

                                                                                    1. 4

                                                                                        Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                                                                                        Not sure how to tell you this, but companies have been doing this on their internal networks for a very long time, and it is basically standard operating procedure at every enterprise-level network I’ve seen. They create their own CA, generate an intermediate CA key and cert, and then put that on an HTTPS MITM transparent proxy that inspects all traffic going in and out of the network. The intermediate cert is added to the certificate store on all devices issued to employees so that it is trusted. By inspecting all of the traffic, they can monitor for external and internal threats, scan for exfiltration of trade secrets and proprietary data, and keep employees from watching porn at work. There is an entire industry around products that do this; BlueCoat and Barracuda are two popular examples.

                                                                                      1. 5

                                                                                        There is an entire industry around products that do this

                                                                                          There is an entire industry around ransomware. But that does not mean it’s a security solution.

                                                                                        1. 1

                                                                                            It is; it’s just that the word security is better understood as asking “who” is being secured (or not) from “whom”.

                                                                                            What you keep saying is that a MitM proxy does not protect the security of end users (that is, employees). What it does, however, in certain contexts like the one described above, is help protect the organisation in which those end users operate. Arguably it does, because it certainly makes it more difficult to protect yourself from something you cannot see. If employees are seen as a potential threat (they are), then reducing their security can help you (the organisation) with yours.

                                                                                          1. 1

                                                                                              I wonder if you actually read the articles I linked…

                                                                                              The point is that, in a context of unreliable connectivity, HTTPS dramatically reduces accessibility but doesn’t help against censorship.

                                                                                              In this context, we need to grant people both accessibility and security.

                                                                                              An obvious solution is to give them cacheable HTTP access to content. We can fool the clients into trusting a MitM caching proxy, but since all we want is caching, this is not the best solution: it adds no security, just a false sense of it. Thus, in that context, you can improve users’ security by removing HTTPS.

                                                                                            1. 1

                                                                                                I have read them, but more importantly, I worked in and built services for places like that for about 5 years (Uganda, Bolivia, Tajikistan, rural India…).

                                                                                                I am with you that an HTTPS proxy is generally best avoided, if for no other reason than that it grows the attack surface. I disagree that removing HTTPS increases security. It adds a lot more places and actors who can now negatively impact the user, in exchange for the user knowing this without being able to do much about it.

                                                                                              And that is even without going into which content is safe to be cached in a given environment.

                                                                                              1. 1

                                                                                                And that is even without going into which content is safe to be cached in a given environment.

                                                                                                Yes, this is the best objection I’ve read so far.

                                                                                                  As always it’s a matter of trade-offs. In a previous related thread I described how I would try to fix the issue in a way that people can easily opt out and opt in.

                                                                                                  But while I think it would be weird to remove HTTPS for an e-commerce cart or for a political forum, I think that most of Wikipedia should be served over both HTTP and HTTPS. People should be aware that HTTP pages are not secure (even though it all depends on your threat model…) but should not be misled into thinking that pages going through a MitM proxy are secure.

                                                                                      2. 2

                                                                                        HTTPS proxy isn’t incompetence, it’s industry standard.

                                                                                          They solve a number of problems and are basically standard in almost all corporate networks with a minimum security level. They aren’t a weak link in the chain, since traffic in front of the proxy is HTTPS and traffic behind it stays in the local network, encrypted by a network-level CA (you can restrict CA capabilities via TLS cert extensions; there is a fair number of useful ones that prevent compromise).

                                                                                          Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to the device, and at that level there is no reason to consider what the user is doing insecure.

                                                                                        1. 2

                                                                                            Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to the device, and at that level there is no reason to consider what the user is doing insecure.

                                                                                          Browsers bypass the network configuration to protect the users’ privacy.
                                                                                          (I agree this is stupid, but they are trying to push this anyway)

                                                                                            The point is: the user’s security is at risk whenever she sees something that is not secure presented as HTTPS (which stands for “HTTP Secure”). It’s a rather simple and verifiable fact.

                                                                                          It’s true that posing a threat to employees’ security is an industry standard. But it’s not a security solution. At least, not for the employees.

                                                                                          And, doing that in a school or a public library is dangerous and plain stupid.

                                                                                          1. 0

                                                                                              Nobody is posing a threat to employees’ security here. A corporation can in this case be regarded as a single entity, so terminating SSL at the borders of the entity, similar to how a browser terminates SSL by showing the website on a screen, is fairly valid.

                                                                                              Schools and public libraries usually have the internet filtered, yes; that is usually made clear to the user beforehand (at least when I wanted access to either, I was in both cases instructed that the network is supervised and filtered), which IMO negates the potential security compromise.

                                                                                            Browsers bypass the network configuration to protect the users’ privacy.

                                                                                            Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.

                                                                                            1. 1

                                                                                                Schools and public libraries usually have the internet filtered, yes; that is usually made clear to the user beforehand [..] which IMO negates the potential security compromise.

                                                                                              Yes this is true.

                                                                                              If people are kept constantly aware of the presence of a transparent HTTPS proxy/MitM, I have no objection to its use instead of an HTTP proxy for caching purposes. Marking all pages as “Not Secure” is a good way to gain such awareness.

                                                                                              Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.

                                                                                              Did you know about Firefox’s DoH/CloudFlare affair?

                                                                                              1. 2

                                                                                                  Yes, I’m aware of the “affair”. To my knowledge the initial DoH experiment was localized and run on users who had enabled studies (opt-in). Both during the experiment and now, Mozilla has a contract with CloudFlare to protect user privacy during queries when DoH is enabled (which, to my knowledge, it isn’t by default). In fact, the problem ungleich is blogging about isn’t even slated for standard release yet.

                                                                                                  It’s plain old wrong in the bad kind of way; it conflates security maximalism with the mission of Mozilla to bring the maximum amount of users privacy and security.
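
                                                                                                  For context, DoH itself is mundane; it’s an ordinary DNS query carried over HTTPS. A sketch against Cloudflare’s JSON flavor of the endpoint:

                                                                                                      // Resolve a name over HTTPS instead of port-53 UDP.
                                                                                                      async function resolveOverHttps(name) {
                                                                                                        const res = await fetch(
                                                                                                          `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`,
                                                                                                          { headers: { accept: 'application/dns-json' } }
                                                                                                        );
                                                                                                        const body = await res.json();
                                                                                                        return (body.Answer || []).map((a) => a.data); // resolved addresses
                                                                                                      }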

                                                                                                1. 1

                                                                                                    TBH, I don’t know what you mean by “security maximalism”.

                                                                                                    I think ungleich raises serious concerns that should be taken into account before shipping DoH to the masses.

                                                                                                    Mozilla has a contract with CloudFlare to protect user privacy

                                                                                                    It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                                                                                    AFAIK, even Facebook had a contract with its users.

                                                                                                  Yeah.. I know… they will “do no evil”…

                                                                                                  1. 1

                                                                                                      Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                                                                                      It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                                                                                    Cloudflare hasn’t done much that makes me believe they will violate my privacy. They’re not in the business of selling data to advertisers.

                                                                                                      AFAIK, even Facebook had a contract with its users

                                                                                                      Facebook used Dark Patterns to get users to willingly agree to terms they would otherwise never agree to; I don’t think this is comparable. Facebook likely never violated the contract terms with their users that way.

                                                                                                    1. 1

                                                                                                        Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                                                                                        You should define “common user”.
                                                                                                        If you mean the politically inept, who are happy to be easily manipulated as long as they are given something to say and retweet… yes, they have nothing to fear.
                                                                                                        The problem is for those people who are actually useful to society.

                                                                                                      Cloudflare hasn’t done much that makes me believe they will violate my privacy.

                                                                                                      The problem with Cloudflare is not what they did, it’s what they could do.
                                                                                                      There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                                                                                        But my concerns are with Mozilla.
                                                                                                        They are trusted by millions of people worldwide. Me included. But actually, I’m starting to think they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                                                      1. 1

                                                                                                          So in your opinion, the average user does not deserve the protection of being able to browse the net as safely as we can make it for them?

                                                                                                          Just because you think they aren’t useful to society (and they are; these people have all the important jobs, and someone isn’t useless because they can’t use a computer) doesn’t mean we, as software engineers, should abandon them.

                                                                                                        There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                                                                                          Then don’t use it? DoH isn’t going to be enabled by default in the near future, and any UI plans for now make it opt-in and configurable. The “Cloudflare as default” setup is strictly for tests and users that opt into this.

                                                                                                          they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                                                        You mean safe because everyone involved knows what’s happening?

                                                                                                        1. 1

                                                                                                          I don’t believe the concerns are really concerns for the common user.

                                                                                                          You should define “common user”.
                                                                                                            If you mean the politically inept, who are happy to be easily manipulated…

                                                                                                            So in your opinion, the average user does not deserve the protection of being able to browse the net as safely as we can make it for them?

                                                                                                            I’m not sure if you are serious or just pretending not to understand to cope with your lack of arguments.
                                                                                                            Let’s assume the former… for now.

                                                                                                            I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept. That’s obviously because anyone politically inept is unlikely to be affected by surveillance.
                                                                                                            That’s it.

                                                                                                            they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                                                          You mean safe because everyone involved knows what’s happening?

                                                                                                            Really?
                                                                                                            Are you sure everyone understands what a MitM attack is? Are you sure every employee understands that their system administrators can see the mail they read on GMail? I think you don’t have much experience with users, and I hope you don’t design user interfaces.

                                                                                                          A MitM caching HTTPS proxy is not safe. It can be useful for corporate surveillance, but it’s not safe for users. And it extends the attack surface, both for the users and the company.

                                                                                                            As for Mozilla: as I said, I’m just not sure whether they deserve trust or not.
                                                                                                            I hope they do! Really! But it’s really too naive to think that a contract binds a company more than a subpoena does. And they ship WebAssembly. And you have to edit about:config to disable JavaScript…
                                                                                                            All this is very suspect for a company that claims to care about users’ privacy!

                                                                                                          1. 0

                                                                                                            I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept.

                                                                                                              I’m saying the concerns raised by ungleich are too extreme and should be dismissed on the grounds of not being practical in the real world.

                                                                                                              Are you sure everyone understands what a MitM attack is?

                                                                                                              An attack requires an adversary, the evil one. An HTTPS caching proxy isn’t evil or an enemy; you have to opt into this behaviour. It is not an attack, and I think it’s not fair to characterise it as such.

                                                                                                              Are you sure every employee understands that their system administrators can see the mail they read on GMail?

                                                                                                            Yes. When I signed my work contract this was specifically pointed out and made clear in writing. I see no problem with that.

                                                                                                            And it extends the attack surface, both for the users and the company.

                                                                                                              And it also enables caching for users with less-than-stellar bandwidth (think third-world countries where satellite internet is common: 500ms ping, 80% packet loss, 1 Mbps… you want caching for the entire network, even with HTTPS).

                                                                                                            And they ship WebAssembly.

                                                                                                            And? I have no concerns about WebAssembly. It’s no worse than obfuscated JavaScript, and it doesn’t enable anything that wasn’t possible before via asm.js. The post you linked is another security-maximalist opinion piece with few factual arguments.

                                                                                                            And you have to edit about:config to disable JavaScript…

                                                                                                            Or install a half-way competent script blocker like uMatrix.

                                                                                                            All this is very suspect for a company that claims to care about users’ privacy!

                                                                                                            I think it’s understandable for a company that both cares about users’ privacy and doesn’t want a market share of “only security maximalists”, also known as 0%.

                                                                                                            1. 1

                                                                                                              An attack requires an adversary, the evil one.

                                                                                                              According to this argument, you don’t need HTTPS as long as you don’t have an enemy.
                                                                                                              It shows very well your understanding of security.

                                                                                                              The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                                                                                              I have on concerns about WebAssembly.

                                                                                                              Not a surprise.

                                                                                                              Evidently you have never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).

                                                                                                              Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.

                                                                                                              As for packet loss and caching: you didn’t read what I wrote, and I won’t feed you more.

                                                                                                              1. 1

                                                                                                                According to this argument, you don’t need HTTPS as long as you don’t have an enemy.

                                                                                                                If there is no adversary, no Mallory in the connection, then there is no reason to encrypt it either, correct.

                                                                                                                It shows very well your understanding of security.

                                                                                                                My understanding of security is based on threat models. A threat model includes who you trust, who you want to talk to, and who you don’t trust. It includes how much money you want to spend, how much your attacker can spend, and the methods available to both of you.

                                                                                                                Security is not binary: a threat model is the entry point, and your protection mechanisms should match it as closely as possible or exceed it, but there is no reason to exert effort beyond your threat model.
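
                                                                                                                To make that concrete, here is a toy C++ sketch (the names and fields are illustrative, not any formal methodology): every control should answer a listed adversary, and a trusted party is, by construction, not one.

                                                                                                                  #include <algorithm>
                                                                                                                  #include <cassert>
                                                                                                                  #include <map>
                                                                                                                  #include <string>
                                                                                                                  #include <vector>

                                                                                                                  // Illustrative only: a threat model as explicit data, so every
                                                                                                                  // control can be traced back to a listed adversary and nothing more.
                                                                                                                  struct ThreatModel {
                                                                                                                      std::vector<std::string> assets;       // what we protect
                                                                                                                      std::vector<std::string> trusted;      // deliberately inside the boundary
                                                                                                                      std::vector<std::string> adversaries;  // who we actually defend against
                                                                                                                      std::map<std::string, std::string> controls;  // adversary -> control
                                                                                                                  };

                                                                                                                  int main() {
                                                                                                                      ThreatModel office{
                                                                                                                          {"employee web traffic"},
                                                                                                                          {"corporate caching proxy (agreed to in the employment contract)"},
                                                                                                                          {"on-path attacker between proxy and origin"},
                                                                                                                          {{"on-path attacker between proxy and origin",
                                                                                                                            "TLS from the proxy to the origin server"}},
                                                                                                                      };
                                                                                                                      // Defending against the trusted proxy would address no adversary
                                                                                                                      // in the model, i.e. effort spent beyond the threat model.
                                                                                                                      for (const auto& entry : office.controls)
                                                                                                                          assert(std::find(office.adversaries.begin(), office.adversaries.end(),
                                                                                                                                           entry.first) != office.adversaries.end());
                                                                                                                  }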

                                                                                                                The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                                                                                                Mallory is a potential enemy. An HTTPS caching proxy operated by a corporation is not an enemy. It’s not Mallory; it’s Bob, Alice, and Eve, where Bob wants to send Alice a message, Alice works for Eve, and Eve wants to avoid having duplicate messages on the network, so Eve and Alice agree that caching the encrypted connection is worthwhile.

                                                                                                                Mallory sits between Eve and Bob, not between Bob and Alice.

                                                                                                                Evidently you have never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).

                                                                                                                I did; in those cases I either filed a GitHub issue (if the project was open source) or notified the company that shipped the JavaScript or the binary. Usually the bug then gets fixed.

                                                                                                                It’s not my duty or problem to debug web applications that I don’t develop.

                                                                                                                Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.

                                                                                                                Then don’t do it? Nobody is forcing you.

                                                                                                                As for packet loss and caching: you didn’t read what I wrote, and I won’t feed you more.

                                                                                                                I don’t think you appreciate that a practical problem such as a bad connection can outweigh a lot of potential security issues: you don’t have the time or the users’ patience to do it “properly”, and in most cases caching will be good enough for the average user.

                                                                                        2. 2

                                                                                          My point is that the problems of unencrypted HTTP and MitM’ed HTTPS are exactly the same. If one used to prefer the former because it can be easily cached, I can’t see how setting up the latter makes their security any worse.

                                                                                          1. 3

                                                                                            With HTTP you know it’s not secure. OTOH you might not be aware that your HTTPS connection to the server is not secure at all.

                                                                                            The lack of awareness makes MitM caching worse.

                                                                                    1. 3

                                                                                      Trying to install Mastodon

                                                                                      1. 6

                                                                                        If you want secure and rather fast x86, look at the Opteron 62xx and 63xx series. They are still pretty fast and not vulnerable to many of the recent CVEs. Coupled with Coreboot, they make for a nice desktop or server.

                                                                                        If you want something faster, more secure and are not limited to x86, POWER9 with Talos II motherboard is a great choice.

                                                                                        1. 8

                                                                                          It looks like a new single CPU Talos board is still $2500. I mean, that’s far cheaper than they were last time I looked, but still not entirely practical for many enthusiasts.

                                                                                          The biggest issue on other architectures is video decoding. A lot of decoders are written in x86_64-specific assembly; Itanium never had many codecs ported to EPIC, making it useless in the video-editing space. There are hardware decoders on a lot of AMD/NVIDIA GPUs, but then it comes down to drivers (amdgpu is open source, so you have a better shot there on POWER, but it’d be interesting to see if anyone has gotten that working).

                                                                                          1. 2

                                                                                            You can decode in hardware, but you generally don’t want to encode in hardware for editing: HW encoders produce worse quality than software encoders at the same bitrate.

                                                                                            Mesa support for decode on AMD is good; encode is starting to work, but it’s pretty bad right now (compared to the Windows drivers).

                                                                                            1. 2

                                                                                              Decoding isn’t the problem. All modern lossy codecs are strongly biased towards decode performance, and once you’re at reasonable data rates, CPUs handle it fine. Encoding would be misery, because all software encoders are laboriously hand-tuned for their target platform, and you really don’t want to use a hardware encoder unless you absolutely have to.

                                                                                            2. 3

                                                                                              The only reason you’d be stuck with x86 is if you’re running proprietary software and then chip backdoors are the least of your concerns.

                                                                                              1. 4

                                                                                                The only reason you’d be stuck with x86

                                                                                                When I last saw it debated, everyone agreed x86 beat all competitors on price/performance, mainly single-threaded. That’s especially important if you’re doing something CPU-bound that you can’t just throw cores at. One of the reasons is that only companies bringing in piles of money can afford a full-custom, multi-GHz, more-work-per-cycle design like Intel’s, AMD’s, and IBM’s. Although Raptor is selling IBM’s, Intel and AMD are still much cheaper.

                                                                                                1. 2

                                                                                                  Actually, POWER9 is MUCH cheaper: you can get an 18-core CPU for a far better price, and it has 72 threads instead of the 36 an 18-core Intel part gives you.

                                                                                                  1. 2

                                                                                                    That sounds pretty high-end. Is that true for regular desktop CPUs? For example, I built a friend a rig a year or so ago that could handle everything up to the best games of the time. It cost around $600. Can I get a gaming or multimedia-class POWER9 box for $600 new?

                                                                                                    1. 2

                                                                                                      No, certainly not. But you can look at it otherwise - the PC you assemble will be enough for you for 10-15 years, if you have enough money to pay now :)

                                                                                                      $600 PC will not make it for that long.
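
                                                                                                      For what it’s worth, the per-year arithmetic (a C++ sketch; the prices are the ones in this thread, but both lifetimes are assumptions you can vary):

                                                                                                        #include <cstdio>

                                                                                                        int main() {
                                                                                                            // Prices from this thread: ~$2,500 Talos II, ~$600 commodity PC.
                                                                                                            // The lifetimes below are assumptions, not measurements.
                                                                                                            const double talos_per_year = 2500.0 / 12.0;  // if it lasts ~12 years
                                                                                                            const double pc_per_year    = 600.0 / 4.0;    // if replaced every ~4 years
                                                                                                            std::printf("Talos II:     $%.0f/year\n", talos_per_year);  // ~$208
                                                                                                            std::printf("commodity PC: $%.0f/year\n", pc_per_year);     // ~$150
                                                                                                            // Break-even: the $600 PC would have to die roughly every
                                                                                                            // 600 / (2500/12) ~= 2.9 years for the Talos to win per year.
                                                                                                        }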

                                                                                                      1. 2

                                                                                                        “But you can look at it otherwise - the PC you assemble will be enough for you for 10-15 years, if you have enough money to pay now :)”

                                                                                                        The local dealership called me back. They said whoever wrote the comment I showed them should put in an application to the sales department. They might have nice commissions waiting for them if they can keep up that smooth combo of truth and BS. ;)

                                                                                                        “$600 PC will not make it for that long.”

                                                                                                        Back to being serious, maybe and maybe not. The PCs that work for about everything now get worse every year. What they get worse at depends on the year, though. The $600-700 rig was expected to fall behind on high-end games in a few years, play lots of performance-heavy stuff acceptably for a few years more, and do basic stuff fast enough for years after that. As an example (IIRC), both tedu and I each had a Core 2 Duo laptop for seven or more years, with them performing acceptably on about everything we did. I paid $800 for that laptop barely-used on eBay. I’m using a Celeron right now since I’m doing maintenance on that one. It was a cheap barter, it sucks in a lot of ways, and it still gets by. I can’t say I’d have a steady stream of such bargains with long-term usability on POWER9. Maybe we’ll get there after a few years.

                                                                                                        One other thing to note is that the Talos stuff is beta-quality, based on a review I read where they had issues with some components. Maybe the hardware could have similar issues that would require a replacement. That’s before considering hackers focusing on hardware now: I’m just talking vanilla problems. Until their combined HW/SW offering matures, I can’t be sure anything they sell me will last a year, much less 10-15.

                                                                                                2. 2

                                                                                                  Even though I’d swap my KGPE-D16 for Talos any minute, I simply can’t afford it. So I’m stuck with x86, but it’s not because of proprietary software.

                                                                                              1. 1

                                                                                                Wow I might have to start using Firefox for the first time in forever.

                                                                                                1. 13

                                                                                                  Don’t forget that performance enhancements, security enhancements, and increased hardware support all add to the size over what was done long ago with some UNIX or Linux. There’s cruft, but also necessary additions that appeared over time. I’m actually curious what a minimalist OS would look like if it had all the necessary or useful stuff. I’m especially curious whether it would still fit on a floppy.

                                                                                                  If not security or UNIX, my baseline for projects like this is MenuetOS. The UNIX alternative should try to match up in features, performance, and size.

                                                                                                  1. 13

                                                                                                    We already have a pretty minimalist OS with good security, and very little cruft: OpenBSD.

                                                                                                    1. 7

                                                                                                      The base set alone is over 100 MB, though. That’s a lot more than the OP wants.

                                                                                                      1. 5

                                                                                                        Can you fit it with a desktop experience on a floppy like MenuetOS or QNX Demo Disc? If not, it’s not as minimal as we’re talking about. I am curious how minimal OpenBSD could get while still usable for various things, though.

                                                                                                      2. 12

                                                                                                        A modern PC OS needs an ACPI bytecode interpreter, so it can’t be particularly small or simple. ACPI is a monstrosity.

                                                                                                        1. 2

                                                                                                          Re: enhancements, I’m thinking Nanix would be more single-purpose, like muLinux, as a desktop OS that rarely (or never) runs untrusted code (incl. JS) and supports only hardware that would be useful for that purpose, just what’s needed for a CLI.

                                                                                                          Given that Linux 2.0.36 (as used in muLinux), a very functional UNIX-like kernel, fit on a floppy with plenty of room to spare, I think it would be feasible to write a kernel with no focus on backward hardware or software compatibility that takes up the same amount of space.
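
                                                                                                          To give a feel for the scale involved, here’s a hypothetical freestanding sketch in C++ (not from muLinux; it still needs a bootloader such as GRUB with a Multiboot header, plus a linker script, to actually boot): the entry point of a toy kernel compiles to a few dozen bytes.

                                                                                                            // Hypothetical toy kernel entry: writes "Nanix" to the VGA text
                                                                                                            // buffer at 0xB8000 and halts. Build freestanding (-ffreestanding
                                                                                                            // -nostdlib); the boot scaffolding is deliberately omitted here.
                                                                                                            extern "C" void kmain() {
                                                                                                                volatile unsigned short* vga =
                                                                                                                    reinterpret_cast<volatile unsigned short*>(0xB8000);
                                                                                                                const char* msg = "Nanix";
                                                                                                                for (int i = 0; msg[i] != '\0'; ++i)
                                                                                                                    vga[i] = static_cast<unsigned short>(0x0700 | msg[i]); // grey on black
                                                                                                                for (;;) { /* spin: nothing left to schedule */ }
                                                                                                            }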

                                                                                                          1. 3

                                                                                                            Your OS or native apps won’t load files that were on the Internet or on hackable systems at some point? Or is it purely personal use with only outgoing data? Otherwise, it could be hit with some attacks: many come through things like documents, media files, etc. I can imagine scenarios where that isn’t a concern. What are your use cases?

                                                                                                            1. 5

                                                                                                              To be honest, my use cases are summed up in the following sentence:

                                                                                                              it might be a nice learning exercise to get a minimal UNIX-like kernel going and a sliver of a userspace

                                                                                                              But you’re right, there could be attacks. I just don’t see something like Nanix being in a place where security is of utmost importance, just a toy hobbyist OS.

                                                                                                              1. 4

                                                                                                                If that’s the use, then I hope you have a blast building it. :)

                                                                                                                1. 3

                                                                                                                  It pretty much sounds like what Linus said back then, though, so who knows? ;)

                                                                                                            2. 2

                                                                                                              Linux 2.0 didn’t have ACPI support. I doubt it will even run on modern hardware.

                                                                                                              1. 2

                                                                                                                It seems to work. I just booted the muLinux ISO (admittedly not the floppy; I don’t have what’s needed to make a virtual floppy image right now) in Hyper-V, and it runs fine, even showing 0% CPU usage at idle according to Hyper-V.

                                                                                                                1. 2

                                                                                                                  Running in a VM is not the same as running on hardware.

                                                                                                          1. 4

                                                                                                            Whoa, AWS will reboot your VM just because they’re doing maintenance on the host? What year is it?

                                                                                                            1. 2

                                                                                                              If you’re an Apple customer aren’t you supposed to be migrating to the iPad Pro?

                                                                                                              1. 4

                                                                                                                This isn’t even a funny joke.

                                                                                                                1. 6

                                                                                                                  Learning modern C++, with move-only semantics, rvalue references, and so on, let me understand the problem Rust is trying to solve.
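
                                                                                                                  A minimal sketch of that problem (the Buffer type is hypothetical): C++ can express unique ownership with move-only types, but it leaves “never touch a moved-from value” up to you, which is exactly what Rust checks at compile time.

                                                                                                                    #include <cstddef>
                                                                                                                    #include <memory>
                                                                                                                    #include <utility>
                                                                                                                    #include <vector>

                                                                                                                    struct Buffer {
                                                                                                                        // A unique_ptr member makes Buffer move-only: copying is
                                                                                                                        // deleted, moving transfers ownership of the allocation.
                                                                                                                        std::unique_ptr<int[]> data;
                                                                                                                        explicit Buffer(std::size_t n) : data(std::make_unique<int[]>(n)) {}
                                                                                                                    };

                                                                                                                    int main() {
                                                                                                                        Buffer a(1024);
                                                                                                                        Buffer b = std::move(a);   // ownership moves to b via an rvalue reference
                                                                                                                        // 'a' is now valid but empty; C++ happily compiles further uses
                                                                                                                        // of it, whereas Rust would reject them at compile time.
                                                                                                                        std::vector<Buffer> v;
                                                                                                                        v.push_back(std::move(b)); // no copy: the move constructor is used
                                                                                                                        return 0;
                                                                                                                    }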

                                                                                                                    1. 23

                                                                                                                      This is a bit disappointing. It feels a bit like we are walking into the situation OpenGL was built to avoid.

                                                                                                                      1. 7

                                                                                                                        To be honest we are already in that situation.

                                                                                                                        You can’t really use GL on mac; it’s been stuck at the D3D10 feature level for years and runs 2-3x slower than the same code under Linux on the same hardware.

                                                                                                                        It always seemed like a weird decision from Apple to have terrible GL support; if I were going to write a second render backend, I’d probably pick DX over Metal.

                                                                                                                        1. 6

                                                                                                                          I remain convinced that nobody really uses a Mac on macOS for anything serious.

                                                                                                                          And why pick DX over Metal when you can pick Vulkan over Metal?

                                                                                                                          1. 3

                                                                                                                            Virtually no gaming or VR is done on a mac. I assume the only devs to use Metal would be making video editors.

                                                                                                                            1. 1

                                                                                                                              This is a bit pedantic, but I play a lot of games on mac (mainly indie stuff built in Unity, since the “porting” is relatively easy), and several coworkers are also mac-only (or mac + console).

                                                                                                                              Granted, none of us are very interested in the AAA stuff, except a couple of games. But there’s definitely a (granted, small) market for this stuff. Luckily stuff like Unity means that even if the game only sells like 1k copies it’ll still be a good amount of money for “provide one extra binary from the engine exporter.”

                                                                                                                              The biggest issue is that Mac hardware isn’t shipping with anything powerful enough to run most games properly, even when you’re willing to spend a huge amount of money. So games like Hitman got ported, but you can only run them on the most expensive MBPs or iMac Pros. Meanwhile, sub-$1k Windows laptops can run the game (albeit not super well).

                                                                                                                            2. 2

                                                                                                                              I think Vulkan might not have been ready when Metal was first sketched out – and Apple does not usually like to compromise on technology ;)

                                                                                                                              1. 2

                                                                                                                                My recollection is that Metal appeared first (about June 2014), Mantle shipped shortly after (by a couple of months?), DX12 showed up in mid-2015, and then Vulkan showed up in February 2016.

                                                                                                                                I get a vague impression that Mantle never made tremendous headway (because who wants to rewrite their renderer for a super-fast graphics API that only works on the less popular GPU?) and DX12 seems to have made surprisingly little (because targeting an API that doesn’t work on Win7 probably doesn’t seem like a great investment right now, I guess? The current Steam survey shows Win10 at ~56% and Win7+8 at about 40% market share among people playing videogames).

                                                                                                                                1. 2

                                                                                                                                  Mantle got heavily retooled into Vulkan, IIRC.

                                                                                                                                  1. 1

                                                                                                                                    And there was much rejoicing. ♥

                                                                                                                        1. 1

                                                                                                                          I really miss when Apple keynotes announced interesting things.

                                                                                                                          1. -1

                                                                                                                            I’m so old.