1. 7

    Codes of Conduct are the best mechanism known today […]

    Does that mean it’ll be the best forever? Why shouldn’t we give the GNU KCG a try? Is it wrong to try a new and different approach?

The author even mentions that differences are acceptable (“[…] Governing well means working (and finding common ground) with those you disagree. […]”), yet the implication seems to be that if GNU, the FSF, or RMS does something the author doesn’t like or is politically against, they’re automatically wrong?

The entire text feels all over the place: it first seems to assert that what GNU/the FSF is doing is wrong, then explains that free speech is important and that they try to find a middle ground, and that disagreement is good? Yet despite disagreement being good, the FSF/GNU is wrong?

I honestly think we should observe the KCG for a year or three before we pass moral judgement on its effectiveness; simply saying it doesn’t work isn’t sufficiently backed by data (it’s only backed by the lack of data).

    1. 6

Of course it can be abused, but I expect the browser vendors are treating this feature like a live grenade.

On the flip side, together with WASM and other new browser tech, such a feature could enable many more apps that work natively on every operating system with a web browser.

      1. 21

        To start, the ZFS filesystem combines the typical filesystem with a volume manager. It includes protection against corruption, snapshots and copy-on-write clones, as well as volume manager.

        It continues to baffle me how “mainstream” filesystems like ext4 forgo checksumming of the data they contain. You’d think that combatting bitrot would be a priority for a filesystem.

        Ever wondered where did vi come from? The TCP/IP stack? Your beloved macOS from Apple? All this is coming from the FreeBSD project.

        Technically, vi and the BSD implementations of the TCP/IP stack can be attributed to 4.xBSD at UCB; FreeBSD is not the origin of either.

        1. 10

          It continues to baffle me how “mainstream” filesystems like ext4 forgo checksumming of the data they contain. You’d think that combatting bitrot would be a priority for a filesystem.

          At least ext4 supports metadata checksums:

          https://wiki.archlinux.org/index.php/ext4#Enabling_metadata_checksums

At any rate, Ted Ts’o (the ext[34] maintainer) has said as far back as 2009 that ext4 was meant to be a transitional technology:

          Despite the fact that Ext4 adds a number of compelling features to the filesystem, T’so doesn’t see it as a major step forward. He dismisses it as a rehash of outdated “1970s technology” and describes it as a conservative short-term solution. He believes that the way forward is Oracle’s open source Btrfs filesystem, which is designed to deliver significant improvements in scalability, reliability, and ease of management.

          https://arstechnica.com/information-technology/2009/04/linux-collaboration-summit-the-kernel-panel/

          Of course, the real failing here is not ext4, but that btrfs hasn’t been able to move to production use in more than ten years (at least according to some people).

          That said, ZFS works fine on Linux as well and some distributions (e.g. NixOS) support ZFS on root out-of-the-box.

          1. 3

            Of course, the real failing here is not ext4, but that btrfs hasn’t been able to move to production use in more than ten years (at least according to some people).

            I think it’s good to contrast “some people’s” opinion with the one from Facebook:

            it’s safe to say every request you make to Facebook.com is processed by 1 or more machines with a btrfs filesystem.

            Facebook’s open-source site:

            Btrfs has played a role in increasing efficiency and resource utilization in Facebook’s data centers in a number of different applications. Recently, Btrfs helped eliminate priority inversions caused by the journaling behavior of the previous filesystem, when used for I/O control with cgroup2 (described below). Btrfs is the only filesystem implementation that currently works with resource isolation, and it’s now deployed on millions of servers, driving significant efficiency gains.

But Facebook employs the btrfs project lead.

            There is also the fact that Google is now using BTRFS on Chromebooks with Crostini.

As for opinions, I’ve seen one that claims that “ZFS is more mature than btrfs ON SOLARIS. It is mostly ok on FreeBSD (with various caveats) and I wouldn’t recommend it on Linux.”

            1. 2

              I wouldn’t recommend it on Linux.

              I’d still say that ZFS is more usable than lvm & linux-softraid. If only due to the more sane administration tooling :)

          2. 9

            Ext4, like most evolutions of existing filesystems, is strongly constrained by what the structure of on-disk data and the existing code allows it to do. Generally there is no space for on-disk checksums, especially for data; sometimes you can smuggle some metadata checksums into unused fields in things like inodes. Filesystems designed from the ground up for checksums build space for checksums into their on-disk data structures and also design their code’s data processing pipelines so there are natural central places to calculate and check checksums. The existing structure of the code matters too because when you’re evolving a filesystem, the last thing you want to do is to totally rewrite and restructure that existing battle-tested code with decade(s) of experience embedded into it; if you’re going to do that, you might as well start from scratch with an entirely new filesystem.

            In short: that ext4 doesn’t have checksums isn’t surprising; it’s a natural result of ext4 being a backwards compatible evolution of ext3, which was an evolution of ext2, and so on.

            1. 4

              It continues to baffle me how “mainstream” filesystems like ext4 forgo checksumming of the data they contain. You’d think that combatting bitrot would be a priority for a filesystem.

Ext4 doesn’t aim to be that type of filesystem. For desktop use by the average user this is fairly okay, since actual bitrot in data the user cares about is rare (most bitrot occurs in system files, in empty space, or in media files where a single corrupt frame barely matters).

If you want to check out a more modern alternative, there is bcachefs. I’ve been using it on my laptop for a while (I stopped for a bit, but now I’m back on it) and it’s been basically rock solid. The developer is also working on erasure coding and replication in a more solid way than btrfs currently offers.

            1. 9

Plume might be a better replacement for Tumblr once it is production-ready; Mastodon aims to be more like Twitter, so its culture and interface are tailored to very short posts. There is also PixelFed, which is more similar to Instagram.

              Artists and such might not find Mastodon very useful compared to Tumblr for their purposes.

              1. 5

                TL;DR “browser could serve malicious code”, “you could set your password to something very weak”, “you could reuse your password” and “someone could attempt to bruteforce your password”.

As the paper points out, the smartphone app is unaffected by the browser issue. The paper even concludes that “the browser could send malicious code” is unfixable for any webmail application. You need to trust the code the server is sending at least once.

The recommendation regarding the encrypted private key is a bit weird; one of the goals of PM is mass appeal, and having local-device-only keys that need to be shared via QR code and can be lost degrades the experience to that of GPG (which is beyond subpar).

I don’t think this paper really explores anything interesting that we didn’t already know (see my TL;DR).

                1. 16

                  Hello, I’m the author of this paper. This TL;DR is not fully representative of the paper. Here are some of the things it missed:

A crucial element of the work is to compare ProtonMail’s architecture against their own stated security goals, cited from their specification and materials. ProtonMail’s stated security model assumes that the ProtonMail server itself is compromised, and this is cited in the text. In that context, it is indeed true that the webmail application simply cannot provide end-to-end encryption as, again, defined by ProtonMail themselves.

                  The results hence go beyond the browser sending malicious code and also show that ProtonMail’s “encrypt-to-outside” feature (which allows sending encrypted emails to someone who uses Gmail or Outlook or whatever and allows that recipient to also send a reply) renders the sent email and the reply both decryptable not only by ProtonMail but also by the third-party mail provider (Gmail, etc.)

                  The paper also shows that ProtonMail’s claims of offering zero-knowledge authentication are not actually achieved and that ProtonMail’s servers retain a password oracle for the user.

                  The findings in the paper were not meant to be “anything new” as much as they are meant to be the first formal analysis of what ProtonMail is actually achieving with regards to its stated security goals. The fact that no such analysis previously existed was what motivated this work.

                  1. 3

While I agree that PM probably overpromises a bit, it’s still not entirely new knowledge that the web browser has to trust the server at some point. Any webapp is vulnerable to this, so I don’t find it very interesting or notable. There are the Bridge and the phone client, which both avoid this issue.

I’m not sure what they promise for the symmetric feature; I’ve been using the PGP mode since it was released, so outgoing mail uses PGP instead of the symmetric mode where possible and all mail is signed.

                    I’m not sure how one would solve the password oracle problem in this case, since you have to somehow encrypt the PGP key unless you want to degrade usability.

                    1. 1

                      ProtonMail’s “encrypt-to-outside” feature (which allows sending encrypted emails to someone who uses Gmail or Outlook or whatever and allows that recipient to also send a reply) renders the sent email and the reply both decryptable not only by ProtonMail but also by the third-party mail provider (Gmail, etc.)

Fortunately it’s possible to attach the recipient’s PGP key in ProtonMail and use proper OpenPGP encryption (and soon enough the external key will be discovered automatically). I consider the “encrypt-to-outside” feature with a PSK a lowest-common-denominator solution: it works with everything but is fundamentally flawed.

                  1. 2

                    On one hand: I agree that DNS-over-HTTPS is a silly and convoluted solution.

                    On the other hand: DNS-over-TLS is a bad solution for the reason pointed out: it lives on its own port.

Question: Why do we need ports any more at all? It seems like if we didn’t have dedicated port numbers, but instead referred to resources by subdomain or subdirectory beneath the main hostname, then all traffic would be indistinguishable when secured by TLS.

                    1. 4

Could it have been possible for DNS-over-TLS to use 443 and make the server able to route DNS and HTTP requests appropriately? I’m not very knowledgeable about TLS. From what I understand it’s just a transport layer, so a server could simply read the beginning of an incoming message and easily detect whether it is an HTTP or DNS header?

                      1. 9

Yes, much like HTTP/2 works. It complicates the TLS connection because now it passes a hint about the service it wants, but that bridge is already crossed.
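In practice that hint is ALPN, the same mechanism HTTP/2 uses to negotiate “h2”. A minimal Python sketch of server-side demultiplexing on one port; the cert/key paths are placeholders, and “dot” as the ALPN label for the DNS side is the registered ID if I recall correctly:

```python
import socket
import ssl

# a sketch, not production code: cert paths and the port are assumptions
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")    # hypothetical cert/key pair
ctx.set_alpn_protocols(["h2", "http/1.1", "dot"])  # advertise HTTP and DNS-over-TLS

with socket.create_server(("", 443)) as srv:
    conn, _addr = srv.accept()
    tls = ctx.wrap_socket(conn, server_side=True)
    if tls.selected_alpn_protocol() == "dot":
        ...  # hand the stream to the DNS responder
    else:
        ...  # hand the stream to the HTTP stack
```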

                      2. 4

IP addresses allow two arbitrary computers to exchange information [1], whereas ports allow two arbitrary programs (or processes) to exchange information. Also, it’s TCP and UDP that have ports. There are other protocols that ride on top of IP (not that anyone cares anymore).

                        [1] Well, in theory anyway, NAT breaks that to some degree.

                        1. 3

Ports are kinda central to packet routing as it has been deployed, if my understanding is correct.

                          1. 5

                            You need the concept of ports to route packets to the appropriate process, certainly. However, with DNS SRV records, you don’t need globally-agreed-upon port assignments (a la “HTTP goes to port 80”). You could assign arbitrary ports to services and direct clients accordingly with SRV.

                            Support for this is very incomplete (e.g. browsers go to port 80/443 on the A/AAAA record for a domain rather than querying for SRVs), but the infrastructure is in place.
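For the curious, a resolution sketch in Python using the third-party dnspython package; the service name _myservice._tcp.example.com is a made-up placeholder:

```python
import dns.resolver  # third-party package: dnspython

# ask DNS which host *and* port serve the (hypothetical) service
answers = dns.resolver.resolve("_myservice._tcp.example.com", "SRV")
for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(rr.target, rr.port)  # connect here instead of a well-known port
```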

                            1. 5

                              On what port do I send the DNS query for the SRV record of my DNS server?

                              1. 1

                                Obviously, you look up an SRV record to determine which port DNS is served over. ;)

                                I don’t know if anyone has thought about the bootstrapping problem. In theory, you could deal with it the same way you already bootstrap your DNS (DHCP or including the port with the IP address in static configurations), but I don’t know if this is actually possible.

                              2. 2

                                You need the concept of ports to route packets to the appropriate process

Unless we assign an IP address to every web-facing process.

                            2. 1

                              Problem: both solutions to private DNS queries have downsides related to the DNS protocol fundamentally having failed to envision a need for privacy

                              Solution: radically overhaul the transport layer by replacing both TCP and UDP with something portless?

The suggested cure is worse than the disease in this case, in terms of the sheer amount of work (and the wholesale replacement of hardware and software) it would require.

                              1. 2

                                I don’t think DNS is the right place to do privacy. If I’m on someone’s network, he can see what IP addresses I’m talking to. I can hide my DNS traffic, but he still gets to see the IP addresses I ultimately end up contacting.

                                Trying to add privacy at the DNS stage is doing it at the wrong layer. If I want privacy, I need it at the IP layer.

                                1. 4

                                  Assuming that looking up an A record and making a connection to that IP is the only thing DNS is used for.

                                  1. 3

Think of CDN or “big websites” traffic. If you hit Google, Amazon, or Cloudflare datacenters, nobody will be able to tell if you were reaching google.com, amazon.com, cloudflare.com or any of their customers.

Currently, this is leaking through SNI and DNS. DoH and Encrypted SNI (ESNI) will improve on the status quo.

                                    1. 2

                                      And totally screws small sites. Or is the end game centralization of all web sites to a few hosts to “protect” the privacy of users?

                                      1. 2

                                        You can also self-host more than one domain on your site. In fact, I do too. It’s just a smaller set :-)

                                        1. 1

                                          End game would be VPNs or Tor.

                                        2. 2

Is that really true? I thought request/response metadata and timing analysis could tell them who we were connecting to.

                                          1. 2

                                            Depends who they are. I’m not going to do a full traffic dump, then try to correlate packet timings to discover whether you were loading gmail or facebook. But tcpdump port 53 is something I’ve actually done to discover what’s talking to where.

                                            1. 1

True. Maybe ESNI and DoH only increase the required work. Needs more research?

                                              1. 1

Probably, to be on the safe side. I’d run it by experts in correlation analysis of network traffic. They might already have something for it.

                                            2. 2

nobody will be able to tell if you were reaching google.com, amazon.com, cloudflare.com or any of their customers.

except for GOOGL, AMZN, et al., which will happily give away your data without even flinching.

                                              1. 1

                                                Yeah, depends on who you want to exclude from snooping on your traffic. The ISP, I assumed. The Googles and Amazons of the world have your data regardless of DNS/DoH.

                                                I acknowledge that the circumstances are different in every country, but in the US, the major ISPs actually own ad networks and thus have a strong incentive not to ever encrypt DNS traffic.

                                                1. 1

                                                  Yeah, depends on who you want to exclude from snooping on your traffic. The ISP, I assumed. The Googles and Amazons of the world have your data regardless of DNS/DoH.

                                                  so i’m supposed to just give them full access over the remaining part which isn’t served by them?

                                                  I acknowledge that the circumstances are different in every country, but in the US, the major ISPs actually own ad networks and thus have a strong incentive not to ever encrypt DNS traffic.

                                                  ISPs in the rest of the world aren’t better, but this still isn’t a reason to shoehorn DNS into HTTP.

                                                  1. 1

No, you’re misreading the first bit. You’re already giving it to them, most likely, because of all those cloud customers. This makes their main web property indistinguishable from their clients, once SNI and DNS are encrypted.

                                                    No need to give more than before.

                                                    1. 1

You’re already giving it to them, most likely, because of all those cloud customers.

                                                      this is a faux reason. i try to not use these things when possible. just because many things are there, it doesn’t mean that i have to use even more stuff of them, quite the opposite. this may be an inconvenience for me, but it is one i’m willing to take.

This makes their main web property indistinguishable from their clients, once SNI and DNS are encrypted.

                                                      indistinguishable for everybody on the way, but not for the big ad companies on whose systems things are. those are what i’m worried about.

                                                      1. 1

Hm, I feel we’re going in circles here.

                                                        For those people who do use those services, there is an immediate gain in terms of hostname privacy (towards their ISP), once DoH and ESNI are shipped.

                                                        That’s all I’m saying. I’m not implying you do or you should.

                                                        1. 1

                                                          I’m not implying you do or you should.

no, but the implications of DoH are that i’ll end up using it, even if i don’t want to. it’ll be baked into the browsers, and from there it’s only a small step to mandatory usage in systemd. regarding DoH in general: if all you have is http, everything looks like a nail.

                                          2. 1

                                            Alternative solution: don’t use DNS anymore.

                                            Still lots of work since we need to ditch HTTP, HTTPS, FTP, and a host of other host-oriented protocols. But, for many of these, we’ve got well-supported alternatives already. The question of how to slightly improve a horribly-flawed system stuck in a set of political deadlocks becomes totally obviated.

                                            1. 3

                                              That’s the biggest change of all of them. The whole reason for using DoH is to have a small change, that improves things, and that doesn’t require literally replacing the entire web.

                                              1. 1

                                                Sure, but it’s sort of a waste of time to try to preserve the web. The biggest problem with DNS is that most of the time the actual hostname is totally irrelevant to our purposes & we only care about it because the application-layer protocol we’re using was poorly designed.

                                                We’re going to need to fix that eventually so why not do it now, ipv6-style (i.e., make a parallel set of protocols that actually do the right thing & hang out there for a couple decades while the people using the old ones slowly run out of incremental fixes and start to notice the dead end they’re heading toward).

Myopic folks aren’t going to adopt large-scale improvements until they have no other choice, but as soon as they have no other choice they’re quick to adopt an existing solution. We’re better off already having made one they can adopt, because if we let them design their own it’s not going to last any longer than the last one.

                                                DNS is baked into everything, despite being a clearly bad idea, because it was well-established. Well, IPFS is well-established now, so we can start using it for new projects and treating DNS as legacy for everything that’s not basically ssh.

                                                1. 8

                                                  Well, IPFS is well-established now

                                                  No it’s not. Even by computer standards, IPFS is still a baby.

                                                  Skype was probably the most well-established P2P application in the world before they switched to being a reskinned MSN Messenger, and the Skype P2P network had disasters just like centralized services have, caused by netsplits, client bugs, and introduction point issues. BitTorrent probably holds the crown for most well-established P2P network now, and since it’s shared-nothing (the DHT isn’t, but BitTorrent can operate without it), has never had network-wide disasters. IPFS relies on the DHT, so it’s more like Skype than BitTorrent for reliability.

                                                  1. 0

                                                    It’s only ten years old, sure. I haven’t seen any reliability problems with it. Have you?

                                                    DHT tech, on top of being an actually appropriate solution to the problem of addressing static chunks of data (one that eliminates whole classes of attacks by its very nature), is more reliable now than DNS is. And, we have plenty of implementations and protocols to choose from.

                                                    Dropping IPFS or some other DHT into an existing system (like a browser) is straightforward. Opera did it years ago. Beaker does it now. There are pure-javascript implementations of DAT and IPFS for folks who can’t integrate it into their browser.

                                                    Skype isn’t a good comparison to a DHT, because Skype connects a pair of dynamic streams together. In other words, it can’t take advantage of redundant caching, so being P2P doesn’t really do it any favors aside from eliminating a single point of failure from the initial negotiation steps.

For transferring documents (or scripts, or blobs, or whatever), dynamism is a bug – and one we eliminate with named data. Static data is the norm for most of what we use the web for, and should be the norm for substantially more of it. We can trivially eliminate hostnames from all asset fetches, replace database blobs with similar asset fetches, use one-time pads for keeping secret resources secret while allowing anyone to fetch them, & start to look at ways of making services portable between machines. (I hear DAT has a solution to this last one.) All of this is stuff any random front-end developer can figure out without much nudging, because the hard work has been done & open sourced already.

                                                    1. 4

                                                      IPFS is not ten years old. Its initial commit is five years ago, and that was the start of the paper, not the implementation.

                                                      1. 1

                                                        Huh. I could have sworn it was presented back in 2010. I must be getting it confused with another DHT system.

                                                  2. 7

                                                    Sure, but it’s sort of a waste of time to try to preserve the web.

                                                    This is letting Perfect be the enemy of Good thinking. We can incrementally improve (imperfectly, true) privacy now. Throwing out everything and starting over with a completely new set of protocols is a multi-decade effort before we start seeing the benefits. We should improve the situation we’re in, not ignore it while fantasizing about being in some other situation that won’t arrive for many years.

                                                    The biggest problem with DNS is that most of the time the actual hostname is totally irrelevant to our purposes & we only care about it because the application-layer protocol we’re using was poorly designed.

                                                    This hasn’t been true since Virtual Hosting and SNI became a thing. DNS contains (and leaks) information about exactly who we’re talking to that an IP address doesn’t.

                                                    1. 2

                                                      This is letting Perfect be the enemy of Good thinking. We can incrementally improve (imperfectly, true) privacy now.

                                                      We can also take advantage of low-hanging fruit that circumvent the tarpit that is incremental improvements to DNS now.

                                                      The perfect isn’t the enemy of the good here. This is merely a matter of what looks like a good idea on a six month timeline versus what looks like a good idea on a two year timeline. And, we can guarantee that folks will work on incremental improvements to DNS endlessly, even if we are not those folks.

                                                      Throwing out everything and starting over with a completely new set of protocols is a multi-decade effort before we start seeing the benefits.

                                                      Luckily, it’s an effort that started almost two decades ago, & we’re ready to reap the benefits of it.

                                                      DNS contains (and leaks) information about exactly who we’re talking to that an IP address doesn’t.

                                                      That’s not a reason to keep it.

                                                      Permanently associating any kind of host information (be it hostname or DNS name or IP) with a chunk of data & exposing that association to the user is a mistake. It’s an entanglement of distinct concerns based on false assumptions about DNS permanence, and it makes the whole domain name & data center rent-seeking complex inevitable. The fact that DNS is insecure is among its lesser problems; it should not have been relied upon in the first place.

                                                      The faster we make it irrelevant the better, & this can be done incrementally and from the application layer.

                                                    2. 2

                                                      But why would IPFS solve it?

                                                      Replacing every hostname with a hash doesn’t seem very user-friendly to me and last I checked, you can trivially sniff out what content someone is loading by inspecting the requested hashes from the network.

                                                      IPFS isn’t mature either, it’s not even a decade old and most middleboxes will start blocking it once people start using it for illegitimate purposes. There is no plan to circumvent blocking by middleboxes, not even after that stunt with putting wikipedia on IPFS.

                                                      1. 1

IPFS doesn’t replace hostnames with hashes. It uses hashes as host-agnostic document addresses.

                                                        Identifying hosts is not directly relevant to grabbing documents, and so baking hostnames into document addresses mixes two levels of abstractions, with undesirable side effects (like dependence upon DNS and server farms to provide absurd uptime guarantees).

                                                        IPFS is one example of distributed permanent addressing. There are a lot of implementations – most relying upon hashes, since hashes provide a convenient mechanism for producing practically-unique addresses without collusion, but some using other mechanisms.

                                                        The point is that once you have permanent addresses for static documents, all clients can become servers & you start getting situations where accidentally slashdotting a site is impossible because the more people try to access it the more redundancy there is in its hosting. You remove some of the hairiest problems with caching, because while you can flush things out of a cache, the copy in cache is never invalidated by changes, because the object at a particular permanent address is immutable.

                                                        Problems (particularly with web-tech) that smart folks have been trying to solve with elaborate hacks for decades become trivial when we make addresses permanent, because complications like DNS become irrelevant.
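As a toy illustration of the permanent-address idea (my own sketch, not how IPFS is actually implemented): the address is derived from the content itself, so whatever comes back can be verified against the address and can never be silently swapped out.

```python
import hashlib

store = {}  # an in-memory stand-in for a network of peers

def put(data: bytes) -> str:
    address = hashlib.sha256(data).hexdigest()  # the content *is* the address
    store[address] = data
    return address

def get(address: str) -> bytes:
    data = store[address]
    # any peer, however untrusted, can prove it handed back the right bytes
    assert hashlib.sha256(data).hexdigest() == address
    return data

addr = put(b"hello, permanent web")
print(addr, get(addr))
```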

                                                        1. 1

                                                          And other problems become hard like “how do I have my content still online in 20 years?”.

                                                          IPFS doesn’t address the issues it should be addressing, using hashes everywhere being one of them making it particularly user-unfriendly (possibly even user-hostile).

                                                          IPFS doesn’t act like a proper cache either (unless their eviction strategy has significantly evolved to be more cooperative) in addition to leaking data everywhere.

                                                          Torrent and dat:// solve the problem much better and don’t over-advertise their capabilities.

                                                          Nobody really needs permanent addressing, what they really need is either a Tor onion address or actually cashing out for a proper webserver (where IPFS also won’t help if your content is dynamic, it’ll make things only more javascript heavy than they already are).

                                                          1. 1

                                                            how do I have my content still online in 20 years?

                                                            If you want to guarantee persistence of content over long periods, you will need to continue to host it (or have it hosted on your behalf), just as you would with host-based addressing. The difference is that your host machine can be puny because it’s no longer a single point of failure under traffic load: as requests increase linearly, the likelihood of any request being directed to your host decreases geometrically (with a slow decay via cache eviction).

                                                            IPFS doesn’t address the issues it should be addressing, using hashes everywhere being one of them making it particularly user-unfriendly (possibly even user-hostile).

                                                            I would absolutely support a pet-name system on top of IPFS. Hashes are convenient for a number of reasons, but IPFS is only one example of a relatively-mature named-data-oriented solution to permanent addressing. It’s minimal & has good support for putting new policies on top of it, so integrating it into applications that have their own caching and name policies is convenient.

                                                            IPFS doesn’t act like a proper cache either (unless their eviction strategy has significantly evolved to be more cooperative) in addition to leaking data everywhere.

                                                            Most caches have forced eviction based on mutability. Mutability is not a feature of systems that use permanent addressing. That said, I would like to see IPFS clients outfitted with a replication system that forces peers to cache copies of a hash when it is being automatically flushed if an insufficient number of peers already have it (in order to address problem #1) as well as a store-and-forward mode (likewise).

                                                            Torrent and dat:// solve the problem much better and don’t over-advertise their capabilities.

                                                            Torrent has unfortunately already become a popular target for blocking. I would personally welcome sharing caches over DHT by default over heavy adoption of IPFS since it requires less additional work to solve certain technical problems (or, better yet, DHT sharing of IPFS pinned items – we get permanent addresses and seed/leech metrics), but for political reasons that ship has probably sailed. DAT seems not to solve the permanent address problem at all, although it at least decentralizes services; I haven’t looked too deeply into it, but it could be viable.

                                                            Nobody really needs permanent addressing,

                                                            Early web standards assume but do not enforce that addresses are permanent. Every 404 is a fundamental violation of the promise of hypertext. The fact that we can’t depend upon addresses to be truly permanent has made the absurd superstructure of web tech inevitable – and it’s unnecessary.

                                                            what they really need is either a Tor onion address

                                                            An onion address just hides traffic. It doesn’t address the single point of failure in terms of a single set of hosts.

                                                            or actually cashing out for a proper webserver

A proper web server, though relatively cheap, is more expensive and requires more technical skill to run than is necessary or desirable. It also represents a chain of single points of failure: a domain can be seized (by a state or by anybody who can social-engineer GoDaddy or perform DNS poisoning attacks), while a host will go down under high load (or have its contents changed if somebody gets write access to the disk). Permanent addresses solve the availability problem in the case of load or active threat, while hash-based permanent addresses solve the correctness problem.

                                                            where IPFS also won’t help if your content is dynamic,

                                                            Truly dynamic content is relatively rare (hence the popularity of cloudflare and akamai), and even less dynamic content actually needs to be dynamic. We ought to minimize it for the same reasons we minimize mutability in functional-style code. Mutability creates all manner of complications that make certain kinds of desirable guarantees difficult or impossible.

                                                            Signature chains provide a convenient way of adding simulated mutability to immutable objects (sort of like how monads do) in a distributed way. A more radical way of handling mutability – one that would require more infrastructure on top of IPFS but would probably be amenable to use with other protocols – is to support append-only streams & construct objects from slices of that append-only stream (what was called a ‘permascroll’ in Xanadu from 2006-2014). This stuff would need to be implemented, but it would not need to be invented – and inventing is the hard part.
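A rough toy sketch of that append-only idea (mine, not Xanadu’s or DAT’s actual design): writes only ever extend the log, and a document version is just a list of slices into it, so every older version stays addressable.

```python
# writes only ever extend the log; nothing is overwritten
log = bytearray()

def append(data: bytes) -> tuple:
    offset = len(log)
    log.extend(data)
    return (offset, len(data))

def materialize(slices: list) -> bytes:
    # a "version" is just a list of (offset, length) slices into the log
    return b"".join(bytes(log[o:o + n]) for o, n in slices)

v1 = [append(b"hello "), append(b"world")]
v2 = v1[:1] + [append(b"lobsters")]   # an "edit" reuses the unchanged slice
print(materialize(v1), materialize(v2))
```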

                                                            it’ll make things only more javascript heavy than they already are

                                                            Only if we stick to web tech, and then only if we don’t think carefully and clearly about how best to design these systems. (Unfortunately, endemic lack of forethought is really the underlying problem here, rather than any particular technology. It’s possible to use even complete trash in a sensible and productive way.)

                                                            1. 1

                                                              The difference is that your host machine can be puny because it’s no longer a single point of failure under traffic load: as requests increase linearly, the likelihood of any request being directed to your host decreases geometrically (with a slow decay via cache eviction).

                                                              I don’t think this is a problem that needs addressing. Static content like the type that IPFS serves can be cheaply served to a lot of customers without needing a fancy CDN. An RPi on a home connection should be able to handle 4 million visitors a month easily with purely static content.

                                                              Dynamic content, ie the content that needs bigger nodes, isn’t compatible with IPFS to begin with.

                                                              Most caches have forced eviction based on mutability

                                                              Caches also evict based on a number of different strategies that have nothing to do with mutability though, IPFS’ strategy for loading content (FIFO last I checked) behaves poorly with most internet browsing behaviour.

                                                              DAT seems not to solve the permanent address problem at all, although it at least decentralizes services; I haven’t looked too deeply into it, but it could be viable.

The public key of a DAT share is essentially like an IPFS target with the added bonus of having a tracked and replicated history and mutability, offering everything an IPNS or IPFS hash does. Additionally it’s more private and doesn’t try to sell itself as censorship-resistant (just look at the stunt with putting Wikipedia on IPFS).

                                                              Every 404 is a fundamental violation of the promise of hypertext.

I would disagree with that. It’s more important that we archive valuable content (i.e., archive.org, the ArchiveTeam, etc.) than to have a permanent addressing method.

                                                              Additionally the permanent addressing still does not solve content being offline. Once it’s lost, it’s lost and no amount of throwing blockchain, hashes and P2P at it will ever solve this.

                                                              You cannot stop a 404 from happening.

                                                              The hash might be the same but for 99.999% of content on the internet, it’ll be lost within the decade regardless.

                                                              Truly dynamic content is relatively rare

I would also disagree with that; on the modern internet, mutable and dynamic content are becoming more common as people become more connected.

Cloudflare and Akamai allow hosters to cache pages that are mostly static, like the reddit frontpage, as well as reducing the need for geo-replicated servers and reducing the load on the existing servers.

                                                              is to support append-only streams & construct objects from slices of that append-only stream

                                                              See DAT, that’s what it does. It’s an append-only log of changes. You can go back and look at previous versions of the DAT URL provided that all the chunks are available in the P2P network.

                                                              Only if we stick to web tech, and then only if we don’t think carefully and clearly about how best to design these systems.

IPFS in its current form is largely provided as a Node.js library, with bindings to some other languages. It’s being heavily marketed for browsers. The amount of JS in websites would only increase with IPFS and likely slow everything down even further until it scales up to global or, as it promises, interplanetary scale (though interplanetary is a pipedream; the protocol can’t even handle satellite internet properly).

Instead of looking to pipedreams of cryptography for the solution, we ought to improve the infrastructure and reduce the amount of CPU needed for dynamic content; this is an easier and more viable option than switching the entire internet to a protocol that forgets data if it doesn’t remember it often enough.

                                            1. 3

I’m completing my move from an OVH-based server to a Hetzner box (upgrading from 4 cores/32GB to 6 cores/128GB for no additional cost). Some core services need to be migrated and reconfigured, my two haproxy instances need to be merged, and my mail service needs to be decommissioned and existing apps migrated to Mailgun (who finally seem to accept prepaid credit cards and have an EU region!).

The most difficult will be the PHP and NFS VMs; their config is fairly complex and probably contains one too many hardcoded IPs, which I’ll have to change to hostnames.

I can likely complete most of that on Thursday, and if everything goes right I can migrate my Mastodon instance and the SQL box to the new hoster and shut down the old one.

I’m also investigating borgbackup and borgbase.com as a hosted backup provider; I talked to the owner a bit on the /r/datahoarder subreddit and they made some good pricing promises on the storage space. It looks good and solid so far and the owner seems to have a good track record.

I upgraded my NAS to 21TB capacity, with 3TB still pending an erase cycle, though the heat from the hard drives is getting out of control. I’m going to have to 3D-print a few hard drive caddies and hope the PCBs I ordered for a DIY fan controller arrive sooner rather than later.

                                              1. 4

                                                So, I have a serious question: I understand different databases have different trade-offs; that’s fine. But since jepsen tests seem to reliably fail in non-intuitive ways on MongoDB, I’m having trouble figuring out two things:

                                                1. Are services running on MongoDB just losing data constantly and no one notices? If not, has it decreased the frequency, or the failure states, compared to five years ago?
                                                2. Does this imply that there should be some sort of “jepsen-for-the-99%” test? What would it take for MongoDB to legitimately pass? What else that currently fails jepsen would then pass?
                                                1. 2

Yes, services have just been losing data. Take Parse, for instance:

• Frequent (daily) master reelections on AWS EC2. Rollback files were discarded and led to data loss

                                                  https://medium.baqend.com/parse-is-gone-a-few-secrets-about-their-infrastructure-91b3ab2fcf71

                                                  1. 2

Network partitions & failovers are both relatively uncommon in day-to-day operations.

                                                    You’re only moderately likely to lose a few minutes of updates once every few years.

                                                    1. 2

                                                      This is something that has proven to be untrue many times over and has been refuted by @aphyr himself:

                                                      https://queue.acm.org/detail.cfm?id=2655736

                                                      1. 1

                                                        I said “relatively uncommon”; that is, not frequently enough to cause enough data loss to kill a business built on it.

                                                    2. 1
                                                      1. Sort of, yes. If your network experiences a hiccup, your mongodb cluster can go AWOL or FUBAR, depending on how the dice roll. That is on top of the usual problems with organically growing document stores…

                                                      2. To legit pass, a MongoDB server should handle network failure to the cluster by becoming either unavailable or, if a quorum is present, continuing operation. Continuing operation in the absence of a quorum or any other mechanism to ensure data consistency is an immediate fail IMO.

                                                    1. 8

Sooo, how is it actually a lie? At a glance, the PDF seems to be purely about some internal stuff in the (an?) OpenBSD package manager. I totally don’t understand how the title applies (apart from it actually being the subtitle of the slides, but I don’t understand why that is, either).

                                                      1. 4

                                                        It might make more sense if you take it from the other side: Transmitting programs continuously over the network is highly dangerous: if one can alter the transmitted data, one can apply modifications to the OS or add a backdoor to a program.

So what to use as transport? HTTPS? The talk questions whether HTTPS is really strong enough to support transmitting packages. It then expands on how to mitigate the potential weaknesses that HTTPS can introduce.

                                                        1. 3

                                                          if one can alter the transmitted data

That’s why almost every package manager has package signatures… that’s also why many package managers are still using HTTP.

                                                          1. 5

HTTP would still leak which packages you installed and their exact versions, which is very interesting for a potential attacker to know.

HTTPS would also guard against potential security problems in the signing, i.e. layered security. If the signing process has an issue, HTTPS still provides a different or reduced level of security, depending on your threat model.

                                                            1. 1

Totally true indeed. I was pointing out that, for the moment, these security issues do not seem to be considered a high threat and are therefore not addressed (not that I know of, at least).

                                                            2. 1

So, as HTTP does not provide strong enough security on its own, other mechanisms are used. I like the practice of not relying 100% on the transport to provide security.

                                                              1. 1

The thing is, HTTPS doesn’t only provide certification that what you ask for is what you get; it also encrypts the traffic (which is arguably important for package managers).

So at the moment, HTTP + signing is reasonable enough to be used as a « security mechanism ».

                                                                1. 3

The thing is, HTTPS doesn’t only provide certification that what you ask for is what you get; it also encrypts the traffic (which is arguably important for package managers).

                                                                  That’s something the slides dispute. Packages have predictable lengths, especially when fetched in a predictable order. Unless the client and/or server work to pad them out, the HTTPS-encrypted update traffic is as good as plaintext.
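A toy illustration of that claim (the sizes here are hypothetical, and real traffic analysis also has to account for compression and pipelining): since repository indexes are public, an observer only needs to match observed transfer sizes against known package sizes.

```python
# hypothetical package sizes, as published in any public repository index
sizes = {"openssl-1.1.1.tgz": 1_234_567, "vim-8.1.tgz": 9_876_543}

def guess(observed_bytes: int, slack: int = 4096) -> list:
    # allow some slack for TLS record framing and HTTP headers
    return [name for name, size in sizes.items()
            if abs(observed_bytes - size) <= slack]

print(guess(1_236_000))  # -> ['openssl-1.1.1.tgz']
```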

                                                              2. 1

                                                                I’m completely blanking on which package manager it was, but there was a recent CVE (probably this past month) where the package manager did something unsafe with a package before verifying the signature. HTTPS would’ve mitigated the problem.

                                                                Admittedly, it’s a well-known and theoretically simpler rule to never do anything before verifying the signature, but you’re still exposing more attack surface if you don’t use HTTPS.
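The rule itself is easy to state in code. A minimal sketch of “verify first, only then touch the payload”, using a plain SHA-256 checksum as a stand-in for a real signature check; the expected digest and the unpack() helper are hypothetical:

```python
import hashlib
import hmac

def install(pkg_path: str, expected_sha256: str) -> None:
    with open(pkg_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if not hmac.compare_digest(digest, expected_sha256):
        raise ValueError("checksum mismatch, refusing to touch " + pkg_path)
    # only after verification do we parse, decompress, or run anything inside
    unpack(pkg_path)  # hypothetical helper that does the actual install
```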

                                                          1. 3

I picked up kernel development again. I scrapped most of the code I had in the kernel and replaced it with some cleaner code (thanks to https://os.phil-opp.com/). The target this time is to include WebAssembly support; there is Nebulet, but I want a bit more…

                                                            1. 1

Sad to hear it; he was one of a kind. Is there any effort to archive his work and preserve the site? His family might not want to keep paying the hosting bill forever.

                                                              1. 2

                                                                archive.org has a large collection of his videos and a lot of TempleOS isos.

                                                                I have some of his videos, curated, missing most livestreams. I prefer the more in-depth videos about TOS.

                                                              1. 1

                                                                I get that mental illness gives old mate a pass on the racist diatribes, but most of those “features” are really bad ideas.

                                                                1. 7

                                                                  As the article put it:

                                                                  Don’t write things off just because they have big flaws.

                                                                  That said, would you please expand on why most of the features are really bad ideas?

                                                                  1. 11

                                                                    I may be the only user of my computer, but I still appreciate memory protection.

                                                                    1. 5

                                                                      More to the point: Practically every, if not every, security feature is also an anti-footbullet feature. Memory protection protects my data from other people on the system and allows security contexts to be enforced, and it protects my data from one of my own programs going wrong and trying to erase everything it can address. Disk file protections protect my data from other users and partially-trusted processes, and ensure my own code can’t erase vital system files in the normal course of operation. That isn’t even getting into how memory protection interacts with protecting peripheral hardware.

                                                                      Sufficiently advanced stupidity is indistinguishable from malice.

                                                                      1. 15

                                                                        But that’s not really the point of TempleOS, is it?

As Terry once mentioned, TempleOS is a motorbike. If you lean over too far you fall off. Don’t do that. There are no anti-footbullet features because that’s the point.

Besides that, TOS still has some features lacking in other OSes. Severely lacking.

                                                                        1. 1

Besides that, TOS still has some features lacking in other OSes. Severely lacking.

                                                                          Like?

                                                                          1. 12

The shell being not purely text but actual hypertext with images is something lacking in most other OSes by default, and I would love to have that.

                                                                            1. 6

                                                                              If you’ve never played with Oberon or one of its descendant systems, or with Acme (inspired by Oberon) from Rob Pike, you should give it/them a try.

                                                                              1. 0

                                                                                If you start adding images and complex formatting into the terminal, then you lose the ability to pipe programs together and run text-processing tools on their output.

                                                                                1. 13

                                                                                  Only because Unix can’t cope with the idea of anything other than the bags of bytes that unformatted text happens to be congruent with.

                                                                                  1. 4

                                                                                    I have never seen program composition of GUIs. The power of text is how simple it is to manipulate and understand with simple tools. If a tool gives you a list of numbers, it’s very easy to process. If the tool gives you those numbers in a picture of a pie chart, then it’s next to impossible to do anything with that.

                                                                                    1. 7

                                                                                      Program composition of GUIs is certainly possible – the Alto had it. It’s uncommon in UNIX-derived systems and in proprietary end-user-oriented systems.

                                                                                      One can make the argument that the kind of pipelining of complex structured objects familiar from notebook interfaces & powershell is as well-suited to GUI composability as message-passing is (although I prefer message-passing for this purpose since explicit nominal typing associated with this kind of OO slows down iterative exploration).

                                                                                      A pie chart isn’t an image, after all – a pie chart is a list of numbers with some metadata that indicates how to render those numbers. The only real reason UNIX doesn’t have good support for rich data piping is that it’s hard to add support to standard tools decades later without breaking existing code (one of the reasons why plan9 is not fully UNIX compatible – it exposes structures that can’t be easily handled by existing tools, like union filesystems with multiple files of the same name, and then requires basically out-of-band disambiguation). Attempts to add extra information to text streams in UNIX tools exist, though (often as extra control sequences).
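
                                                                                      To make the “numbers plus rendering metadata” idea concrete, here is a minimal sketch (in Python, with made-up field names) of what piping a pie chart as structured data, rather than as pixels, could look like:

                                                                                          import json

                                                                                          # Hypothetical record: the numbers stay numbers; "type" and "title" are only
                                                                                          # metadata telling a renderer how to draw them.
                                                                                          pie_chart = {
                                                                                              "type": "pie-chart",
                                                                                              "title": "Disk usage",
                                                                                              "slices": [
                                                                                                  {"label": "/home", "value": 62.0},
                                                                                                  {"label": "/var", "value": 25.0},
                                                                                                  {"label": "other", "value": 13.0},
                                                                                              ],
                                                                                          }

                                                                                          # "Upstream tool": serialize the structured object, as it would travel down a pipe.
                                                                                          stream = json.dumps(pie_chart)

                                                                                          # "Downstream tool": consume the same stream and work with the numbers directly,
                                                                                          # which would be impossible if the chart had been piped as a rendered image.
                                                                                          received = json.loads(stream)
                                                                                          total = sum(s["value"] for s in received["slices"])
                                                                                          largest = max(received["slices"], key=lambda s: s["value"])
                                                                                          print(f"{received['title']}: {total:.0f}% accounted for; largest slice is {largest['label']}")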

                                                                                      1. 3

                                                                                        Have a look at PowerShell.

                                                                                        1. 3

                                                                                            I have never seen program composition of GUIs. The power of text is how simple it is to manipulate and understand with simple tools. If a tool gives you a list of numbers, it’s very easy to process. If the tool gives you those numbers in a picture of a pie chart, then it’s next to impossible to do anything with that.

                                                                                          Then, respectfully, you need to get out more :) Calvin pointed out one excellent example, but there are others.

                                                                                          Smalltalk / Squeak springs to mind.

                                                                                          1. 2

                                                                                            Certainly the data of the pie chart has to be structured with such metadata that you can pipe it to a tool which extracts the numbers. Maybe even manipulates them and returns a new pie chart.

                                                                                        2. 3

                                                                                            You don’t lose that ability, considering such data would likely still have to be passed around in a pipe. All that changes is that your shell is now capable of understanding hypertext instead of normal text.

                                                                                          1. 1

                                                                                              I could easily imagine a command shell based on S-expressions rather than text which enabled one to pipe typed data (including images) easily from program to program.
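
                                                                                              As a toy illustration of that idea, here is a small Python sketch; the record format and the “pie-chart” example are made up rather than taken from any existing shell:

                                                                                                  def parse_sexp(text):
                                                                                                      """Parse one S-expression into nested Python lists of typed atoms."""
                                                                                                      tokens = text.replace("(", " ( ").replace(")", " ) ").split()

                                                                                                      def read(pos):
                                                                                                          token = tokens[pos]
                                                                                                          if token == "(":
                                                                                                              items, pos = [], pos + 1
                                                                                                              while tokens[pos] != ")":
                                                                                                                  item, pos = read(pos)
                                                                                                                  items.append(item)
                                                                                                              return items, pos + 1              # skip the closing ")"
                                                                                                          try:
                                                                                                              return int(token), pos + 1         # typed atom: integer
                                                                                                          except ValueError:
                                                                                                              return token.strip('"'), pos + 1   # typed atom: string/symbol

                                                                                                      expr, _ = read(0)
                                                                                                      return expr

                                                                                                  # One program emits a typed record...
                                                                                                  record = '(pie-chart ("rent" 1200) ("food" 400) ("travel" 150))'

                                                                                                  # ...and the next program in the pipeline consumes it as data,
                                                                                                  # not as text it has to re-parse in an ad-hoc way.
                                                                                                  tag, *slices = parse_sexp(record)
                                                                                                  total = sum(value for _label, value in slices)
                                                                                                  print(tag, "total:", total)                # -> pie-chart total: 1750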

                                                                                      2. 1

                                                                                        But why do I want that? It takes me 30 seconds to change permissions on /dev/mem such that I too can ride a motorbike without a helmet.

                                                                                        1. 2

                                                                                          That is completely beside the point. A better question is how long would it take you to implement an operating system from scratch, by yourself, for yourself. When you look at it that way, of course he left some things out. Maybe those things just weren’t as interesting to him.

                                                                                          1. 1

                                                                                            You could do that, but in TOS that’s the default. Defaults matter a lot.

                                                                                            1. 2

                                                                                              /dev/mem more or less world accessible was also the default for a particular smartphone vendor I did a security audit for.

                                                                                              Defaults do matter a lot…

                                                                                        2. 8

                                                                                          If there are no other users, and it takes only a second or two to reload the OS, what’s the harm?

                                                                                          1. 6

                                                                                            It’s fine for a toy OS, but I don’t want to be working on real tasks where a bug in one program could wipe out everything I’m working on or corrupt it silently.

                                                                                            1. 11

                                                                                              I don’t think TempleOS has been advertised as anything other than a toy OS. All this discussion of “but identity mapped ring 0!” seems pretty silly in context. It’s not designed to meet POSIX guidelines, it’s designed to turn your x86_64 into a Commodore.

                                                                                      3. 2

                                                                                        Don’t write things off just because they have big flaws.

                                                                                        That’s pretty much the one and only reason why you would want to write things off.

                                                                                        1. 14

                                                                                          There’s a difference between writing something off based on it having no redeeming qualities and writing something off because it’s a mixed bag. TempleOS is a mixed bag – it is flawed in a generally-interesting way. (This is preferable to yet another UNIX, which is flawed in the same boring ways as every other UNIX.)

                                                                                      4. 2

                                                                                        This is probably not what you meant to imply, but nobody else said it, so just to be clear: Mental illness and racism aren’t correlated.

                                                                                        1. 2

                                                                                          Whatever is broken inside somebody to make them think the CIA is conspiring against them, I find it hard to believe that same fault couldn’t easily make somebody think redheads are conspiring against them.

                                                                                          1. 2

                                                                                            You’re oversimplifying. There are many schizophrenic people in the U.S., and most of them are not racist. Compulsions, even schizophrenic ones, don’t come from the ether, and they’re not correlated with any particular mental illness. Also, terry’s compulsions went far beyond paranoia.

                                                                                      1. 8

                                                                                        To be fair, they should also mark as “Not Secure” any page running JavaScript.

                                                                                              Also, pointless HTTPS adoption might reduce content accessibility without blocking censorship.
                                                                                              (Disclaimer: this does not mean that you shouldn’t adopt HTTPS for sensitive content! It just means that using HTTPS should not be a matter of fashion: there are serious trade-offs to consider.)

                                                                                        1. 11

                                                                                          By adopting HTTPS you basically ensure that nasty ISPs and CDNs can’t insert garbage into your webpages.

                                                                                          1. 2

                                                                                                  No.

                                                                                                  It protects against cheap man-in-the-middle attacks (like the one an ISP could carry out), but it can do nothing against CDNs that can identify you, since CDNs serve you JavaScript over HTTPS.

                                                                                            1. 11

                                                                                              With Subresource Integrity (SRI) page authors can protect against CDNed resources changing out from beneath them.
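
                                                                                                    For context, the integrity value that SRI relies on is just a base64-encoded digest of the resource, prefixed with the hash algorithm’s name. Here is a minimal Python sketch of computing one; the file name is only an example, and sha384 is one of the digests browsers accept for SRI:

                                                                                                        import base64
                                                                                                        import hashlib

                                                                                                        def sri_hash(path, algorithm="sha384"):
                                                                                                            """Return an integrity value like 'sha384-<base64 digest>' for a local file."""
                                                                                                            digest = hashlib.new(algorithm)
                                                                                                            with open(path, "rb") as f:
                                                                                                                digest.update(f.read())
                                                                                                            return f"{algorithm}-{base64.b64encode(digest.digest()).decode()}"

                                                                                                        # Hypothetical local copy of the resource you intend to load from a CDN.
                                                                                                        print(sri_hash("jquery.min.js"))
                                                                                                        # The value then goes into the page, e.g.:
                                                                                                        # <script src="https://cdn.example.com/jquery.min.js"
                                                                                                        #         integrity="sha384-..." crossorigin="anonymous"></script>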

                                                                                              1. 1

                                                                                                      Yes, SRI mitigates some of the JavaScript attacks that I describe in the article, in particular the nasty ones from CDNs exploiting your trust in a harmless-looking website.
                                                                                                      Unfortunately several others remain possible (just think of JSONP, or, even simpler, of the website itself colluding in the attack). Also, it needs widespread adoption to become a security feature: it should probably be mandatory, but at the very least browsers should mark as “Not Secure” any page downloading programs from CDNs without it.

                                                                                                      Where SRI could really help is with the accessibility issues described by Meyer: you can serve most page resources as cacheable HTTP resources if the content hash is declared in an HTTPS page!

                                                                                              2. 3

                                                                                                      With SRI you can prevent the CDNs you use to load external JS scripts from manipulating the webpage.

                                                                                                      I also don’t buy the claim that it reduces content accessibility; the link you provided above describes a problem that would be solved by simply using an HTTPS caching proxy (something a lot of corporate networks seem to have no problem operating, considering TLS 1.3 explicitly tries not to break those middleboxes).

                                                                                                1. 4

                                                                                                  CDNs are man-in-the-middle attacks.

                                                                                              3. 1

                                                                                                      As much as I respect Meyer, his point is moot. MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. Some companies even made out-of-the-box HTTPS URL filtering their selling point. If people are ready or forced to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, it’s their problem, not the webmasters’. We should be ready to teach those in need how to set one up, of course, but that’s about it.

                                                                                                1. 0

                                                                                                        MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. […] If people are ready or forced to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, it’s their problem, not the webmasters’.

                                                                                                  Well… how can I say that… I don’t think so.

                                                                                                        Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                                                                                                        Beyond the obvious risk that the proxy is compromised (you should never assume it won’t be), which is pretty high in some places (not only in Africa… don’t be naive, a chain is only as strong as its weakest link), a transparent HTTPS proxy has an obvious UI issue: people do not realise that it’s unsafe.

                                                                                                        If browsers don’t mark them as “Not Secure” (and how could they?), users will overlook the MitM risks, turning a security feature against the users’ real security and safety.

                                                                                                        Is this something webmasters should care about? I think so.

                                                                                                  1. 4

                                                                                                          Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                                                                                                          Not sure how to tell you this, but companies have been doing this on their internal networks for a very long time, and it is basically standard operating procedure on every enterprise-level network I’ve seen. They create their own CA, generate an intermediate CA key and certificate, and then put that on an HTTPS MITM transparent proxy that inspects all traffic going in and out of the network. The intermediate cert is added to the certificate store on all devices issued to employees so that it is trusted. By inspecting all of the traffic, they can monitor for external and internal threats, scan for exfiltration of trade secrets and proprietary data, and keep employees from watching porn at work. There is an entire industry around products that do this; BlueCoat and Barracuda are two popular examples.

                                                                                                    1. 5

                                                                                                      There is an entire industry around products that do this

                                                                                                            There is an entire industry around ransomware. But that does not mean it’s a security solution.

                                                                                                      1. 1

                                                                                                              It is; it’s just that the word “security” is better understood as “who” is getting secured (or not) from “whom”.

                                                                                                              What you keep saying is that a MitM proxy does not protect the security of end users (that is, employees). What it does, however, in certain contexts like the one described above, is help protect the organisation in which those end users operate. Arguably it does, because it is certainly more difficult to protect yourself from something you cannot see. If employees are seen as a potential threat (they are), then reducing their security can help you (the organisation) with yours.

                                                                                                        1. 1

                                                                                                                I wonder if you read the articles I linked…

                                                                                                                The point is that, in a context of unreliable connectivity, HTTPS dramatically reduces accessibility but doesn’t help against censorship.

                                                                                                                In this context, we need to give people both accessibility and security.

                                                                                                                An obvious solution is to give them cacheable HTTP access to contents. We can fool the clients into trusting a MitM caching proxy, but since all we want is caching, this is not the best solution: it adds no security, only a false sense of security. Thus, in that context, you can improve users’ security by removing HTTPS.

                                                                                                          1. 1

                                                                                                                  I have read it, but more importantly, I worked in and built services for places like that for about 5 years (Uganda, Bolivia, Tajikistan, rural India…).

                                                                                                                  I am with you that an HTTPS proxy is generally best avoided, if for no other reason than that it grows the attack surface. I disagree that removing HTTPS increases security. It adds a lot more places and actors that can now negatively impact the user, in exchange for the user knowing this without being able to do much about it.

                                                                                                            And that is even without going into which content is safe to be cached in a given environment.

                                                                                                            1. 1

                                                                                                              And that is even without going into which content is safe to be cached in a given environment.

                                                                                                              Yes, this is the best objection I’ve read so far.

                                                                                                              As always it’s a matter of tradeoff. In a previous related thread I described how I would try to fix the issue in a way that people can easily opt-out and opt-in.

                                                                                                                    But while I think it would be weird to remove HTTPS for an e-commerce cart or for a political forum, I think that most of Wikipedia should be served over both HTTP and HTTPS. People should be aware that HTTP pages are not secure (even though it all depends on your threat model…), but they should not be misled into thinking that pages going through a MitM proxy are secure.

                                                                                                    2. 2

                                                                                                            An HTTPS proxy isn’t incompetence; it’s industry standard.

                                                                                                            They solve a number of problems and are basically standard in almost all corporate networks with a minimum security level. They aren’t a weak link in the chain, since traffic in front of the proxy is HTTPS and traffic behind it stays in the local network, encrypted under a network-level CA (you can restrict CA capabilities via TLS cert extensions; there is a fair number of useful ones that prevent compromise).

                                                                                                            Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to the device, at which point there is no reason to consider what the user is doing insecure.

                                                                                                      1. 2

                                                                                                              Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to the device, at which point there is no reason to consider what the user is doing insecure.

                                                                                                        Browsers bypass the network configuration to protect the users’ privacy.
                                                                                                        (I agree this is stupid, but they are trying to push this anyway)

                                                                                                        The point is: the user’s security is at risk whenever she sees as HTTPS (which stands for “HTTP Secure”) something that is not secure. It’s a rather simple and verifiable fact.

                                                                                                        It’s true that posing a threat to employees’ security is an industry standard. But it’s not a security solution. At least, not for the employees.

                                                                                                        And, doing that in a school or a public library is dangerous and plain stupid.

                                                                                                        1. 0

                                                                                                                Nobody is posing a threat to employees’ security here. A corporation can in this case be regarded as a single entity, so terminating SSL at the borders of that entity (similar to how a browser terminates SSL by showing the website on a screen) is fairly valid.

                                                                                                                Schools and public libraries usually have their internet filtered, yes, but that is usually made clear to the user beforehand (at least when I wanted access to either, I was in both cases told that the network is supervised and filtered), which IMO negates the potential security compromise.

                                                                                                          Browsers bypass the network configuration to protect the users’ privacy.

                                                                                                          Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.

                                                                                                          1. 1

                                                                                                                  Schools and public libraries usually have their internet filtered, yes, but that is usually made clear to the user beforehand [..] which IMO negates the potential security compromise.

                                                                                                            Yes this is true.

                                                                                                            If people are kept constantly aware of the presence of a transparent HTTPS proxy/MitM, I have no objection to its use instead of an HTTP proxy for caching purposes. Marking all pages as “Not Secure” is a good way to gain such awareness.

                                                                                                            Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.

                                                                                                            Did you know about Firefox’s DoH/CloudFlare affair?

                                                                                                            1. 2

                                                                                                                      Yes, I’m aware of the “affair”. To my knowledge the initial DoH experiment was localized and run on users who had enabled studies (opt-in). Both during the experiment and now, Mozilla has a contract with CloudFlare to protect users’ privacy during queries when DoH is enabled (which, to my knowledge, it isn’t by default). In fact, the problem ungleich is blogging about isn’t even slated for standard release yet, as far as I know.

                                                                                                                      It’s plain old wrong, in the bad kind of way; it conflates security maximalism with Mozilla’s mission to bring the maximum number of users privacy and security.

                                                                                                              1. 1

                                                                                                                        TBH, I don’t know what you mean by “security maximalism”.

                                                                                                                        I think ungleich raises serious concerns that should be taken into account before shipping DoH to the masses.

                                                                                                                        Mozilla has a contract with CloudFlare to protect users’ privacy

                                                                                                                        It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                                                                                                        AFAIK, even Facebook had a contract with its users.

                                                                                                                Yeah.. I know… they will “do no evil”…

                                                                                                                1. 1

                                                                                                                          Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                                                                                                          It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                                                                                                  Cloudflare hasn’t done much that makes me believe they will violate my privacy. They’re not in the business of selling data to advertisers.

                                                                                                                          AFAIK, even Facebook had a contract with its users

                                                                                                                          Facebook used Dark Patterns to get users to willingly agree to terms they would otherwise never agree to; I don’t think this is comparable. Facebook likely never violated the contract terms with their users that way.

                                                                                                                  1. 1

                                                                                                                            Security maximalism disregards more common threat models and usability problems in favor of more security.

                                                                                                                            You should define “common user”.
                                                                                                                            If you mean the politically inept who are happy to be easily manipulated as long as they are given something to say and retweet… yes, they have nothing to fear.
                                                                                                                            The problem is for those people who are actually useful to society.

                                                                                                                    Cloudflare hasn’t done much that makes me believe they will violate my privacy.

                                                                                                                    The problem with Cloudflare is not what they did, it’s what they could do.
                                                                                                                    There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                                                                                                            But my concerns are with Mozilla.
                                                                                                                            They are trusted by millions of people worldwide, me included. But actually, I’m starting to think they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                                                                    1. 1

                                                                                                                      So in your opinion, the average user does not deserve the protection of being able to browse the net as safe as we can make it for them?

                                                                                                                              Just because you think they aren’t useful to society (and they are; these people hold all sorts of important jobs, and someone isn’t useless because they can’t use a computer) doesn’t mean we, as software engineers, should abandon them.

                                                                                                                      There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                                                                                                      Then don’t use it? DoH isn’t going to be enabled by default in the near future and any UI plans for now make it opt-in and configurable. The “Cloudflare is default” is strictly for tests and users that opt into this.

                                                                                                                              they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                                                                      You mean safe because everyone involved knows what’s happening?

                                                                                                                      1. 1

                                                                                                                        I don’t believe the concerns are really concerns for the common user.

                                                                                                                        You should define “common user”.
                                                                                                                                If you mean the politically inept who are happy to be easily manipulated…

                                                                                                                        So in your opinion, the average user does not deserve the protection of being able to browse the net as safe as we can make it for them?

                                                                                                                                I’m not sure if you are serious or just pretending not to understand to cover for your lack of arguments.
                                                                                                                                Let’s assume the first… for now.

                                                                                                                        I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept. That’s obviously because, anyone politically inept is unlikely to be affected by surveillance.
                                                                                                                        That’s it.

                                                                                                                                they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                                                                                                        You mean safe because everyone involved knows what’s happening?

                                                                                                                                Really?
                                                                                                                                Are you sure everyone understands what a MitM attack is? Are you sure every employee understands that their system administrators can see the mail they read on GMail? I think you don’t have much experience with users, and I hope you don’t design user interfaces.

                                                                                                                        A MitM caching HTTPS proxy is not safe. It can be useful for corporate surveillance, but it’s not safe for users. And it extends the attack surface, both for the users and the company.

                                                                                                                                As for Mozilla: as I said, I’m just not sure whether they deserve trust or not.
                                                                                                                                I hope they do! Really! But it’s really too naive to think that a contract binds a company more strongly than a subpoena does. And they ship WebAssembly. And you have to edit about:config to disable JavaScript…
                                                                                                                                All this is very suspect for a company that claims to care about users’ privacy!

                                                                                                                        1. 0

                                                                                                                          I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept.

                                                                                                                                  I’m saying the concerns raised by ungleich are too extreme and should be dismissed on the grounds of not being practical in the real world.

                                                                                                                          Are you sure everyone understand what is a MitM attack?

                                                                                                                                  An attack requires an adversary, the evil one. An HTTPS caching proxy isn’t evil or an enemy; you have to opt into this behaviour. It is not an attack, and I think it’s not fair to characterise it as such.

                                                                                                                          Are you sure every employee understand their system administrators can see the mail they reads from GMail?

                                                                                                                          Yes. When I signed my work contract this was specifically pointed out and made clear in writing. I see no problem with that.

                                                                                                                          And it extends the attack surface, both for the users and the company.

                                                                                                                          And it also enables caching for users with less than stellar bandwidth (think third world countries where satellite internet is common, 500ms ping, 80% packet loss, 1mbps… you want caching for the entire network, even with HTTPS)

                                                                                                                          And they ship WebAssembly.

                                                                                                                                  And? I have no concerns about WebAssembly. It’s not worse than obfuscated JavaScript. It doesn’t enable anything that wasn’t possible before via asm.js. The post you linked is another security-maximalist opinion piece with few factual arguments.

                                                                                                                          And you have to edit about:config to disable JavaScript…

                                                                                                                          Or install a half-way competent script blocker like uMatrix.

                                                                                                                          All this is very suspect for a company that claims to care about users’ privacy!

                                                                                                                                  I think it’s understandable for a company that both cares about users’ privacy and doesn’t want a market share of “only security maximalists”, also known as 0%.

                                                                                                                          1. 1

                                                                                                                            An attack requires an adversary, the evil one.

                                                                                                                                    According to this argument, you don’t need HTTPS as long as you don’t have an enemy.
                                                                                                                            It shows very well your understanding of security.

                                                                                                                                    The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                                                                                                                    I have no concerns about WebAssembly.

                                                                                                                            Not a surprise.

                                                                                                                                    Evidently you have never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).

                                                                                                                                    Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.

                                                                                                                                    As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.

                                                                                                                            1. 1

                                                                                                                                      According to this argument, you don’t need HTTPS as long as you don’t have an enemy.

                                                                                                                                      If there is no adversary, no Mallory in the connection, there is no reason to encrypt it either, correct.

                                                                                                                              It shows very well your understanding of security.

                                                                                                                              My understanding in security is based on threat models. A threat model includes who you trust, who you want to talk to and who you don’t trust. It includes how much money you want to spend, how much your attacker can spend and the methods available to both of you.

                                                                                                                                      Security is not binary; a threat model is the entry point, and your protection mechanisms should match your threat model as closely as possible or exceed it, but there is no reason to exert effort beyond your threat model.

                                                                                                                                      The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                                                                                                                      Mallory is a potential enemy. An HTTPS caching proxy operated by a corporation is not an enemy. It’s not Mallory; it’s Bob, Alice and Eve, where Bob wants to send Alice a message, Alice works for Eve, and Eve wants to avoid having duplicate messages on the network, so Eve and Alice agree that caching the encrypted connection is worthwhile.

                                                                                                                                      Mallory sits between Eve and Bob, not between Bob and Alice.

                                                                                                                                      Evidently you have never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).

                                                                                                                                      I did; in those cases I either filed a GitHub issue if the project was open source, or I notified the company that offered the JavaScript or optimized binary. Usually the bug is then fixed.

                                                                                                                              It’s not my duty or problem to debug web applications that I don’t develop.

                                                                                                                                      Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.

                                                                                                                              Then don’t do it? Nobody is forcing you.

                                                                                                                                      As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.

                                                                                                                                      I don’t think you consider that a practical problem such as bad connections can outweigh a lot of potential security issues: you don’t have the time or the users’ patience to do it properly, and in most cases it’ll be good enough for the average user.

                                                                                                      2. 2

                                                                                                        My point is that the problems of unencrypted HTTP and MitM’ed HTTPS are exactly the same. If one used to prefer the former because it can be easily cached, I can’t see how setting up the latter makes their security issues worse.

                                                                                                        1. 3

                                                                                                          With HTTP you know it’s not secure. OTOH you might not be aware that your HTTPS connection to the server is not secure at all.

                                                                                                          The lack of awareness makes MitM caching worse.

                                                                                                  1. 4

                                                                                                    I’m gonna sit in the smug corner for people running AMD.

                                                                                                              Otherwise, this is all kinds of “very very bad”. The kind where, in a cartoon, you’d have sirens spin up to warn of incoming air raids.

                                                                                                    1. 3

                                                                                                      Because SEV has been that much better?

                                                                                                      1. 2

                                                                                                        The most secure system is the system no one uses :)

                                                                                                    1. 1

                                                                                                      Thank you for your work! I was almost going to subscribe to a lot of them individually.

                                                                                                      1. 1

                                                                                                        That’s quite a neat survey with interesting results, despite the admitted bias of the author. Maybe they can redo it in the future to see how it develops and improve on the survey itself…

                                                                                                        1. 5

                                                                                                                        With waterfall, sure, you can get a lot done with a beaming goal post in the distance, but is work done on the wrong things considered throughput? Agile should help resolve issues that crop up along the way, and as such throughput might be a bit less, but the idea is that you can change focus and direction as programs get tested, whereas with waterfall you’ll finish a product fast, only to realize that you have to undo a lot of the work done to get back to a place where you can change it. This grossly oversimplifies the whole process, but those are the ups and downs. The agile upside is better direction and “correct” throughput, at the cost of project visibility into the future.

                                                                                                          1. 1

                                                                                                            Depends on how you do waterfall, just like it depends on how you do agile.

                                                                                                            With waterfall you can still have all those things agile does, weekly meetings, burndown charts, even sprints. The important thing behind waterfall is that right at the very beginning you have a complete description of everything the program will be doing. When you do the programming, you do all the programming. Once it’s done you test it and make little adjustments where things are going wrong. Once you did the testing, you hand over to the customer and do the next project.

                                                                                                            In comparison, Agile asks you to regularly interact with the customer and see if the requirements evolved or changed, things that might happen when the customer sees the application evolve. You don’t do that in Waterfall.

                                                                                                            Waterfall can be immensely powerful when deployed under the right circumstances. As does Agile.

                                                                                                                          I would also disagree that agile gives you more correct throughput; waterfall can be just as good if your customer gives you a good handover. But this depends entirely on the customer.

                                                                                                            1. 2

                                                                                                              Mini-waterfall is agile in the large, and I think mini-waterfall is kinda underrated, especially in teams where the same people work together frequently over a long period:

                                                                                                              “We all understand what this is, it’s going to take us 3-5 weeks, we might have a few questions along the way, but it should be done by about October”.

                                                                                                              That can actually work out pretty well.

                                                                                                              1. 1

                                                                                                                I think we are pretty much in agreement. Waterfall puts a lot of responsibility on the customer knowing exactly what they want and how they want it, and some good architects that can create a specification that matches this. If all that is in order, waterfall should win hands down, and the testing at the end should be minor (some graphical design, a few discrepancies, etc.)

                                                                                                                Where agile wins is if this is not the case, which unfortunately often happens. In cases where you are not producing for a customer (such as internal product development), it can also be hard to know in advance how a feature works out and you want feedback early and often. Agile then is better, to avoid working in the wrong direction and “correct the course” often, since there is no “X marks the spot” for crossing the finish line.

                                                                                                            1. 8

                                                                                                                              KeePass has clients that work on the three operating systems in question, and I’ve had good luck using Syncthing to share the password file between computers, but the encryption of the database means that any good sync utility can work with it.

                                                                                                              1. 4

                                                                                                                                I have used KeePassX together with Syncthing on multiple Ubuntu and Android devices for two years now. By now I have three duplicate conflict files, which I keep around because I have no idea what the difference between the files is. Once I had to retrieve a password from such a conflict file because it was missing from the main one.

                                                                                                                Not perfect, but works.

                                                                                                                                Duclare, using ssh instead of Syncthing would certainly work, since the database is just a file. I prefer Syncthing for convenience.

                                                                                                                1. 2

                                                                                                                                  Duclare, using ssh instead of Syncthing would certainly work, since the database is just a file.

                                                                                                                                  Ideally it’d be automated and integrated into the password manager, though. Keepass2android does support it, but it does not support passwordless login, and I don’t recall it ever showing me the server’s fingerprint and asking if that’s OK. So it’s automatically logging in with a password to a host run by who knows who. Terribly insecure.

                                                                                                                  1. 1

                                                                                                                    I had the same situation: three conflict files, and merging is a pain. I’ve now switched to Pass instead.

                                                                                                                  2. 2

                                                                                                                    I’ve been using KeePass for a few years now too. I tried other password managers in the meantime, but I never got quite satisfied, not even with pass, though that one was just straight-up annoying.

                                                                                                                    I’ve had a few conflicts over the years, but usually Nextcloud is rather good at avoiding conflicts here, and KPXC handles them very well. I think Syncthing might cause more problems, as someone else noted, since nodes might take a while to sync up.

                                                                                                                  1. 1

                                                                                                                    The website seems to have been taken down, since I get a 403; maybe the author didn’t like being linked from Lobste.rs, or they’re shy.

                                                                                                                    1. 1

                                                                                                                      Works for me from here.

                                                                                                                      1. 1

                                                                                                                        Curiously it works from my phone.

                                                                                                                        I guess my IP is blocked or something? Weird…

                                                                                                                        1. 2

                                                                                                                          Their hosting provider applies blocks rather … aggressively.

                                                                                                                    1. 11

                                                                                                                      Git via mail can be nice, but it’s very hard to get used to. It took me ages to set up git send-email correctly, and my problem in the end was that our local router blocked SMTP connections to non-whitelisted servers. This is just one way it can go wrong. I can imagine there are many more.

                                                                                                                      And just another minor comment: Everyone knows that Git is decentralized (not federated, btw.); the issue is GitHub, i.e. the service that adds a certain something to version control, like profiles, stars, commit-stats, fork-counts, followers, etc. A one-sided, technical perspective dismisses all of these things as useless and unnecessary – falsely. Centralized platforms have an unfair advantage in this respect, since there’s only one voice, one claim, and no way to question it. One has to assume they make sure that the accounts are all real and not spam-bots, otherwise nothing makes sense.

                                                                                                                      Overcoming this issue is the big task. And email, which is notoriously bad at any identity validation, might not be the best tool for it. To be fair, ActivityPub currently isn’t either, but the thought that different services and platforms could interoperate (and some of these might even support an email interface) seems at the very least interesting to me.

                                                                                                                      1. 13

                                                                                                                        Article author here. As begriffs said, I propose email as the underlying means of federating web forges, as opposed to ActivityPub. The user experience is very similar and users who don’t know how to or don’t want to use git send-email don’t have to.

                                                                                                                        Everyone knows that Git is decentralized (not federated, btw.)

                                                                                                                        The point of this article is to show that git is federated. There are built-in commands which federate git using email (a federated system) as the transport.

                                                                                                                          GitHub, i.e. the service that adds a certain something to version control, like profiles, stars, commit-stats, fork-counts, followers, etc. A one-sided, technical perspective dismisses all of these things as useless and unnecessary – falsely

                                                                                                                        Profiles can live on sr.ht, or indeed on any sr.ht instance, or on any instance of any other forge software which federates via email. A person would probably still have a single canonical place they live on, and a profile there which lists their forks of software from around the net. Commit stats are easily generated on such a platform as well. Fork counts and followers (stars?) I find much less interesting, they’re just ego stroking and should be discarded if technical constraints require.

                                                                                                                        1. 4

                                                                                                                          I don’t think that’s a strong argument in favor of git being federated. I don’t think it matters either.

                                                                                                                          Git in and of itself does not care about the transport. It does not care whether you use HTTP, git://, or email to bring your repo up to date. You can even use a USB stick.

                                                                                                                          I’d say git is communication-format agnostic, while federation is all about standardizing communication. Using email with git is merely another way to pipe git I/O; git itself does not care.

                                                                                                                          1. 2

                                                                                                                            git send-email literally logs into SMTP and sends a patch with it.

                                                                                                                            git am and git format-patch explicitly refer to mailboxes.

                                                                                                                            Email is central to the development of Linux and git itself, the two projects git is designed for. Many git features are designed with email in mind.
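
                                                                                                                            For illustration, here is a rough Python sketch of the core of what “logs into SMTP and sends a patch” amounts to, assuming a patch file already produced by git format-patch and placeholder server, credentials and recipient; git send-email itself does far more (threading, To/Cc collection, aliases, and so on).

                                                                                                                              # Rough sketch: a patch from `git format-patch` is already an RFC 2822
                                                                                                                              # message, so "sending" it is little more than SMTP submission.
                                                                                                                              # Server, credentials, addresses and the file name are placeholders.
                                                                                                                              import smtplib
                                                                                                                              from email import policy
                                                                                                                              from email.parser import BytesParser

                                                                                                                              with open("0001-example.patch", "rb") as f:
                                                                                                                                  msg = BytesParser(policy=policy.default).parse(f)

                                                                                                                              # format-patch already set From, Date and Subject; only add a recipient.
                                                                                                                              msg["To"] = "dev-list@example.org"

                                                                                                                              with smtplib.SMTP("smtp.example.org", 587) as smtp:
                                                                                                                                  smtp.starttls()   # the step an overly strict router may block
                                                                                                                                  smtp.login("me@example.org", "app-password")
                                                                                                                                  smtp.send_message(msg)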

                                                                                                                            1. 4

                                                                                                                              Yes, but ultimately neither requires nor cares about federation itself.

                                                                                                                              send-email is IMO more of a utility function; git am and format-patch, which as you mention work with mailboxes, have nothing to do with email’s federated nature. Neither does SMTP, tbh, at least on the client-server side.

                                                                                                                              They’re convenience scripts that do the hard part of putting patches into mails for you; you could also just keep your mailbox on a USB stick and transport it that way. And the SMTP server doesn’t need to relay anywhere else either.

                                                                                                                              I guess the best comparison is that these scripts are no more than a frontend, like a frontend for Mastodon. The frontend of Mastodon isn’t federated either; Mastodon itself is. Federation is the server-to-server part. That’s the part we care about, but git doesn’t care about it.

                                                                                                                              1. 9

                                                                                                                                I see what you’re getting at, and I have to concede that you are correct in a pedantic sense. In a practical sense, though, none of that matters: git is federated via email.

                                                                                                                              2. 4

                                                                                                                                That various email utilities are included seems more like a consequence of email being the preferred workflow of git’s developers. I don’t see how that makes it the canonical workflow compared to pulling from remotes via HTTP or SSH; git has native support for both, after all.

                                                                                                                            2. 2

                                                                                                                              I believe that @tscs37 already showed that Git is distributed, since all nodes are equal (no distinction between clients and servers), while a git network can be structured in a federated fashion, or even in a centralized one. What the transport medium has to do with this is still unclear to me.

                                                                                                                              Fork counts and followers (stars?) I find much less interesting, they’re just ego stroking and should be discarded if technical constraints require

                                                                                                                              That’s exactly my point. GitHub offers a uniform standard, easily recognisable and readable (simply because everyone is used to it). This has a value and ultimately a relevance that can’t just be ignored, even if the reason for it is nonsense. Ignoring it would just be another example of technical naïveté.

                                                                                                                              I’ve shown my sympathy for ideas like these before, and I most certainly don’t want to give the impression of being a GitHub apologist. All I want to remind people of is that the social aspects beyond the necessities (builds, issue trackers, …) are things one has to seriously consider and tackle if one is interested in offering an alternative to GitHub with any serious ambitions.

                                                                                                                              1. 3

                                                                                                                                I don’t think sr.ht has to please everyone. People who want these meaningless social features will probably be happier on some other platform, while the veterans are getting work done.

                                                                                                                                1. 2

                                                                                                                                  I’m fine with people using mailing-list oriented solutions (the elitism might be a bit off-putting, but never mind). I just don’t think that it’s that much better than the GitPub idea.

                                                                                                                                  People who want these meaningless social features will probably be happier on some other platform, while the veterans are getting work done.

                                                                                                                                  If having these so-called “meaningless social features” helps a project thrive and attract contributors and new users, I wouldn’t consider them meaningless. But if that’s not what you are interested in, that’s just OK.

                                                                                                                            3. 2

                                                                                                                              our local router blocked SMTP connections to non-whitelisted servers

                                                                                                                              The article says that sr.ht can optionally send the emails for you, no git send-email required: “They’ll enter an email address (or addresses) to send the patch(es) to, and we’ll send it along on their behalf.”

                                                                                                                              Also what mail transfer agent were you pointing git send-email at? You can have it send through Gmail/Fastmail/etc. servers – would your router block that?

                                                                                                                              GitHub […] adds a certain something to version control, like profiles, stars, commit-stats, fork-counts, followers

                                                                                                                              How about mirroring code on github to collect stars? Make it a read-only mirror by disabling issues and activating the pull request rejection bot. Git, Linux, and Postgres do this, and probably other projects do too.

                                                                                                                              Email […] is notoriously bad at any identity validation

                                                                                                                              Do SPF, DKIM and DMARC make this no longer true, or are there still ways to impersonate people?
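
                                                                                                                              For reference, a minimal sketch of how one could check what a given domain actually publishes, assuming the third-party dnspython package and example.org as a stand-in domain (DKIM can’t be checked this way without knowing the selector used in a real message):

                                                                                                                                # Minimal sketch: look up the SPF and DMARC policies a domain publishes.
                                                                                                                                # Uses the third-party dnspython package, version 2.0 or later
                                                                                                                                # (pip install dnspython); example.org is a stand-in domain.
                                                                                                                                import dns.resolver

                                                                                                                                domain = "example.org"

                                                                                                                                def txt_records(name):
                                                                                                                                    try:
                                                                                                                                        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
                                                                                                                                    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                                                                                                                                        return []

                                                                                                                                spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
                                                                                                                                dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]

                                                                                                                                print("SPF:  ", spf or "none published")
                                                                                                                                print("DMARC:", dmarc or "none published")

                                                                                                                              Enforcement is still up to the receiving side, of course, so publishing these records is only half the story.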

                                                                                                                              1. 1

                                                                                                                                  Also what mail transfer agent were you pointing git send-email at?

                                                                                                                                Fastmail. Apparently that was too esoteric for my router’s default settings. And if it weren’t for support, I would never have guessed that that was the issue, since the whole interface is so alien to most people (just like the questions: did I send the right commits, is my message formatted correctly, etc.).

                                                                                                                                How about mirroring code on github to collect stars? Make it a read-only mirror by disabling issues and activating the pull request rejection bot. Git, Linux, and Postgres do this, and probably other projects do too.

                                                                                                                                I’m not saying it’s perfect (again, I’m no GitHub apologist) – my point is that it isn’t irrelevant!

                                                                                                                                Do SPF, DKIM and DMARC make this no longer true, or are there still ways to impersonate people?

                                                                                                                                Yes, if someone doesn’t use these things. And claiming “oh, but they just should” raises the entry barrier yet again, and it would already be too high as it is.

                                                                                                                                1. 1

                                                                                                                                    Yes, if someone doesn’t use these things. And claiming “oh, but they just should” raises the entry barrier yet again, and it would already be too high as it is.

                                                                                                                                  This doesn’t damn the whole idea; it just shows us where the open areas of development are.