1. 6

    This combination of SQLite and consistent backups looks nice. What I still don’t understand is how you deploy your application or migrate to another node.

    It seems like SQLite supports reads and writes from multiple processes, so I can start another process and then terminate the previous one once it stops processing requests.

    How about migrating to another node? You might want to install updates, or AWS has scheduled your node for a restart. Does that mean I would need to stop all requests, wait until no more writes are happening to the database, and then restore it from S3 on another machine?

    1. 3

      Does that mean I would need to stop all requests, wait until no more writes are happening to the database, and then restore it from S3 on another machine?

      That’s my understanding as well. One way to mitigate this would be to switch your application to read-only as you bring up a replacement node. For the duration of the restore process you’ll have degraded availability, but you won’t be “down”.
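
      For what it’s worth, taking the snapshot itself doesn’t require stopping writes: SQLite’s online backup API can copy a live database. A rough sketch of that part in Python (bucket and file names are made up, and this is not whatever tooling the article uses):

          # Sketch: consistent snapshot of a live SQLite database, pushed to S3.
          # Paths and bucket names are placeholders; error handling omitted.
          import sqlite3
          import boto3

          def snapshot_to_s3(db_path="app.db", bucket="my-backups", key="app.db.snap"):
              src = sqlite3.connect(db_path)
              dst = sqlite3.connect("/tmp/app-snapshot.db")
              src.backup(dst)  # online backup API: consistent copy while writers continue
              dst.close()
              src.close()
              boto3.client("s3").upload_file("/tmp/app-snapshot.db", bucket, key)

          # Restoring on the replacement node is the reverse: download the snapshot,
          # point the app at it, then retire the old node.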

      1. 3

        SQLite does support multi-process access to the same db file, but doing this over a network filesystem like NFS or SMB is strongly discouraged: most network filesystem implementations have problems with file locking, and improper locking can easily cause database corruption.

        So this wouldn’t work for migrating to another host. It would however let you update your server code, on the same host, without downtime.
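
        If anyone tries that, the knobs that make the overlap painless are WAL mode and a busy timeout, so the outgoing and the incoming process can briefly share the file. A rough sketch (file name made up):

            # Sketch: settings that let two local processes share one SQLite file
            # during a deploy handoff. The file name is a placeholder.
            import sqlite3

            def open_db(path="app.db"):
                conn = sqlite3.connect(path, timeout=5.0)  # wait up to 5s on a locked db
                conn.execute("PRAGMA journal_mode=WAL;")   # readers don't block the writer
                return conn

            # New process: open_db() and start serving.
            # Old process: stop accepting requests, finish in-flight writes, close, exit.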

        1. 1

          Yep, that was my understanding as well. Being able to switch to another process on the same host.

      1. 31

        All the things.

        And by that, I mean I’m hosting all the little web apps I want to share with a few people in my basement instead of dropping them onto a cloud VPS. My VPS zoo got too big in 2019 and for 2020 I started driving it the other direction.

        I have a single VPS that runs a reverse proxy and a wireguard interface. VMs in my basement connect over wireguard to the VPS, which then proxies traffic back over wireguard. No holes get punched in the home firewall.
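
        The wireguard part is small; the shape is roughly this (keys, names and addresses here are placeholders rather than my actual config):

            # On the VPS
            [Interface]
            Address    = 10.10.0.1/24
            ListenPort = 51820
            PrivateKey = <vps-private-key>

            [Peer]
            # one of the basement VMs
            PublicKey  = <vm-public-key>
            AllowedIPs = 10.10.0.2/32

        The VM’s own config points its [Peer] Endpoint at the VPS and sets PersistentKeepalive = 25, so the tunnel is always initiated outbound from home and nothing needs to be opened on the home firewall.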

        Here’s where I wrote down the general approach.

        Here are the steps I take to expose a new thing.

        For things that don’t use much bandwidth and don’t need perfect uptime, I love this.

        1. 3

          It’s the way to go: never put stuff directly onto the internet. It’s so easy to restore an haproxy config, or even a simple PPTP server, and get things rolling in minutes. A $2-a-month VPS is enough for a few web sites and even my Exchange server, which is connected via cellphone!

          Local copies, local backups. “Enterprise” software is dirt cheap second hand, just like the “free” stuff.

          1. 4

            I didn’t mention this in my below post, but I use the same trick of proxying services running on my physical hardware to a cloud VPS over wireguard, to avoid exposing my home IP address. I actually have several such VPSs, since that allows me to make publicly available services I use in different social contexts, without someone being able to notice that my.realname.com and my.internet.pseudonym.com are both pointing at the same hosting provider IP address.

            That said, I’m paying somewhat more than $2/month, and I’m curious where you’ve been able to find that price. $5/month seems to be the standard price across several VPS providers for their lowest-end VPS with a gig of RAM. The absolute cheapest I’ve seen is a vultr VPS at $2.50/month, which only has an IPv6 address and isn’t available at all their data centers. It wouldn’t surprise me if they’re making so little money off of it that they’d be prone to discontinuing it in the future. If you just have one VPS there’s not much difference between $2 and $5 a month, but since I have several, the costs do start to add up.

            1. 4

              The smallest entry on Hetzner Cloud’s list is a 3€/month machine.

              1. 3

                Depending on how much traffic you need to handle, Oracle free VMs could work as your exit nodes: https://matrix.org/docs/guides/free-small-matrix-server#get-a-free-server

                1. 2

                  It’s not a VPS, but you can deploy a small VM on https://fly.io which should be enough to run wg.

              2. 1

                This is super interesting, chur

              1. 2

                  I rent a dedicated server for approx. 50 euro a month (i7-6700K, 32GB RAM):

                  Mastodon, Pleroma, tt-rss, bookmarks (linkding), Matrix + bridges, website, PeerTube, calendar/contacts (Radicale)

                  VMs: e-mail (OpenBSD), some Matrix bridges

                1. 1

                  Mind sharing where you’re renting your server from? Looks like you got a good deal.

                  1. 3

                    Until recently I had a server at prgmr.com which I think offers really good prices.

                      Nothing against it, I just got a better deal from someone I know ;-) https://openbsd.amsterdam/

                    I just host contacts and calendar for the whole family and run my 24/7 programs there.

                    1. 2

                        It looks like Hetzner’s dedicated offerings (same CPU spec).

                      1. 2

                        Hetzner. They have regular auctions for servers. I moved everything off of 4 Vultr VMs over to a single Hetzner dedicated. I even paid for the USB stick so I could install Void Linux on it. (There is network KVM access but you have to reserve it in 2 hour blocks. I only needed it for the install).

                    1. 2

                      Maybe it’s time to bring (back?) proxies that accept unencrypted HTTP/1.0 requests, negotiate a modern version of TLS with the destination and rewrite the html to allow for seamless navigation on older browsers.
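
                        A minimal sketch of that idea, minus the HTML rewriting, is short enough to fit in a comment (the address and port are arbitrary):

                            # Sketch: speak plain HTTP/1.0 to the old browser, fetch the page over
                            # modern TLS, and hand the body back. No HTML rewriting yet.
                            from http.server import BaseHTTPRequestHandler, HTTPServer
                            import urllib.request

                            class DowngradingProxy(BaseHTTPRequestHandler):
                                protocol_version = "HTTP/1.0"

                                def do_GET(self):
                                    # Browsers configured to use a proxy send the full URL in the request line.
                                    url = self.path.replace("http://", "https://", 1)
                                    try:
                                        with urllib.request.urlopen(url) as upstream:
                                            body = upstream.read()
                                            status = upstream.status
                                            ctype = upstream.headers.get("Content-Type", "text/html")
                                    except OSError as exc:
                                        self.send_error(502, explain=str(exc))
                                        return
                                    self.send_response(status)
                                    self.send_header("Content-Type", ctype)
                                    self.send_header("Content-Length", str(len(body)))
                                    self.end_headers()
                                    self.wfile.write(body)

                            HTTPServer(("127.0.0.1", 8080), DowngradingProxy).serve_forever()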

                      1. 5

                          For occasional web browsing from OS 9, I have Squid running on a local server, acting as an HTTPS proxy. The client still connects over HTTPS, but to the Squid server, which accepts the older protocol versions that destination servers usually no longer accept.

                        1. 2

                          How do you have Squid configured? Is this using bumping?

                          1. 4

                            Yes, here’s the configuration that I got working. A lot of it is likely redundant

                        2. 2

                            Since legacy software shouldn’t be exposed to the wide Internet without at least some protective layer, I think HTTPS-to-HTTP proxies are a preferable option. There are some projects, though they aren’t as easy to use as I had hoped.

                          A proxy server can also perform some other adjustments to make pages more accessible to legacy browsers, e.g. inject polyfills as needed.

                          1. 2

                            Or, use a period browser that can be taught to forward HTTPS on (disclaimer: my project, previously posted): https://oldvcr.blogspot.com/2020/11/fun-with-crypto-ancienne-tls-for.html

                          1. 3

                              I wonder if anyone is already using Wasmer, or planning to use it, in a production environment.

                            1. 2

                              I recall some people using Nomad to run some wasm code via wasmer.

                                One of the hurdles of integrating Wasmer as a first-party driver was not being able to set resource limits. I don’t know if things have changed on this front.

                            1. 26

                              Pro tip: this applies to you if you’re a business too. Kubernetes is a problem as much as it is a solution.

                                Uptime is achieved by having more understanding of and control over the deployment environment, but Kubernetes takes that away. It attracts middle managers and CTOs because it seems like a silver bullet that doesn’t require getting your hands dirty, but in reality it introduces so much chaos and so many indirections into your stack that you end up worse off than before, all while you’re emptying your pockets for the experience.

                              Just run your shit on a computer like normal, it’ll work fine.

                              1. 9

                                This is true, but let’s not forget that Kubernetes also has some benefits.

                                  Self-healing. That’s what I miss the most with a pure NixOS deployment. If the VM goes down, it requires manual intervention to be restored, and I haven’t seen good solutions proposed for that yet. Maybe UptimeRobot triggering the CI when the host goes down is enough; the CI can then run terraform apply or some other provisioning script (see the sketch below).

                                Zero-downtime deployment. This is not super necessary for personal infrastructures but is quite important for production environments.

                                Per pod IP. It’s quite nice not to have to worry about port clashes between services. I think this can be solved by using IPv6 as each host automatically gets a range of IPs to play with.

                                  Auto-scaling. Again, not super necessary for personal infrastructure, but it’s nice to be able to scale beyond one host, and not to have to worry about which host a service lives on.
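
                                  To make the self-healing point concrete, here is roughly the kind of watchdog I have in mind. It is only a sketch; the health URL and the provisioning command are placeholders:

                                      # Rough sketch: if the host stops answering, re-run provisioning.
                                      # HEALTH_URL and the terraform command are placeholders.
                                      import subprocess
                                      import time
                                      import urllib.request

                                      HEALTH_URL = "https://example.org/health"

                                      def healthy() -> bool:
                                          try:
                                              with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
                                                  return resp.status == 200
                                          except OSError:
                                              return False

                                      while True:
                                          if not healthy():
                                              # terraform apply is idempotent, so re-running it is safe
                                              subprocess.run(["terraform", "apply", "-auto-approve"], check=False)
                                          time.sleep(60)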

                                1. 6

                                    Has anyone tried using Nomad for personal projects? It has self-healing, and with the raw exec driver one can run executables directly on NixOS without needing any containers. I have not tried it myself (yet), but I would be keen to hear about others’ experiences.
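
                                    From the docs, a minimal job using the raw exec driver seems to be about this much HCL (the names and binary path below are made up, and raw_exec has to be enabled on the client first):

                                        # Hypothetical service run straight on the host, no container.
                                        job "myapp" {
                                          datacenters = ["dc1"]

                                          group "app" {
                                            task "server" {
                                              driver = "raw_exec"

                                              config {
                                                command = "/usr/local/bin/myapp"
                                                args    = ["--port", "8080"]
                                              }
                                            }
                                          }
                                        }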

                                  1. 3

                                      I am experimenting with the HashiCorp stack while off for the holidays. I just brought up a Vagrant box (1GB RAM) with Consul, Docker and Nomad running (no jobs yet), and the overhead looks okay:

                                                  total        used        free      shared  buff/cache   available
                                    Mem:          981Mi       225Mi       132Mi       0.0Ki       622Mi       604Mi
                                    Swap:         1.9Gi       7.0Mi       1.9Gi
                                    

                                      but it’s probably too high to also fit Postgres, Traefik or Fabio, and a Rails app; 2GB will probably be plenty (I am kind of cheap, so the fewer resources the better).

                                      I have a side project running in ‘prod’ using Docker (for Postgres and my Rails app) along with Caddy running as a systemd service, but it’s kind of a one-off machine, so I’d like to move towards something like Terraform (next up on the list to get running) for bring-up, and Nomad for the reasons you’d want something like that.

                                      But… the question that keeps running through the back of my head: do I even need Nomad/Docker? For a prod env? Yes, it’s probably worth the extra complexity and overhead, but for personal stuff? Probably not… Netlify, Heroku, etc. are pretty easy and offer free tiers.

                                    1. 1

                                      I was thinking about doing this but I haven’t done due diligence on it yet. Mostly because I only have 2 droplets right now and nobody depends on what’s running on them.

                                    2. 1

                                      If you’re willing to go the Amazon route, EC2 has offered most of that for years. Rather than using the container as an abstraction, treat the VM as a container: run one main process per VM. And you then get autoscaling, zero downtime deploys, self-healing, and per-VM IPs.

                                      TBH I think K8s is a step backwards for most orgs compared to just using cloud VMs, assuming you’re also running K8s in a cloud environment.

                                      1. 2

                                        That’s a good point. And if you don’t care about uptime too much, autoscaling + spot instances is a pretty good fit.

                                        The main downside is that a load-balancer is already ~15.-/month if I remember correctly. And the cost can explode quite quickly on AWS. It takes quite a bit of planning and effort to keep the cost super low.

                                    3. 5

                                      IMO, Kubernetes’ main advantage isn’t in that it “manages services”. From that POV, everything you say is 100% spot-on. It simply moves complexity around, rather than reducing it.

                                      The reason I like Kubernetes is something entirely different: It more or less forces a new, more robust application design.

                                      Of course, many people try to shoe-horn their legacy applications into Kubernetes (the author running git in K8s appears to be one example), and this just adds more pain.

                                      Use K8s for the right reasons, and for the right applications, and I think it’s appropriate. It gets a lot of negative press for people who try to use it for “everything”, and wonder why it’s not the panacea they were expecting.

                                      1. 5

                                        I disagree that k8s forces more robust application design; fewer moving parts are usually a strong indicator of reliability.

                                          Additionally, I think k8s removes some of the pain of microservices–in the same way that a local anaesthetic makes it easier to keep your hand in boiling water–that would normally help people reconsider their use.

                                      2. 5

                                          And overhead. Those monster YAML files are absurd on so many levels.

                                        1. 2

                                          Just run your shit on a computer like normal, it’ll work fine.

                                          I think that’s an over-simplification. @zimbatm’s comment makes good points about self-healing and zero-downtime deployment. True, Kubernetes isn’t necessary for those things; an EC2 auto-scaling group would be another option. But one does need something more than just running a service on a single, fixed computer.

                                          1. 3

                                            But one does need something more than just running a service on a single, fixed computer.

                                              I respectfully disagree… I worked at a place that made millions over a few years with a single, comically overloaded DO droplet.

                                            We eventually made it a little happier by moving to hosted services for Mongo and giving it a slightly beefier machine, but otherwise it was fine.

                                              The single-machine design made things a lot easier to reason about and fix, and it made CI/CD simpler to implement as well.

                                            Servers with the right provider can stay up pretty well.

                                            1. 2

                                              Servers with the right provider can stay up pretty well.

                                              I was one of the victims of the DDOS that hit Linode on Christmas day (edit: in 2015; didn’t mean to omit that). DO and Vultr haven’t had perfect uptime either. So I’d rather not rely on single, static server deployments any more than I have to.

                                              1. 2

                                                I don’t see how your situation/solution negates the statement.

                                                You’ve simply traded one “something” (Kubernetes) with another (“the right provider”, and all that entails–probably redundant power supplies, network connections, hot-swappable hard drives, etc, etc).

                                                The complexity still exists, just at a different layer of abstraction. I’ll grant you that it does make reasoning about the application simpler, but it makes reasoning about the hardware platform, and peripheral concerns, much more complex. Of course that can be appropriate, but it isn’t always.

                                                I’m also unsure how a company’s profit margin figures into a discussion about service architectures…

                                                1. 5

                                                  I’m also unsure how a company’s profit margin figures into a discussion about service architectures…

                                                  There is no engineering without dollar signs in the equation. The only reason we’re being paid to play with shiny computers is to deliver business value–and while I’m sure a lot of “engineers” are happy to ignore the profit-motive of their host, it is very unwise to do so.

                                                  I’ll grant you that it does make reasoning about the application simpler, but it makes reasoning about the hardware platform, and peripheral concerns, much more complex.

                                                  That engineering still has to be done, if you’re going to do it at all. If you decide to reason about it, do you want to be able to shell into a box and lay hands on it immediately, or hope that your k8s setup hasn’t lost its damn mind in addition to whatever could be wrong with the app?

                                                  You’ve simply traded one “something” (Kubernetes) with another (“the right provider”, and all that entails–probably redundant power supplies, network connections, hot-swappable hard drives, etc, etc).

                                                    The complexity of picking which hosting provider you want to use (ignoring colocation issues) is orders of magnitude less than learning and handling k8s. Hosting is basically a commodity at this point, and barring the occasional amazingly stupid thing among the common names there’s a baseline of competency you can count on.

                                                  People have been sold this idea that hosting a simple server means racking it and all the craziness of datacenters and whatnot, and it’s just a ten spot and an ssh key and you’re like 50% of the way there. It isn’t rocket surgery.

                                                2. 1

                                                    Can you share more details about this?

                                                    I’ve always been impressed by teams/companies maintaining a very small fleet of servers, but I’ve never heard of any successful company running a single VM.

                                                  1. 4

                                                    It was a boring little Ubuntu server if I recall correctly, I think like a 40USD general purpose instance. The second team had hacked together an impressive if somewhat janky system using the BEAM ecosystem, the first team had built the original platform in Meteor, both ran on the same box along with Mongo and supporting software. The system held under load (mostly, more about that in a second), and worked fine for its role in e-commerce stuff. S3 was used (as one does), and eventually as I said we moved to hosted options for database stuff…things that are worth paying for. Cloudflare for static assets, eventually.

                                                    What was the business environment?

                                                    Second CTO and fourth engineering team (when I was hired) had the mandate to ship some features and put out a bunch of fires. Third CTO and fifth engineering team (who were an amazing bunch and we’re still tight) shifted more to features and cleaning up technical debt. CEO (who grudgingly has my respect after other stupid things I’ve seen in other orgs) was very stingy about money, but also paid well. We were smart and well-compensated (well, basically) developers told to make do with little operational budget, and while the poor little server was pegged in the red for most of its brutish life, it wasn’t drowned in bullshit. CEO kept us super lean and focused on making the money funnel happy, and didn’t give a shit about technical features unless there was a dollar amount attached. This initially was vexing, but after a while the wisdom of the approach became apparent: we weathered changes in market conditions better without a bunch of outstanding bills, we had more independence from investors (for better or worse), and honestly the work was just a hell of a lot more interesting due in no small part to the limitations we worked under. This is key.

                                                    What problems did we have?

                                                    Support could be annoying, and I learned a lot about monitoring on that job during a week where the third CTO showed me how to setup Datadog and similar tooling to help figure out why we had intermittent outages–eventual solution was a cronjob to kill off a bloated process before it became too poorly behaved and brought down the box. The thing is, though, we had a good enough customer success team that I don’t think we even lost that much revenue, possibly none. That week did literally have a day or two of us watching graphs and manually kicking over stuff just in time, which was a bit stressful, but I’d take a month of that over sitting in meetings and fighting matrix management to get something deployed with Jenkins onto a half-baked k8s platform and fighting with Prometheus and Grafana and all that other bullshit…as a purely random example, of course. >:|

                                                      The sore spots we had were basically just solved by moving particular resource-hungry things (database mainly) to hosting–the real value of which was having nice tooling around backups and monitoring, and which moving to k8s or similar wouldn’t have helped with. And again, it was only after a few years of profitable growth that traffic hit a point where that migration even seemed reasonable.

                                                    I think we eventually moved off of the droplet and onto an Amazon EC2 instance to make storage tweaks easier, but we weren’t using them in any way different than we’d use any other barebones hosting provider.

                                                    1. 4

                                                      Did that one instance ever go completely down (becoming unreachable due to a networking issue also counts), either due to an unforeseen problem or scheduled maintenance by the hosting provider? If so, did the company have a procedure for bringing a replacement online in a timely fashion? If not, then I’d say you all just got very lucky.

                                                      1. 1

                                                        Yes, and yes–the restart procedure became a lot simpler once we’d switched over to EC2 and had a hot spare available…but again, nothing terribly complicated and we had runbooks for everything because of the team dynamics (notice the five generations of engineering teams over the course of about as many years?). As a bonus, in the final generation I was around for we were able to hire a bunch of juniors and actually teach them enough to level them up.

                                                        About this “got very lucky” part…

                                                        I’ve worked on systems that had to have all of the 9s (healthcare). I’ve worked on systems, like this, that frankly had a pretty normal (9-5, M-F) operating window. Most developers I know are a little too precious about downtime–nobody’s gonna die if they can’t get to their stupid online app, most customers–if you’re delivering value at a price point they need and you aren’t specifically competing on reliability–will put up with inconvenience if your customer success people treat them well.

                                                        Everybody is scared that their stupid Uber-for-birdwatching or whatever app might be down for a whole hour once a month. Who the fuck cares? Most of these apps aren’t even monetizing their users properly (notice I didn’t say customers), so the odd duck that gets left in the lurch gets a hug and a coupon and you know what–the world keeps turning!

                                                        Ours is meant to be a boring profession with simple tools and innovation tokens spent wisely on real business problems–and if there aren’t real business problems, they should be spent making developers’ lives easier and lowering business costs. I have yet to see k8s deliver on any of this for systems that don’t require lots of servers.

                                                        (Oh, and speaking of…is it cheaper to fuck around with k8s and all of that, or just to pay Heroku to do it all for you? People are positively baffling in what they decide to spend money on.)

                                                      2. 1

                                                        eventual solution was a cronjob to kill off a bloated process before it became too poorly behaved and brought down the box … That week did literally have a day or two of us watching graphs and manually kicking over stuff just in time, which was a bit stressful,…

                                                        It sounds like you were acting like human OOM killers, or more generally speaking manual resource limiters of those badly-behaved processes. Would it be fair to say that sort of thing would be done today by systemd through its cgroups resource management functionality?

                                                        1. 1

                                                          We probably could’ve solved it through systemd with Limit* settings–we had that available at the time. For us, we had some other things (features on fire, some other stuff) that took priority, so just leaving a dashboard open and checking it every hour or two wasn’t too bad until somebody had the spare cycles to do the full fix.

                                              1. 1

                                                Does anyone know what they are (might be) using for the hash function? I’ve worked on similar locality-sensitive-hash problems before and find the properties of such hashes to be pretty interesting

                                                1. 1

                                                  According to their blog, they’re not telling

                                                  Fuzzy Hashing’s Intentionally Black Box

                                                  How does it work? While there has been lots of work on fuzzy hashing published, the innards of the process are intentionally a bit of a mystery. The New York Times recently wrote a story that was probably the most public discussion of how such technology works. The challenge was if criminal producers and distributors of CSAM knew exactly how such tools worked then they might be able to craft how they alter their images in order to beat it. To be clear, Cloudflare will be running the CSAM Screening Tool on behalf of the website operator from within our secure points of presence. We will not be distributing the software directly to users. We will remain vigilant for potential attempted abuse of the platform, and will take prompt action as necessary.

                                                  Which is unfortunate because that is much more interesting than the contents of this article.

                                                  1. 1

                                                    Thanks for the link! Seems like a reasonable call on their part.

                                                      Thinking on this a bit more, image fingerprinting seems like a really interesting problem; the problems I’ve thought about before all deal with byte streams, but with an image the hash has to be, like, perceptual? Not sure what the right word is, but it seems really interesting. I’ve probably found a weekend tangent haha

                                                    1. 3

                                                      This comparison of perceptual hashing is a good intro to the topic: https://tech.okcupid.com/evaluating-perceptual-image-hashes-okcupid/
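
                                                        The simplest one in that comparison, average hash, fits in a few lines. A rough sketch (needs Pillow, and it’s nothing like what the CSAM tooling actually does):

                                                            # Rough average-hash (aHash) sketch: shrink, grayscale, threshold on the mean.
                                                            # Needs Pillow. Real perceptual hashes (pHash, PDQ, etc.) are more involved.
                                                            from PIL import Image

                                                            def average_hash(path, size=8):
                                                                img = Image.open(path).convert("L").resize((size, size))
                                                                pixels = list(img.getdata())
                                                                mean = sum(pixels) / len(pixels)
                                                                bits = "".join("1" if p > mean else "0" for p in pixels)
                                                                return int(bits, 2)

                                                            def hamming_distance(a, b):
                                                                return bin(a ^ b).count("1")

                                                            # Two visually similar images should land within a few bits of each other.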

                                                1. 2

                                                    It is sad that CoreOS’ fleetctl didn’t take off, as it was mostly exactly that: a service scheduler on top of systemd. Probably the main problem was that it was too focused on Docker integration. However, I feel like there should be a revival of that concept, and I even wanted to work on something like that in Erlang. Handling service launching shouldn’t be that hard, as the D-Bus API for systemd is quite well documented, so you can build on top of that.

                                                  1. 2

                                                    An interesting take: Kubernetes API for systemd

                                                    https://github.com/miekg/vks

                                                  1. 6

                                                    Yes, it sucks.

                                                    But I’ll still take it over JSON any day, for defining infrastructure as code.

                                                      I just wish there was a human-workable alternative focused on minimalism. YAML is an unnecessarily complex clusterfuck.

                                                    1. 11

                                                      HCL is a decent DSL.

                                                      1. 1

                                                        HCL

                                                        Hmm. That one actually looks interesting.

                                                      2. 7

                                                        Toml?

                                                        1. 2

                                                          Ugh. Not a fan.

                                                          1. 3

                                                            Why not?

                                                        2. 6

                                                            So, there are three axes I look for in a replacement for either configuration or serialization.

                                                            One: What everyday problems do you run into?

                                                            For instance: can you encode every country code without the string no getting turned into false (see the sketch below)? Can you include comments? Can you read the damn thing, or is it too much markup? Do common validation tools exist? Do they have useful error messages?

                                                            Two: What capabilities do you have at the language layer?

                                                            For instance: can you encode references? How about circular references? What kind of validation is possible? What is the range of types?

                                                            Three: Does anyone else know about the language? Is it supported in most languages?

                                                          Unfortunately, so far none of the languages that fare well on questions 1 and 2 also do well on question 3. Dhall (for config) and ADL (Aspect Definition Language, for data serialization) come to mind as “solve 1 and 2 but not 3”.
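
                                                            For the curious, the country-code thing from point one looks like this with PyYAML, which follows YAML 1.1’s boolean rules:

                                                                # The classic "Norway problem": YAML 1.1 treats the bare scalar `no` as a boolean.
                                                                import yaml  # PyYAML

                                                                print(yaml.safe_load("country: no"))    # {'country': False}
                                                                print(yaml.safe_load("country: 'no'"))  # {'country': 'no'}  (quoting avoids it)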

                                                          1. 2

                                                            Dhall

                                                            Your point three is fixable, and Dhall is great tech. The community has been very responsive IME, to the point where I need to write a blog post about just how abnormally good they are.

                                                        1. 32

                                                          the next time I have to buy a UPS for a piece of equipment that requires power 24/7, I will buy a line-interactive UPS rather than a standby UPS

                                                              I’ve been wondering why one would get an office-grade standby UPS for a server (home or otherwise), but then I realized that someone who has never designed server installations has no way to even know what keywords to look for.

                                                              And now that fewer and fewer companies even have on-premises server rooms, it may be becoming somewhat obscure knowledge even among professional sysadmins.

                                                          Maybe it’s time for a collaborative “how to make a server closet” manual…

                                                          1. 20

                                                            I would certainly welcome a guide like that.

                                                            1. 8

                                                              I second this. It would be extremely valuable to some of us who are less familiar with this.

                                                            2. 7

                                                              That would be a lovely piece for all of us with nascent home datacenters. :-)

                                                              1. 5

                                                                “how to make a server closet” manual

                                                                Time to dig through the photos I still have for “how not to make a network/server closet!”

                                                                1. 5

                                                                    I’ve just started playing around with some home server gear, and the best general resource I’ve found is the r/homelab wiki. The Hardware Guide was particularly useful to orient myself to the world of decade-old enterprise servers, but its UPS subsection doesn’t mention this “line-interactive” terminology.

                                                                  1. 1

                                                                    Maybe it’s time for a collaborative “how to make a server closet” manual…

                                                                    That would be awesome.

                                                                    1. 1

                                                                      how to make a server closet

                                                                      please

                                                                    1. 15

                                                                      IPv6 is just as far away from universal adoption…as it was three years ago.

                                                                      That seems…pretty easily demonstrably untrue? While it’s of course not a definitive, be-all-end-all adoption metric, this graph has been marching pretty steadily upward for quite a while, and is significantly higher now (~33%) than it was in 2017 (~20%).

                                                                      (And as an aside, it’s sort of interesting to note the obvious effect of the pandemic pushing the weekday troughs in that graph upward as so many people work from home.)

                                                                      1. 7

                                                                          I wouldn’t count it as “adoption” if it’s basically hit or miss whether your provider does it or not. So they do the NATting for you?

                                                                        Still haven’t worked at any company (as an employee or being sent to the customer) where there was any meaningful adoption.

                                                                          My stuff is available via v4 and v6, unless I forget, since I don’t have IPv6 at home (I simply don’t need it). When I tried it, I had problems.

                                                                        Yes, I’m 100% pessimistic about this.

                                                                        1. 13

                                                                          I adopted IPv6 around 2006 and finally removed it from all my servers this year.

                                                                            The “increase” in “adoption” is likely just more mobile traffic, and some providers have native v6 and NAT64 and… shocker… it sucks.

                                                                            IPv4 will never go away and Geoff Huston is right: the future is NAT, always has been, always will be. The additional address space really isn’t needed, and every device doesn’t need its own dedicated IP for direct connections anyway. Your IP is not a telephone number; it’s not going to be permanent, and it’s not even permanent for servers because of GeoDNS anyway (or many servers behind load balancers, etc etc). IPs and ports are session identifiers, no more, no less.

                                                                          You’ll never get rid of the broken middle boxes on the Internet, so stop believing you will.

                                                                          The future is name-based addressing – separate from our archaic DNS which is too easily subverted by corporations and governments, and we will definitely be moving to a decentralized layer that runs on top of IP. We just don’t know which implementation yet. But it’s the only logical path forward.

                                                                          DNSSEC and IPv6 are failures. 20+ years and still not enough adoption. Put it in the bin and let’s move on and focus our efforts on better things that solve tomorrow’s problems.

                                                                          1. 21

                                                                              What I find so annoying about NAT is that it makes it hard or impossible to send data from one machine to another, which was pretty much the point of the internet. Now you can only send data to servers. IPv6 was supposed to fix this.

                                                                            1. 8

                                                                              Now you can only send data to servers

                                                                              It’s almost as if everyone that “counts” has a server, so there’s no need for everyone to have one. This is coherent with the growing centralisation of the Internet.

                                                                              1. 19

                                                                                It just bothers me that in 2020 the easiest way to share a file is to upload to a server and send the link to someone. It’s a bit like “I have a message for you, please go to the billboard at sunshine avenue to read it.”.

                                                                                1. 4

                                                                                  There are pragmatic reasons for this. If the two machines are nearby, WiFi Direct is a better solution (though Apple’s AirDrop is the only reliable implementation I’ve seen and doesn’t work with non-Apple things). If the two machines are not near each other, they need to be both on and connected at the same time for the transfer to work. Sending to a mobile device, the receiver may prefer not to grab the file until they’re on WiFi. There are lots of reasons either endpoint may remove things. Having a server handle the delivery is more reliable. It’s more analogous to sending someone a package in a big truck that will wait outside their house until they’re home and then deliver it.

                                                                                  1. 3

                                                                                    Bittorrent and TCP are pretty reliable. You’re right about the ‘need to be connected at the same time’ though.

                                                                                    1. 2

                                                                                      Apple’s AirDrop is the only reliable implementation I’ve seen and doesn’t work with non-Apple things

                                                                                      Have you seen opendrop?

                                                                                      Seems to work fine for me, although it’s finicky to set up.

                                                                                      https://github.com/seemoo-lab/opendrop

                                                                                    2. 2

                                                                                      I think magic wormhole is easier for the tech crowd, but still requires both systems to be on at the same time.

                                                                                      1. 1

                                                                                        https://webwormhole.io/ works really well!

                                                                                    3. 7

                                                                                      This is coherent with the growing centralisation of the Internet.

                                                                                      My instinct tells me this might not be so good.

                                                                                      1. 4

                                                                                        So does mine. So does mine.

                                                                                      2. 2

                                                                                      Plus ça change…

                                                                                        On the other hand, servers have never been more affordable or generally accessible: all you need is like $5 a month and the time and effort to self-educate. You can choose from a vast range of VPS providers, free software, and knowledge sources. You can run all kinds of things in premade docker containers without having much of a clue as to how they work. No, it’s not the theoretical ideal by any means, but I don’t see any occasion for hand-wringing.

                                                                                        1. 1

                                                                                          I’ve always assumed the main thing holding v6 back is the middle-men of the internet not wanting to lose their power as gatekeepers.

                                                                                        2. 6

                                                                                            Nobody in their right mind is going to use client machines without a firewall protecting them, and no firewall is going to accept unsolicited traffic from the wider internet by default.

                                                                                            Which means you need some UPnP-like mechanism on the gateway anyway. Not to map a port, but to open a port to a client address.

                                                                                            Btw: I’m a huge IPv6 proponent for other reasons (mainly to not give centralized control to very few very wealthy parties due to address starvation), but the not-possible-to-open-connections argument I don’t get at all.

                                                                                          1. 8

                                                                                            Nobody in their right mind would let a gazillion services they don’t even know about run on their machines and let those services be contacted from the outside.

                                                                                              Why do (non-technical) people need a firewall to begin with? Mainly because they don’t trust the services that run on their machines to be secure. The correct solution is to remove those services, not add a firewall or NAT that requires traversing.

                                                                                            Though you were talking about UPnP, so the audience there is clearly the average non-technical Windows user, who doesn’t know how to configure their router. I have no good solution for them.

                                                                                            1. 8

                                                                                              Why do (non-technical) people need a firewall to begin with? Mainly because they don’t trust the services that run on their machines to be secure

                                                                                              Many OSes these days run services listening on all Interfaces. Yes, most of them could be rebound to localhost or the local network interface, but many don’t provide easy configurability.

                                                                                              Think stuff like portmap which is still required for NFS in many cases. Or your print spooler. Or indeed your printer’s print spooler.

                                                                                              This stuff should absolutely not be on the internet and a firewall blanket-prevents these from being exposed. You configure one firewall instead of n devices running m services.

                                                                                              1. 3

                                                                                                Crap, good point, I forgot about stuff on your local network you literally cannot configure effectively. Well, we’re back to configuring the router, then.

                                                                                            2. 1

                                                                                              If the firewall is in the gateway at home, then you can control it, and you can decide to forward ports and allow incoming connections to whatever machine behind it. If your home NAT is behind a CGNAT you don’t control, you are pretty much out of options for incoming connections.

                                                                                              IPv6 removes the need for CGNAT, fixing this issue.

                                                                                              1. 2

                                                                                                Of course but I felt like my parent poster was talking from an application perspective. And for these not much changes. An application you make and deploy on somebodies machine still won’t be able to talk to another instance of your application on another machine by default. Stuff like STUN will remain required to trick firewalls into forwarding packets.

                                                                                            3. 3

                                                                                              Yeah but this is not a fair statement. If we had no NAT this same complaint would exist and it would be “What I find so annoying about FIREWALLS is they make it hard or impossible to send data from one machine to another…”

                                                                                              But do you really believe having IPv6 would allow arbitrary direct connections between any two devices on the internet? There will still have to be some mechanism for securely negotiating the session. NAT doesn’t really add that much more of a burden. The problem is when people have terribly designed networks with double NAT. These same people likely would end up with double firewalls…

                                                                                              1. 2

                                                                                                Of course, NAT has been invented for a reason, and I’d prefer having NAT over not having NAT. But for those of us that want to play around with networks, it’s a shame that we can’t do it without paying for a server anymore.

                                                                                                1. 1

                                                                                                  I really do find it easier to make direct connections between IPv6 devices!

                                                                                                  Most of the devices I want to talk to each other are both behind an IPv4 NAT, so IPv6 allows them to contact each other directly with STUN servers.

                                                                                                  Even so, Tailscale from the post linked is even easier to setup and use than IPv6, I’m a fan.

                                                                                              2. 17

                                                                                                The “increase” in “adoption” is likely just more mobile traffic

                                                                                                Even if so, why the scare quotes? They’re network hosts speaking Internet Protocol…do they not “count” for some reason?

                                                                                                You’ll never get rid of the broken middle boxes on the Internet, so stop believing you will.

                                                                                                Equipment gets phased out over time and replaced with newer units. Devices in widespread deployment, say, 10 years ago probably wouldn’t have supported IPv6 gracefully (if at all), but guess what? A lot of that stuff’s been replaced by things that do. Sure, there will continue to be shitty middleboxes needlessly breaking things on the internet, but that happens with IPv4 already (hard to think of a better example than NAT itself, actually).

                                                                                                It’s uncharacteristic because I’m generally a pessimistic person (and certainly so when it comes to tech stuff), but I’d bet that we’ll eventually see IPv6 become the dominant protocol and v4 fade into “legacy” status.

                                                                                                1. 4

                                                                                                  I participated in the first World IPv6 Day back in 2011. We begged our datacenter customers to take IPv6. Only one did. Here’s how the conversation went with every customer:

                                                                                                  “What is IPv6?”

                                                                                                  It’s a new internet protocol

                                                                                                  “Why do I need it?”

                                                                                                  It’s the future!

                                                                                                  “Does anyone in our state have IPv6?”

                                                                                                  No, none of the residential ISPs support it or have an official rollout plan. (9 years later – still nobody in my state offers IPv6)

                                                                                                  “So why do I need it?”

                                                                                                  Some people on the internet have IPv6 and you would give them access to connect to you with IPv6 natively.

                                                                                                  “Don’t they have IPv4 access too?”

                                                                                                  Yes

                                                                                                  “So why do I need it?”

                                                                                                  edit: let’s also not forget that the BCP for addressing has changed multiple times. First, customers should get assigned a /80 for a single subnet. Then we should use /64s. Then they should get a /48 so they can have their own subnets. Then they should get a /56 because maybe /48 is too big?

                                                                                                  Remember when we couldn’t use /127 for ptp links?

                                                                                                  As discussed in [RFC7421], "the notion of a /64 boundary in the
                                                                                                  address was introduced after the initial design of IPv6, following a
                                                                                                  period when it was expected to be at /80".  This evolution of the
                                                                                                  IPv6 addressing architecture, resulting in [RFC4291], and followed
                                                                                                  with the addition of /127 prefixes for point-to-point links, clearly
                                                                                                  demonstrates the intent for future IPv6 developments to have the
                                                                                                  flexibility to change this part of the architecture when justified.
                                                                                                  
                                                                                                2. 10

                                                                                                  I adopted IPv6 around 2006 and finally removed it from all my servers this year.

                                                                                                  Wait, you had support for IPv6 and your removed it? Did leaving it working cost you?

                                                                                                  1. 3

                                                                                                    Yes it was a constant source of failures. Dual stack is bad, and people using v6 tunnels get a terrible experience. Sixxs, HE, etc should have never offered tunneling services

                                                                                                    1. 8

                                                                                                      I’m running dual stack on the edge of our production network, in the office and at my home. I have never seen any interference of one stack with another.

                                                                                                      The only problem I have seen was that some end-users had broken v6 routing and couldn’t reach our production v6 addresses, but that was quickly resolved. The reverse has also been true in the past (broken v4, working v6), so I wouldn’t count that against v6 in itself, though I do agree that it probably takes longer for the counter party to notice v6 issues than they would v4 ones.

                                                                                                      But I absolutely cannot confirm v6 to be a “constant source of failures”

                                                                                                      1. 3

                                                                                                        The only problem I have seen was that some end-users had broken v6 routing and couldn’t reach our production v6 addresses, but that was quickly resolved.

                                                                                                        This is the problem we constantly experienced in the early 2010s. Broken OSes, broken transit, broken ISPs. The customer doesn’t care what the reason is, they just want it to work reliably 100% of the time. It’s also not fun when, due to Happy Eyeballs and latency changes, the client can switch between v4 and v6 at random.
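
                                                                                                        For anyone who hasn’t watched the mechanism up close: modern stacks race v6 and v4 connection attempts with a small head start for v6, so the winning family really can differ from one request to the next. A minimal sketch using asyncio’s built-in support (Python 3.8+, example.com is just a placeholder):

                                                                                                            import asyncio

                                                                                                            async def main() -> None:
                                                                                                                # Race the host's AAAA and A addresses; each new attempt starts
                                                                                                                # 250 ms after the previous one, and the first to complete wins.
                                                                                                                reader, writer = await asyncio.open_connection(
                                                                                                                    "example.com", 443, ssl=True, happy_eyeballs_delay=0.25
                                                                                                                )
                                                                                                                print("connected to", writer.get_extra_info("peername"))
                                                                                                                writer.close()
                                                                                                                await writer.wait_closed()

                                                                                                            asyncio.run(main())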

                                                                                                      2. 1

                                                                                                        Is there any data on what the tunnelling services are used for though? Just asking because some friends were just using them for easier access to VMs that weren’t public per se, or devices/services in a network (with the appropriate firewall rules to only allow trusted sources)

                                                                                                    2. 2

                                                                                                      This is the first time I downvoted a post so I figure I’d explain why.

                                                                                                        For one, you point to a future of more of the status quo: more NAT, more IPv4. But at the same time you also claim the world is going to drop one of the biggest pieces of that status quo, DNS, for a wholly new name resolution service? Also, how would a decentralized networking layer STUN/TURN its way through the 20+ layers of NAT we’re potentially looking at in our near future?

                                                                                                      1. 1

                                                                                                        Oh no, we aren’t going to drop DNS, we will just not use it for the new things. Think Tor hidden services, think IPFS (both have problems in UX and design, but are good analogues). These things are not directly tied to legacy DNS; they can exist without it. Legacy DNS will exist for a very long time, but it won’t always be an important part of new tech.

                                                                                                      2. 2

                                                                                                        The future is name-based addressing – separate from our archaic DNS which is too easily subverted by corporations and governments, and we will definitely be moving to a decentralized layer that runs on top of IP. We just don’t know which implementation yet. But it’s the only logical path forward.

                                                                                                          So this would solve the IPv4 addressing problem? While I certainly agree with “every device doesn’t need its own dedicated IP”, the number of usable IPv4 addresses is about 3.3 billion (excluding multicast, class E, rfc1918, localhost, /8s assigned to Ford, etc.), which really isn’t all that much if you want to connect the entire world. It’ll be a tight fit at best.
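
                                                                                                          That 3.3 billion figure is roughly right; a back-of-the-envelope with Python’s ipaddress module (before subtracting the legacy /8s like Ford’s):

                                                                                                              import ipaddress

                                                                                                              reserved = [
                                                                                                                  "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
                                                                                                                  "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16",
                                                                                                                  "224.0.0.0/4",   # multicast
                                                                                                                  "240.0.0.0/4",   # class E
                                                                                                              ]
                                                                                                              usable = 2 ** 32 - sum(ipaddress.ip_network(n).num_addresses for n in reserved)
                                                                                                              print(f"{usable:,}")   # ~3.7 billion; legacy /8 allocations eat the rest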

                                                                                                        I wonder how hard it would be to start a new ISP, VPS provider, or something like that today. I would imagine it’s harder than 10 years ago; who do you ask for IP addresses?

                                                                                                        1. 2

                                                                                                          Some of the pressure on IPv4 addresses went away with SRV records. For newer protocols that baked in SRV from the start, you can run multiple (virtual) machines in a data center behind a single public IPv4 address and have the service instances run on different ports. For things like HTTP, you need a proxy because most (all?) browsers don’t look for SRV records. If you consider IP address + port to be the thing a service needs, we have a 48-bit address space, which is a bit cramped for IoT things, but ample for most server-style things.
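
                                                                                                          A minimal sketch of the client side, assuming the third-party dnspython package and a hypothetical _myservice._tcp record (this is how SRV-aware protocols like XMPP find the right port behind a shared address):

                                                                                                              import dns.resolver  # third-party: dnspython

                                                                                                              # hypothetical zone entry:
                                                                                                              #   _myservice._tcp.example.com. 3600 IN SRV 10 5 8443 box7.example.com.
                                                                                                              answers = dns.resolver.resolve("_myservice._tcp.example.com", "SRV")
                                                                                                              # simplification: lowest priority first (proper weighted selection omitted)
                                                                                                              for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
                                                                                                                  print(f"try {rr.target.to_text()} on port {rr.port}")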

                                                                                                      3. 5

                                                                                                        That graph scares me tbh. It looks consistent with an S-curve which flattens out well before 50%. I hope that’s wrong, and it’s just entering a linear phase, but you’d hope the exponential-ish growth phase would at least have lasted a lot longer.

                                                                                                        1. 3

                                                                                                          Perhaps there’s some poetic licence there, but 13% in 3 years isn’t exactly a blazing pace, and especially if we assume that the adoption curve is S-shaped, it’s going to take at least another couple of decades for truly universal adoption.

                                                                                                          1. 7

                                                                                                            It’s not 13%, it’s 65%. (13 percentage points.)

                                                                                                            1. 1

                                                                                                              Yup, right about two decades to get to 90% with S-curve growth. I mean, it’s not exponential growth, but it’s steady and two decades is about 2 corporate IT replacement lifecycles.

                                                                                                            2. 2

                                                                                                              That seems…pretty easily demonstrably untrue? While it’s of course not a definitive, be-all-end-all adoption metric, this graph has been marching pretty steadily upward for quite a while, and is significantly higher now (~33%) than it was in 2017 (~20%).

                                                                                                              I think that’s too simplistic of an interpretation of that chart; if you look at the “Per-Country IPv6 adoption” you see there are vast differences between countries. Some countries like India, Germany, Vietnam, United States, and some others have a fairly significant adoption of IPv6, whereas many others have essentially no adoption.

                                                                                                              It’s a really tricky situation, because it requires the entire world to cooperate. How do you convince Indonesia, Mongolia, Nigeria, and many others to use IPv6?

                                                                                                              So I’d argue that “IPv6 is just as far away from universal adoption” seems pretty accurate; once you start the adoption process it seems to take at least 10-15 years, and many countries haven’t even started yet.

                                                                                                              1. 1

                                                                                                                How do you convince Indonesia, Mongolia, Nigeria, and many others to use IPv6?

                                                                                                                By giving them too few IPv4 blocks to begin with? Unless they’re already hooked on carrier grade NAT, the scarcity of addresses could be a pretty big incentive to switch.

                                                                                                                1. 1

                                                                                                              I’m not sure if denying an economic resource to those kinds of countries is really fair; certainly in a bunch of cases it’s probably just a lack of resources/money (or more pressing problems, like in Syria, Afghanistan, etc.)

                                                                                                              I mean, we (the Western “rich” world) shoved the problem ahead of us for over 20 years, and now suddenly the often less developed countries actually using the fewest addresses need to spend a lot of resources to quickly implement IPv6? Meh.

                                                                                                                  1. 2

                                                                                                                My comment wasn’t normative, but descriptive. Many countries are already starved for IPv4 addresses.

                                                                                                                    now suddenly the often lesser developed countries actually using the least amount of addresses need to spend a lot of resources to quickly implement IPv6?

                                                                                                                If “suddenly” means they knew it would happen about two decades ago, and “quickly” means they’d have had over 10 years to get to it… In any case, IPv6 has already been implemented in pretty much every platform out there. It’s more a matter of deployment now. The end points are already capable. We may have some routers that still aren’t IPv6 capable, but there can’t be that many by now, even in poorer countries. I don’t see anyone spending “a lot” of resources.

                                                                                                              2. 1

                                                                                                                perhaps the author is going by the absolute number of hosts rather than percentage

                                                                                                              1. 5

                                                                                                              Meta opinion about Cloudflare: I tried to avoid Cloudflare, but if I have to use a CDN, I must allow some sort of pseudo-MITM. I just chose CF over something else for my blog.

                                                                                                                1. 4

                                                                                                                  You can ask browsers to verify that your CDN is returning what you want instead of giving them everything.
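
                                                                                                                Presumably this means Subresource Integrity: you publish a hash alongside each asset reference and the browser refuses to use a CDN response that doesn’t match. A minimal sketch of producing the value (the filename is just an example):

                                                                                                                    import base64
                                                                                                                    import hashlib

                                                                                                                    with open("app.js", "rb") as f:
                                                                                                                        digest = hashlib.sha384(f.read()).digest()

                                                                                                                    # value for: <script src="https://cdn.example.net/app.js" integrity="...">
                                                                                                                    print("sha384-" + base64.b64encode(digest).decode())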

                                                                                                                  1. 1

                                                                                                                    Yeah, but only for assets/media. These days it’s common/desirable/??? for the web page itself to come through the CDN.

                                                                                                                    1. 2

                                                                                                                  If your page is dynamic, then it doesn’t make any sense for the web page to come through the CDN. If your page is just a bunch of links for your JS SPA, you should still serve it yourself for security, and because it isn’t that big anyway. If your page is fully static, you should probably still serve it yourself, just to be sure that nobody is inserting anything unwanted into your page.

                                                                                                                      1. 2

                                                                                                                        If your page is dynamic, then it doesn’t make any sense for the web page to come through the CDN

                                                                                                                        The route through the CDN’s private network should usually be faster than over the public internet.

                                                                                                                        1. 1

                                                                                                                    It’s not that expensive to have several servers in multiple locations to handle load; you just need to architect for it. Paying for access through the CDN’s private network is about the same cost.

                                                                                                                      2. 1

                                                                                                                        Exactly - my use case is the caching of the HTML web page itself.

                                                                                                                    2. 2

                                                                                                                  Where is your blog hosted? Why would it need a CDN? I would be surprised if it does. My other comment in this thread gives some color on why I’ve never needed to MITM my own site:

                                                                                                                      https://lobste.rs/s/xbl6uc/cloudflare_outage_on_july_17_2020#c_nt8atu

                                                                                                                      1. 4

                                                                                                                        Without a CDN your site and its assets are stored in a single location worldwide. If you happen to have readers who do not live close to where the site is hosted, latency will likely be noticeable. Is it using TLS? 3 round trips for the handshake. That’s before even downloading html, css, js and images.

                                                                                                                    A quick check on https://wondernetwork.com/pings/Los+Angeles/Barcelona shows that ping between Western Europe and the US west coast is around 150ms. That’s not nothing!
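
                                                                                                                    Rough numbers, assuming a TCP + TLS 1.2 handshake (three round trips before the first byte of HTML) and a CDN edge roughly 20ms away:

                                                                                                                        rtt_origin_ms = 150   # Barcelona <-> Los Angeles, from the ping table above
                                                                                                                        rtt_edge_ms = 20      # assumed nearby CDN PoP
                                                                                                                        handshake_rtts = 3    # 1 for TCP, 2 for TLS 1.2 (TLS 1.3 shaves one off)

                                                                                                                        print("direct to origin:", handshake_rtts * rtt_origin_ms, "ms before the first byte")
                                                                                                                        print("via a nearby edge:", handshake_rtts * rtt_edge_ms, "ms before the first byte")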

                                                                                                                    As you pointed out, it’s not necessary, but it does visibly improve the user experience for a lot of people worldwide.

                                                                                                                        1. 1

                                                                                                                          WAN latency can be an issue for really optimized sites, but most sites aren’t really optimized.

                                                                                                                      Example: I just went to webpagetest.org and put in nytimes.com.

                                                                                                                          https://www.webpagetest.org/result/200722_CA_fc5e1e19a8f5c3402fc5cb91be7b4824/

                                                                                                                          It took 15 seconds to load the page. nytimes presumably has all the CDNs in the world.

                                                                                                                          Then I go to lobste.rs, and it takes less than a second:

                                                                                                                          https://www.webpagetest.org/result/200722_7S_ec97779d98563256c0140997a431e19f/

                                                                                                                      lobste.rs is hosted at a single location as far as I know (because CDNs would break the functionality of the site, i.e. the dynamic elements. It’s running off a single traditional database AFAIK)

                                                                                                                          So the bottleneck is elsewhere, and I claim that’s overwhelmingly common.


                                                                                                                      So it could be an issue, but probably not, and that’s why I asked for specific examples. Most sites have many other things to optimize first. If it were “free”, then sure, use a CDN. But it also affects correctness and has a serious downside security-wise (e.g. a self-MITM).

                                                                                                                        2. 3

                                                                                                                          It’s self hosted on a box in my house, so having a CDN ensures my ISP doesn’t hate me if I get too much traffic, and other services I host aren’t impacted.

                                                                                                                          For folks concerned, I also provide a tor onion address for the blog which totally bypasses CF.

                                                                                                                      1. 7

                                                                                                                        Isn’t this conflating different things, hosting and availability?

                                                                                                                    Cloudflare [to my knowledge] doesn’t host websites. It’s about improving availability via distributed caching and DDoS mitigation. So it’s conceptually more at the routing level.

                                                                                                                        It’s pretty clear that the backbone of the Internet is more centralized and less resilient than it should be. There are too many instances of one person making a mistake in a routing configuration update and blowing up IP connectivity for millions of people.

                                                                                                                        1. 3

                                                                                                                          Cloudflare somewhat recently launched Cloudflare Workers, which lets you publish websites “hosted” on their servers: https://workers.cloudflare.com/sites

                                                                                                                          1. 3

                                                                                                                            Hosting is a perfectly reasonable and accurate way to describe what CDNs do for their customers, I think.

                                                                                                                          1. 3

                                                                                                                            Isn’t this very very similar to what Tailscale does? Just at a lower scale?

                                                                                                                            1. 3

                                                                                                                              I’d say that Tailscale does way more than NAT traversal. It is a great service.

                                                                                                                          The problem is that Tailscale is not open source. To achieve a similar service you have to build it on your own…

                                                                                                                              1. 2

                                                                                                                                At least parts of it are open source: https://github.com/tailscale

                                                                                                                            1. 4
                                                                                                                              1. 4

                                                                                                                                Beyond the technical analysis, it’s the writing that keeps me interested in reading these: clear, concise, objective and with a tiny bit of flair.

                                                                                                                                1. 1

                                                                                                                                  Agreed. I also appreciate https://jepsen.io/ethics. To me, it is one of the most important Jepsen documents in that it supports every other analysis.

                                                                                                                                1. 3

                                                                                                                                  Gleam looks more and more appealing with such nice libraries.

                                                                                                                              Slightly out of scope, but it would definitely help adoption if there were a simple example of deploying such an app on a platform like Heroku. I’d be happy to help on the Heroku part; I’m very unfamiliar with the Erlang deployment process.

                                                                                                                                  1. 2

                                                                                                                                    Hello! Deployment documentation is certainly a top priority, and something we consider a responsibility of the core team!

                                                                                                                                    Right now I’m working a lot on tooling so it’s hard to write documentation as it’s in a state of flux. For now users will have to refer to the Erlang documentation I’m afraid https://www.rebar3.org/docs/releases

                                                                                                                                    1. 2

                                                                                                                                For now we could have a guide about using Heroku + Mix/Elixir with some Gleam modules. That’s what I’m currently doing for my Gleam deployments, though my setup is rather messy.

                                                                                                                                    1. 13

                                                                                                                                      One of the easiest ways to get more out of our database is by introducing a new component to the system: the cache layer.

                                                                                                                                      In my experience, caching should only be used to reduce latency NOT to reduce database load. If your database can’t operate without an external caching layer (redis/memcached) you’re in big trouble when your cache hit ratio drops (restarting instances, cache key version update, etc.) It’s very easy to fall into this trap, and requires heavy architecture changes to get out of it.
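
                                                                                                                                To make the failure mode concrete, a minimal cache-aside sketch (redis-py for the cache; db_fetch_user is a hypothetical database helper):

                                                                                                                                    import json

                                                                                                                                    import redis  # third-party: redis-py

                                                                                                                                    cache = redis.Redis()

                                                                                                                                    def get_user(user_id: int) -> dict:
                                                                                                                                        # Cache-aside: the cache is an optimization, never a dependency.
                                                                                                                                        # On a miss (cold restart, key-version bump, mass eviction) every
                                                                                                                                        # call falls through to the database, so the database has to be
                                                                                                                                        # provisioned to survive a ~0% hit ratio.
                                                                                                                                        hit = cache.get(f"user:{user_id}")
                                                                                                                                        if hit is not None:
                                                                                                                                            return json.loads(hit)
                                                                                                                                        row = db_fetch_user(user_id)  # hypothetical: the real query lives here
                                                                                                                                        cache.set(f"user:{user_id}", json.dumps(row), ex=300)
                                                                                                                                        return row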

                                                                                                                                      1. 1

                                                                                                                                        this is a really interesting (and I’m guessing hard-won) perspective.

                                                                                                                                        I’ve never been bitten by this but also never hosted something where a bit of jank during warm-up was a problem

                                                                                                                                      1. 14

                                                                                                                                  I guess this is a little off topic, but creating a browser engine is so difficult that I wonder if anybody has considered creating Markdown browsers and communities around them?

                                                                                                                                        The key ideas would be:

                                                                                                                                        • Serve markdown (maybe an extended version) over HTTPS
                                                                                                                                        • Reuse existing server infrastructure
                                                                                                                                        • Only text, images, video, and audio - no client scripting
                                                                                                                                        • Leave all rendering decisions to the browser
                                                                                                                                        • Participating sites would (ideally) only link to other markdown sites

                                                                                                                                        The HTML and Javascript web seems to get more and more like TV every day, with less user control and more centralized/corporate control. A new browser engine might help, but it feels like it’s beyond saving at this point.
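
                                                                                                                                  A client for the idea above can stay tiny. A terminal-only sketch, assuming a hypothetical site that serves plain markdown over HTTPS and using the third-party rich package as one possible renderer:

                                                                                                                                      import urllib.request

                                                                                                                                      from rich.console import Console   # third-party; any renderer would do
                                                                                                                                      from rich.markdown import Markdown

                                                                                                                                      def browse(url: str) -> None:
                                                                                                                                          # The server only ships text/markdown; every rendering decision
                                                                                                                                          # (fonts, colours, layout) is left to the client.
                                                                                                                                          with urllib.request.urlopen(url) as resp:
                                                                                                                                              text = resp.read().decode("utf-8")
                                                                                                                                          Console().print(Markdown(text))

                                                                                                                                      browse("https://example.com/index.md")   # hypothetical markdown site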

                                                                                                                                        1. 25

                                                                                                                                          https://gemini.circumlunar.space/

                                                                                                                                          Gemini is a new, collaboratively designed internet protocol, which explores the space inbetween gopher and the web, striving to address (perceived) limitations of one while avoiding the (undeniable) pitfalls of the other.

                                                                                                                                    While not a browser per se, this is similar in spirit to your markdown browser idea.

                                                                                                                                          1. 12

                                                                                                                                            Leave all rendering decisions to the browser

                                                                                                                                            I’m torn on this. I don’t really care for the CSS or layout for random news sites, but at the same time I really like the distinctive and wacky styles I see on people’s personal sites. Removing that element of individuality would, IMO, make the web more corporate.

                                                                                                                                            1. 6

                                                                                                                                              Sounds kind of like the existing Gopherverse, sans HTTPS.

                                                                                                                                              1. 5

                                                                                                                                                Reminds me a bit of this post calling for a “Khyber Pass Browser”. I saw it in another forum, so I’ll paste my comments here as they also apply to your idea, and I’m intrigued by the design space and problem of staying simple:

                                                                                                                                                What are your use cases or goals? I ask because I am ancient, and this sounds like a design doc for Mosaic 1.0, down to opening an external application to deal with those newfangled jpegs.

                                                                                                                                                Depending on those high-level questions, maybe you want to lean way, way more into unix and do the most extreme version of “defer any content-specific rendering to user-specified helper programs” and like make a FUSE filesystem for browsing IPFS/DataShards or similar? Then you don’t even have to define a document format and write a renderer. (But more likely there’s some context/assumptions I’m missing.)

                                                                                                                                        [discussion turned to the “should be implementable by a motivated undergrad in a year of free time” heading]

                                                                                                                                                I think an undergrad with a high-level binding like fusepy could bang out an integration pretty quickly. But I’m not married to the idea, I was throwing it out there to try to figure out what’s important to you in this project, same with the use case/goals question. Is it networking? Is it a new document markup? Is it use of mimetypes?

                                                                                                                                                A point I meant to make with the comparison: Mosaic was an undergrad project and thirty years later, here we are with browsers that have 10x more code than the OS it ran on. What about this project keeps it from growing? How would you allow it to change and adapt, but not grow? That’s a really fascinating problem; how do you repeal Zawinski’s Law? Is there something in your design space that you think will help a KBP become a living fossil?

                                                                                                                                                  1. 1

                                                                                                                                            I could see myself using that, though I assume it’s going to be mostly for personal websites, so one will still need a conventional browser as well.

                                                                                                                                                    Really interesting idea