1. 6

    I realize this piece has a strong whiff of PR to it, but I’m legitimately interested in the idea of moving past the “local development environment” as a standard practice. Long ago I got really excited about local Vagrant and Docker for development, and that hasn’t really panned out for me in practice. I’ve been watching cloud development environments with great interest for a while and haven’t had a chance to invest time into them.

    Is this the way?

    1. 27

      Is this the way?

      Not unless you’d like to see the end of accessible general purpose compute in your lifetime.

      1. 5

        I’m convinced it will happen regardless. Too few people care about it passionately. General purpose computing will become a hobby and cult like the Amiga is today.

        1. 2

          Also, it’ll probably only be relevant if you’ve got a project so big that a fresh clone-to-dev environment takes 45 minutes, and you’re not working with real hardware but with something made with replication in mind, like web services. Oh, and you’d better not have any network problems.

        2. 2

          This could be so, so powerful if the compilation within those codespaces could also be pushed to distributed cloud-build instances. I’d be dying to use this if it came with a prebuilt ccache of previously compiled object files. I’m on a 28-core machine with 64 GB of RAM and building Firefox still takes up to ten minutes. I know it can be much less with a distributed compiler like icecc.
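
          For what it’s worth, ccache and icecream can already be chained today; a rough sketch (paths and job counts are illustrative):

              # let ccache serve hits locally and hand misses to icecream for distribution
              export CCACHE_PREFIX=icecc
              export PATH=/usr/lib/ccache:$PATH   # ccache's compiler shims (path varies by distro)
              make -j64                           # parallelism well beyond the local core count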

          1. 3

            I think this will be the next step. The first and easiest development workflow to cover is a scenario that matches the remote environment as closely as possible (e.g. Linux, no UI, etc.). So Codespaces is perfect as-is for web development in Python, Ruby, PHP, JS. The next step would be service development, where you combine Remote Execution (https://docs.bazel.build/versions/main/remote-execution.html) with Codespaces. It’s a bit tricky because now you have to deal with either multiple build systems, which is very difficult, or enforce a single supported build system (e.g. Bazel). But at that point you will have very fast Rust/C/C++ etc. compilation and can nicely develop there as well. The problem with Codespaces is when it comes to mobile or GUI development, or, worst case, 3D software (games). I am curious to see how they will solve that.
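
            A minimal sketch of what wiring a codespace to remote execution might look like, assuming a hypothetical remote endpoint (the flags are standard Bazel ones, the URLs are made up):

                # .bazelrc checked into the repo, so the codespace picks it up
                build --remote_executor=grpcs://remote.example.com
                build --remote_cache=grpcs://remote.example.com
                build --jobs=200   # fan out well beyond the local core count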

            1. 3

              Back when I used to build Chromium all the time, the worst part was linking (because it happened every time, even if I only touched one source file). And [current] linkers are both non-parallelizable and heavily I/O-bound, so a distributed build system doesn’t help. The only thing that helped was putting the object files on a RAM disk.
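
              If anyone wants to try the RAM disk trick, it’s essentially one tmpfs mount over the build output directory (size and path are just examples; the contents vanish on reboot):

                  sudo mount -t tmpfs -o size=32G tmpfs out/Default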

              1. 1

                I don’t recall what was being used because I was a huge Unix fanboy at the time and wouldn’t touch Windows (tl;dr I was even more of an idiot than I am now) but back like 10+ years ago, I recall some folks at $work had this in Visual Studio. I don’t know if it was some add-in or something built in-house but it was pretty neat. It would normally take a couple of hours to compile their projects on the devs’ machines, but they could get it in 5-20 minutes, depending on how fresh their branch was.

                I haven’t seen it because it was way before my time but I had a colleague who described basically this exact mechanism, minus the cloud part, because this was back in 2002 or so. ClearCase was also involved though so I’m not sure it was as neat as he described it :-D.

                Cloud-based instances are, I suspect, what would really make this useful. Both of those two things were local, which wasn’t too hard for $megacorps who had their own data centres and stuff, but are completely out of my one man show reach. I don’t have the money or the space to permanently host a 28-core machine with 64G of RAM, but I suspect I could afford spinning some up on demand.

                I wish this didn’t involve running an IDE in a damn browser but I guess that ship has sailed long ago…

                1. 2

                  Back when we had all those powerful workstations co-located in an office, we had them running icecc, which is really damn awesome and got us above 100 shared cores. For a while, I even ssh’d into my workstation remotely and it worked quite well. But my machine failed me and getting it home was easier than making sure it’s never going to require maintenance again. Especially given that physical office access is very limited.

                  (As an aside, I agree running an IDE in a browser feels wrong and weird but vscode is pretty OK in terms of usability, considering it’s running on chromium)

              2. 2

                Docker and Vagrant can be heavy to run and often don’t make reproducible builds. Something like Nix or Guix can help with this part, and if you throw in a Cachix subscription, you can safely build and push once from a developer’s machine to CI, production, and other developers with less overhead.
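
                As a rough sketch of that workflow (the cache name and package set here are made up), a minimal shell.nix plus Cachix looks something like this:

                    # shell.nix - declarative dev environment shared by everyone
                    { pkgs ? import <nixpkgs> {} }:
                    pkgs.mkShell {
                      buildInputs = [ pkgs.ruby pkgs.nodejs ];
                    }

                and then, once per machine or CI runner:

                    cachix use my-team-cache              # trust and use the shared binary cache
                    nix-build | cachix push my-team-cache # push freshly built store paths (assumes a default.nix)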

                1. 1

                  Usually I find it very frustrating to do any sort of development where there is human-noticeable (and variable) latency on responses to keystrokes (e.g. working from home via VNC or something).

                  I suspect I’d find this extremely frustrating.

                  I have been working with a thin schroot-based container-like thing (i.e. tools, cross-compilers, build system, etc. in a tarball that gets unpacked into a schroot; GUI tools and editor on the native host).

                  That has been working just fine for me. Schroot is smart about updating the tools when they change.
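
                  For anyone curious, the schroot side of that is just a stanza in /etc/schroot/chroot.d/ pointing at the tarball (names and paths here are made up):

                      [buildenv]
                      type=file
                      file=/srv/chroots/buildenv.tar.gz
                      description=Cross toolchain and build system
                      users=me

                  and builds then run with something like schroot -c buildenv -- make.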

                  1. 1

                    I’m curious what issues you saw/see with a Vagrant setup that you think some kind of ‘develop in a browser’ environment would solve?

                  1. 5

                    I noticed a lot of the decisions look like the ones hello made. This is another desktop-oriented FreeBSD AppImage project. What differentiates them?

                    1. 1

                      This one aims for source-level compatibility

                    1. 6

                      Anyone know why this took so long?

                      1. 6

                        Some malicious features implemented by Microsoft?

                        One of the claims was that Windows 3.1 had been modified so that it would not run on DR DOS 6.0, although there was no technical reason for it not to work.

                        1. 1

                          Don’t know. Looking at the patch, so many things are stubbed out or null, but the hooks just weren’t there.

                        1. 1

                          Doesn’t work for me on Firefox or Chrome. Must be my adblocker

                            1. 2

                              Same thing :(

                              1. 1

                                Try with http:// as a prefix

                          1. 2

                            “The next optimization is both significant and controversial: disabling speculative execution mitigations in the Linux kernel. Now, before you run and get your torches and pitchforks, first take a deep breath and slowly count to ten. Performance is the name of the game in this experiment, and as it turns out these mitigations have a big performance impact when you are trying to make millions of syscalls per second.”

                            Here is one highly optimized word: No.

                            1. 2

                              If it’s an EC2 instance running a single app server, the risk is minimal as he explains.

                              1. 2

                                Honest question: if you were running this on a dedicated server, wouldn’t turning off those speculative execution mitigations be a good thing? In the author’s case, since he’s on AWS, it may not be super OK, but on my own actual hardware? I thought it would be fine.
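
                                (For reference, the blanket switch on your own hardware is the mitigations=off kernel parameter; on a Debian-ish box that’s roughly the following, at your own risk:)

                                    # /etc/default/grub
                                    GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
                                    # then: sudo update-grub && sudo reboot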

                              1. 1

                                Not surprised, and a little sad, that the politics and non-explicitness seem to be just as bad as in other big projects. Also maybe a relief that they’re not doing it better than the rest of us…

                                Great post though!

                                1. 5

                                  See also The Tyranny of Structurelessness, originally written about 1960s anarcho-feminist collectives but very applicable to open source projects. Skip down to ‘Formal and Informal Structures’ if you want to go straight to the relevant parts. TLDR: groups of people seeking to do things will always develop hierarchies and rules, so you’d better make them explicit if you want to keep them from being harmful.

                                  GitHub’s explicit permissions help avoid some of the pitfalls of structurelessness - you either have the ability to merge a commit or you don’t - but not all of them. Schneems thought he could ignore an objection to a merge from a particular person, but it turns out there were unwritten rules about whose opinions counted and what ownership of a particular piece meant.

                                  1. 3

                                    It involves people, so politics is not far away :)

                                    1. 1

                                      What I meant was mostly under-the-radar unofficial structures and not, for example, “to be eligible to join the core group, someone inside must champion your entry and then we will vote. 2/3 yes -> in. Also the core members currently are x, y, z and these are their responsibilities and sole privileges: a, b, c”.

                                  1. 52

                                    The question is, what is the alternative? I see two main funding models:

                                    Paywalls. You pay with your money.

                                    Ads. You pay with your attention.

                                    It’s also possible to fund projects through donations, or as hobbies, but producing most of what there is to read requires more money.

                                    Has capitalism really progressed so far that we can no longer even conceive of collective funding models? No wonder people put up with privatised prisons, schools and healthcare systems.

                                    Yes I am suggesting software/news/services could be funded from taxes. Content that is a necessary part of our social infrastructure should be. Content that serves only a luxury/entertainment purpose could be covered by art grants to supplement the models the author listed.

                                    Increasingly we require certain software and internet services to function in society, we should view this as basic infrastructure.

                                    1. 9

                                      Let us distinguish between funding infrastructure maintenance and funding software development or other artistic production. Then, indeed, infrastructure could be maintained through taxes in a non-controversial application of socialist logic. However, the design of that infrastructure will be by committees and incumbent power structures. Similarly, art grants could be extended to software authors, with all of the controversy over ownership and licensing that would result.

                                      But for infrastructure, there’s at least one additional option, which is perhaps more communist than socialist: the cooperative. The Bittorrent network is a popular example; folks each contribute a small amount of bandwidth and disk space, and create a vast content-distribution network which becomes faster and more available as content keys become hotter.

                                      1. 4

                                        However, the design of that infrastructure will be by committees and incumbent power structures.

                                        Socialism is all about upsetting the incumbent power structure and putting the people in charge. In recent conceptions this has included nationalising utilities and putting them under the control of a board of stakeholders including service users, workers, and government (Labour party, 2019). There’s also the municipal socialism model where this is devolved to a local level (and quite a few essential services are delivered by municipally owned organisations, some of which are even meaningfully democratic).

                                        Sure, there will still be committees, but there’s no reason they have to be more onerous than they are in capitalist organisations. There’s nothing stopping a small group from doing its own thing and then trying to persuade the world to adopt it; indeed, if you need to devote less of your time to wage labour you have more capacity to do such things, and if the stakeholders don’t need surveillance capitalism then there should be less of an incentive mismatch.

                                        Small aside: Lots of socialist parties support the co-operative movement. The Labour party in the UK has been in electoral coalition with the co-operative party for decades.

                                        1. 2

                                          For an implementation that’s a lot closer in spirit to what you describe, see freenet.

                                        2. 8

                                          I’ve noticed a similar phenomenon when discussing English football (soccer) in the aftermath of the attempt to form a breakaway league. (If you aren’t following it, the short version is that some historically profitable clubs tried to start a new league from which they cannot be relegated to guarantee their income, where the “they” in “their” is the owners who treat it as a business rather than the cultural entity it is.)

                                          Any ideas that in any way restrict the freedom of the owners of these clubs - culture and wider society be damned - are out of the question.

                                          We are now so deeply within this economic orthodoxy that we can no longer conceive of ideas that don’t neatly fit within it.

                                          1. 7

                                            Has capitalism really progressed so far that we can no longer even conceive of collective funding models? No wonder people put up with privatised prisons, schools and healthcare systems.

                                            Yes I am suggesting software/news/services could be funded from taxes. Content that is a necessary part of our social infrastructure should be. Content that serves only a luxury/entertainment purpose could be covered by art grants to supplement the models the author listed.

                                            We could fund things that way, and maybe we should fund things that way. But we aren’t funding things that way, which means that for right now there are only a handful of practical funding models that work, and none of them are good.

                                            1. 5

                                              Paywalls. You pay with your money.

                                              These are annoying indeed but there are plenty of websites where you pay for content but can freely share a number of articles each month or so with non-subscribers, like LWN or The Correspondent. There are plenty of people paying them. And no ads!

                                              Ads. You pay with your attention.

                                              Like others say, this completely bypasses the deeply invasive ways ads on the internet track you. See also this other post showing how Facebook doesn’t even want to expose this to users because they’re too ashamed of it.

                                              It also ignores the “user experience” of ads, which is often terrible - making your machine slow, hijacking your attention with big boxes that you have to click away etc. I don’t mind a well-designed ad here and there like you used to have in magazines, but the current ad experience is just hellish.

                                              1. 5

                                                It also ignores the “user experience” of ads, which is often terrible - making your machine slow

                                                Indeed we collectively pay for ads through bandwidth and power consumption. Why is this never factored in?

                                                1. 4

                                                  The Cost of Mobile Ads on 50 News Websites estimated ads are >50% of mobile data usage.

                                                2. 4

                                                  Ads. You pay with your attention.

                                                  Like others say, this completely bypasses the deeply invasive ways ads on the internet track you.

                                                  It’s not just the tracking. Ads are intentionally manipulative. A lot of the techniques in modern advertising date back to the propaganda techniques from the early 20th century and have been progressively refined. There are benign ads, which try to inform customers and rely on the fact that the product serves a real need and is better than the competition for a specific use, but they’re in the minority. The vast majority are using psychological tricks to try to manipulate people into spending money.

                                                  If your motivation for working on ads is rooted in the idea that there’s a lot of wealth disparity and so a lot of people who couldn’t afford paywalled content, maybe you shouldn’t work in an industry that’s predicated on finding the most vulnerable people in society and taking money from them?

                                                3. 3

                                                  Can you give more detail on how you would have government funding of media without government control of the media? Maybe a dedicated tax, the way the BBC is authorized to collect an annual fee from anyone in the UK who owns a television?

                                                  Maybe it’s better to have a media landscape beholden to a government which we (in the US) mostly elect than one beholden to a few giant ad companies, but not that much better.

                                                  1. 2

                                                    Here are some options:

                                                    1. Make them financially independent by giving them an endowment or trust fund rather than recurring grants (this is how The Guardian (partially) funds itself, and how many universities and charities in the USA are funded)
                                                    2. Encourage individuals to do the funding (this is how lots of public broadcasting in the USA is funded now. Possibly increase the minimum wage or issue vouchers to get more funding from poorer people)
                                                    3. Get a more trustworthy government (proportional representation, better parties, funding reform, fixing gerrymandering, etc., etc.)

                                                    I’d also argue that the existing corporate media in the USA is beholden to government, or at the very least has a deeply untrustworthy relationship with it. The corporate media are well known for uncritically repeating lies fed to them by intelligence officers (Glenn Greenwald and others have written about this often) and political journalists are dependent on “access” to government ministers and officials for their stories, which requires them to be chummy with the people they are supposedly holding to account.

                                                  2. 2

                                                    I once asked an economist doing monetary policy studies for African nations how she thought the world could work without currency (a la Star Trek, or similar) and she legit could not conceive of such a thing, said it was impossible.

                                                    1. 1

                                                      Well, she would’ve probably thought of something like the Economic Calculation Problem and decided it wasn’t worth her time to solve…

                                                      1. 1

                                                        You have the same failure of imagination. A post-scarcity world where you have a matter replicator at home, and you can just walk up to it and say what you want and it gets fabricated for you on the fly, would conceivably not need a market - hence its absence in Star Trek TNG, for example.

                                                        It’s ridiculously far-fetched, but it’s just an exercise in imagination. We’re as a society so fixated on current economics and politicization, we can’t even conceive of different systems. Is what we have now the end state? If so, we’ve stopped dreaming and evolving.

                                                        1. 3

                                                          I was going to make some joke about fully-automated luxury communism or the like but thought we were talking from within the bounds of possibility.

                                                  1. 4

                                                    Makes me think of the same for some Super Nintendo games, like the ones with the SuperFX chips or that one ARM processor used for only one game https://snescentral.com/chips.php?chiptype=ST018

                                                    1. 4

                                                      This can totally disrupt the market for super expensive accelerators for Amigas, pretty cool.

                                                      1. 1

                                                        And the best part is that it is simple hardware (so it should be cheap to make) and it is under MIT license.

                                                        Amiga stores will be selling a lot of these, I predict. Those super expensive accelerators will have to lower their prices.

                                                        1. 3

                                                          Some of them won’t. The incredibly expensive ones are incredibly expensive because they use the 68060 which is rare as hen’s teeth. Those have the cachet of being “real hardware” accelerators which some retronauts care about, and they won’t become cheaper just because the fake hardware accelerators do.

                                                          1. 1

                                                            the incredibly expensive ones are incredibly expensive because they use the 68060 which is rare as hen’s teeth.

                                                            They’re expensive even when you have to contribute your own 68060, because they ship without one. Yes, it is a thing.

                                                            fake hardware accelerators

                                                            PiStorm doesn’t make any claims to be anything else than it is.

                                                            they won’t become cheaper

                                                            They will be a much harder sell. Lower prices will be the natural consequence, as they are not priced to cost (the boards themselves do not cost that much to make, and AIUI come w/o 060), but to demand (based on ebay prices for old accelerators).

                                                            some retronauts care about

                                                            I believe this new product will hurt Vampire the most, and this is honestly for the best.

                                                            I dislike the ARM doing JIT approach (it is extremely ugly and unpleasant to think about), but I am definitely happy with the idea of a modern 68k implementation in FPGA. Unfortunately, the only such product right now is the Vampire, which is not OSHW.

                                                            OSHW implementations such as tg68k do exist, but so far nobody has made an FPGA-based accelerator board for the Amiga. Until that does happen, at the very least this ugly JIT approach is there.

                                                          2. 2

                                                            Even if the expensive ones stay crazy expensive, I’m hopeful this spurs more of an “ARM in a socket” replacement for more and more parts, allowing general-purpose CPUs to replace all the dated components without costing a fortune the way FPGAs have proven to.

                                                            1. 1

                                                              If anything, the existing non-OSHW FPGA-based solutions will have a much harder time selling; it puts a stop to that trainwreck.

                                                              I am hopeful a proper FPGA-based OSHW accelerator board will show up at some point. Even if it were to use an existing core (tg68k, which I understand is somewhere between 030 and 040 performance), it wouldn’t matter. As long as they pick a decent FPGA (an ECP5 85K would be nice, with an open synth/routing stack), better HDL would no doubt come later.

                                                              1. 1

                                                                and not cost a fortune like FPGAs have proven themselves

                                                                The ECP5 85K I suggest goes for about $80/u.

                                                                The existing FPGA boards for the Amiga (I am thinking Vampire) are expensive, but that has nothing to do with the cost of the PCBs or components.

                                                          1. 1

                                                            The font scaling issues aren’t really GNOME’s fault, that’s pango and fontconfig, right?

                                                            1. 3

                                                              Pango is a Gnome project. A long time ago, two different projects called GScript and GnomeText got merged and the result was Pango. It has its own release roadmap and stuff but it’s pretty much part of the Gnome org – it’s hosted in Gnome’s Gitlab infrastructure, it uses Gnome’s bugtracker and so on.

                                                            1. 6

                                                              This has absolutely been my experience doing web stuff… but also doing Python. Do you want to use virtualenv, pipenv, poetry, or…?
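
                                                              The same chore in three of those tools, just to illustrate the spread (package name arbitrary):

                                                                  # stdlib venv + pip
                                                                  python -m venv .venv && . .venv/bin/activate && pip install requests
                                                                  # pipenv
                                                                  pipenv install requests
                                                                  # poetry (inside a project with a pyproject.toml)
                                                                  poetry add requests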

                                                              1. 8

                                                                Yeah it’s like the article says, instead of trying to tackle complexities we just hide them in more and more layers of tooling every year. It feels like there’s little innovation in problem solving, it’s all problem management.

                                                                1. 4

                                                                  instead of trying to tackle complexities we just hide them in more and more layers of tooling every year.

                                                                  Hah, that complaint is the source of a big controversy from back in 2008, when Jonathan Blow said what you just said, criticizing Linux for doing exactly that.

                                                                  Specifically, his complaint was that X handled mouse input poorly (if you moved the mouse all the way outside of the window within a single frame, the mouse delta would be capped to the window width/height instead of representing the true distance), and people responded “why are you using X anyway, just use SDL”, and his response was “because SDL has the exact same problem, as SDL is just a wrapper around the X functions anyway”.

                                                                  1. 1

                                                                    Last month, I had to install a package manager to install a package manager. That’s when I closed my laptop and slowly backed away from it.

                                                                    Perhaps the answer is complexity layering limitations?

                                                                    1. 1

                                                                      Complexity limitations? How does that work?

                                                                1. 2

                                                                  Hey everyone, I’ve been working on this framework for close to 3 years now. It uses static analysis and code generation to simplify and improve large parts of the developer experience, like automatically generating API docs, instrumenting your app with tracing, and more.

                                                                  Would love your feedback :)

                                                                  1. 2

                                                                    Looks really polished and built with attention to detail, congrats. Makes me wish I knew go 😪

                                                                    1. 1

                                                                      Thank you! It’s an easy enough language to learn :)

                                                                  1. 11

                                                                    @thequux and I were some of the presenters on this one, if you have any questions.

                                                                    1. 1

                                                                      Do you have any good English sources of info/software for BTron? Otherwise, cool showcase for these systems even though I’d have preferred a more structured look at what we actually lost instead of a demo of 3 systems with some bullet points.

                                                                      1. 1

                                                                        Looking forward to this one, will watch later!

                                                                      1. 5

                                                                        This does seem to confirm something I’ve been thinking about lately while making my own PL, which is that array/slice handling should probably be a feature of all languages, since it is often what we are doing when we deal with I/O, and it can be error-prone. And not just the concept, but an appropriate selection of utility functions for all of the use cases one could have with arrays/slices in terms of copying/moving/whatnot. I wouldn’t say that Rust did a stellar job of the last one, since their pace of adding library functions is glacial at best. (Professional Rust programmer, btw.)

                                                                        1. 2

                                                                          Some langs are going in this direction, like Alan https://docs.alan-lang.org/about_alan.html

                                                                        1. 1

                                                                          Is this maybe because it takes time to agree on them across any team in the first place?

                                                                          1. 4

                                                                              I tried out Nomad, but getting a shell on a job is an Enterprise feature? Seems overly aggressive in terms of pricing.

                                                                            1. 2

                                                                              No?

                                                                              nomad alloc exec <allocation-id> bash

                                                                              gives you a shell in one allocation of a job.
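
                                                                                (If it helps, the allocation ID comes from the job’s status output, e.g.:)

                                                                                    nomad job status redis          # lists the job's allocations
                                                                                    nomad alloc exec <alloc-id> /bin/bash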

                                                                              1. 2

                                                                                Just tried again, works! My bad. Think I was a victim of https://github.com/hashicorp/nomad/issues/4567

                                                                                Still only works sometimes…

                                                                                 2021-03-01T14:53:37.401Z [ERROR] http: request failed: method=GET path=/v1/client/allocation/bc06ab17-9271-3bfd-cbbd-97c992252d7e/exec?task=redis&tty=true&ws_handshake=true&command=%5B%22%2Fbin%2Fbash%22%5D error="websocket: close 1006 (abnormal closure): unexpected EOF" code=500
                                                                                    2021-03-01T14:54:05.571Z [ERROR] http: request failed: method=GET path=/v1/client/allocation/bc06ab17-9271-3bfd-cbbd-97c992252d7e/exec?task=redis&tty=true&ws_handshake=true&command=%5B%22%2Fbin%2Fbash%22%5D error="websocket: close 1006 (abnormal closure): unexpected EOF" code=500
                                                                                
                                                                                1. 1

                                                                                  oh, well I stand corrected. good to know :)

                                                                                2. 1

                                                                                  I tried “exec” from the UI - is that not the same?

                                                                                3. 1

                                                                                  Ah, yes, product tiering!

                                                                                1. 6

                                                                                  Great to see new ideas in the OS space, and using Rust! Here’s a paper on it http://kevinaboos.web.rice.edu/docs/theseus_boos_osdi2020.pdf

                                                                                  1. 9

                                                                                    Now this is cool. Always wondered how hey.com was able to have such a small js footprint and just be so fast.

                                                                                    I swear Rails is just ageing like wine at this point, this is seriously cool.

                                                                                    1. 3

                                                                                      Interestingly, I find Hey to be annoyingly slow sometimes and it makes me wish for a native app. Thankfully, I recently found a tracker blocker plugin for Apple Mail and that’s really all I want.

                                                                                      1. 2

                                                                                        Where are you located? There was talk about people in Singapore having slow UI response.

                                                                                        1. 1

                                                                                          I’m in Cupertino, USA. I think it’s a combination of Electron and slow network calls.

                                                                                    1. 1

                                                                                      Isn’t Postman web-based too? They have a desktop app, but you can use it from the website as well.

                                                                                      1. 1

                                                                                        TIL! I only ever used the desktop app. I use Burp and Python for most of my API testing.

                                                                                      1. 3

                                                                                        HTTP/2, we hardly knew you…

                                                                                        1. 1

                                                                                           As I understand it, HTTP/3 largely carries the same HTTP/2 semantics (multiplexed streams, header compression) over QUIC instead of TCP, with a new header encoding to match. There are a lot of things in HTTP/2, like multiple streams, that need application changes to take advantage of, but once you’ve done those you can plug in HTTP/3 at the bottom of your stack and not modify anything at the higher levels.

                                                                                          1. 4

                                                                                             Well, I see it as something that complicates the whole HTTP stack even more and is only pushed by big companies that have enough money to throw engineers at any migration.

                                                                                            1. 2

                                                                                               I’m not sure how the latter follows. HTTP/3 lets you establish a single connection and get multiple resources without packet loss in one stream delaying any of the others, and it reduces latency for establishing encrypted sessions. That’s going to be great for a load of HTTP-based services such as Nextcloud and JMAP (the Nextcloud apps all seem to happily talk HTTP/2). Once your client and server are able to support encryption and multiple streams, moving from HTTP/2 to HTTP/3 should just be a matter of dropping in a different library (and those big companies that you complain about are the ones producing the open source libraries that everyone else will be using) and getting lower latency.

                                                                                              1. 3

                                                                                                All well and good, but it means we have well and truly locked out beginners and fast prototypes from building clients and servers at that layer of the stack without depending on those libraries.

                                                                                                HTTP, for all of its many flaws, can be quickly implemented by anything that can munge text–and has been.
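
                                                                                                  For example, a whole request by hand with netcat (flag spellings vary between netcat variants):

                                                                                                      printf 'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n' | nc example.com 80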

                                                                                                1. 5

                                                                                                   I am not sure I agree. HTTP depends on TCP, which is not something you want to build from scratch, so you already rely on some external provider of a network stack. Typically people actually want HTTPS, and I really hope you’re not implementing your own TLS stack as a beginner and are pulling in a library for that. If you’re willing to depend on a network stack and a TLS library, depending on an HTTP/3 library is not very much more complexity (actually, it’s less in total: QUIC + UDP is a lot simpler than TLS + TCP) and you can put anything you want that can ‘munge text’ on top.

                                                                                                  1. -1

                                                                                                     Why would you implement your own HTTP server? There are plenty of good implementations that already exist.

                                                                                                    1. 4

                                                                                                      Well, for people who are writing a programming language, an HTTP server is usually part of the process of building a web framework.

                                                                                                      There’s also applications like having embedded sensors sending their data via HTTP, or small computers doing the same.

                                                                                                      1. 3

                                                                                                        The user I’m replying to is making an argument about beginners being locked out and fast prototyping being difficult.

                                                                                                        Someone implementing a programming language has plenty on their plate already and are probably not beginners.

                                                                                                        In terms of performance, if the hardware can run it then there’s probably already an implementation. If it can’t then there’s no point in writing one.

                                                                                                        1. 1

                                                                                                          I’d expect embedded sensors to either keep using HTTP/1.0 (most don’t even do 1.1) or something like MQTT. Just because HTTP/3 exists doesn’t mean that you can’t use older versions. If you’re building something for tightly resource-constrained environments then a protocol designed for low latency, high-bandwidth, independent streams might not be the best fit for your use case.