Threads for GrayGnome

  1. 14

    I’m very curious how these companies address the fact that there are countries where smartphones are not universally owned (because of cost, or lack of physical security for personal belongings).

    1. 8

      At least Microsoft has multiple paths for 2FA - an app, or a text sent to a number. It’s hard to imagine them going all in on “just” FIDO.

      Now, as to whether companies should support these people - from a purely money-making perspective, if your customers cannot afford a smartphone, maybe they’re not worth that much as customers?

      A bigger issue is if public services are tied to something like this, but in that case, subsidizing smartphone use is an option.

      1. 24

        if your customers cannot afford a smartphone, maybe they’re not worth that much as customers?

        I had a longer post typed out, and I don’t think you meant this at all, but at a certain point we need to stop thinking of people as simply customers and recognize that we’re taking over functions typically subsidized or heavily regulated by the government, like phones or mail. It was not that long ago that you could share a phone line (telcos were heavily regulated) with family members or friends when looking for a job or waiting to be contacted about something. Or pay bills using the heavily subsidized USPS. Or grab a paper and go through the classifieds to find a job.

        Now you need LinkedIn/Indeed, an email address, Internet, your own smartphone, etc. to do anything from paying bills to getting a job. So sure if you’re making a throwaway clickbait game you probably don’t need to care about this.

        But even on this very website: do we want someone who is not doing so well financially to be deprived of keeping up with news in their industry, or someone too young to have a cellphone to be kept from participating? I don’t think it is a god-given right, but the more people are denied access to things you or I have access to, the greater the divide becomes. Think of someone who has a laptop but no Internet, and can only borrow a neighbor’s wifi. Similarly, a family of four might not have a cell phone for every family member.

        I could go on, but like discrimination or accommodating people with various disabilities, it is something that’s really easy to forget.

        1. 15

          I should have been clearer. The statement was rhetorical, not an endorsement.

          Viewing users as customers excludes a huge number of people, not just those too poor to have a computer/smartphone, but also people with disabilities who are simply too few to economically cater to. That’s why governments need to step in with laws and regulations to ensure equal access.

          1. 11

            I think governments often think about this kind of accessibility requirement exactly the wrong way around. Ten or so years ago, I looked at the costs being passed on to businesses and community groups to make buildings wheelchair accessible. It was significantly more than the cost of buying everyone with limited mobility a motorised wheelchair capable of climbing stairs, even including the fact that those were barely out of prototype and had a cost that reflected the need to recoup the R&D investment. If the money spent on wheelchair ramps had been invested in a mix of R&D and purchasing of external prosthetics, we would have spent the same amount and the folks currently in wheelchairs would be fighting crime in their robot exoskeletons. Well, maybe not the last bit.

            Similarly, the wholesale cost of a device capable of acting as a U2F device is <$5. The wholesale cost of a smartphone capable of running banking apps is around $20-30 in bulk. The cost for a government to provide one to everyone in a country is likely to be less than the cost of making sure that government services are accessible by people without such a device, let alone the cost to all businesses wanting to operate in the country.

            TL;DR: Raising people above the poverty line is often cheaper than ensuring that things are usable by people below it.

            1. 12

              Wheelchair ramps help more people than just those in wheelchairs: people pushing prams/strollers, movers, emergency responders, people using Zimmer frames… As the population ages (in developed countries), they will only become more relevant.

              That said, I fully support the development of powered exoskeletons for all who need or want them.

              1. 8

                The biggest and most expensive problem around wheelchairs is not ramps, it’s turning space and door widths. A wheelchair is wider (especially the battery-driven ones you are referring to) and needs more space to turn around than a standing human. Older buildings often have pathways and doors that are too narrow.

                Second, all the wheelchairs and exoskeletons here would need to be custom, making them a poor fit for short-term disabilities or smaller issues, like walking problems that only need crutches. All that while changing the building (or building it right in the first place) is as close to a one-size-fits-all solution as it gets.

                1. 5

                  I would love it if the government would buy me a robo-stroller, but until then, I would settle for consistent curb cuts on the sidewalks near my house. At this point, I know where the curb cuts are and are not, but it’s a pain to have to know which streets I can or can’t go down easily.

                2. 7

                  That’s a good point, though I think there are other, non-monetary concerns that may need to be taken into account as well. Taking smartphones for example, even if given out free by the government, some people might not be real keen on being effectively forced to own a device that reports their every move to who-knows-how-many advertisers, data brokers, etc. Sure, ideally we’d solve that problem with some appropriate regulations too, but that’s of course its own whole giant can of worms…

                  1. 2

                    The US government will already buy a low cost cellphone for you. One showed up at my house due to some mistake in shipping address. I tried to send it back, but couldn’t figure out how. It was an ancient Android phone that couldn’t do modern TLS, so it was basically only usable for calls and texting.

                    1. 2

                      Jokes aside - it is basically a requirement in a certain country I am from. If you get infected with Covid you get processed by the system, and outdoor cameras monitor you so you don’t go outside; but to be completely sure you’re staying at home during recovery, it is mandatory to install a government-issued application on your cellphone/tablet that tracks your movement. Officials also check up on you with video calls in said app, several times per day at random hours, to verify your location.

                      If you fail to respond in time, or geolocation shows you left your apartment, you’ll automatically get a hefty fine.

                      Now, you might say it is possible to just tell them “I don’t own a smartphone” - then you’ll get a cheap but working government-issued Android tablet, or at least you’re supposed to. But as with lots of other things, “the severity of the laws is compensated by their optionality”, so quite often the devices don’t get delivered at all.

                      By law you cannot decline the device - you’ll get fined, or they promise to take you to a hospital as a mandatory measure.

                  2. 7

                    Thank you very much for this comment. I live in a country where “it is expected” to have a smartphone. The government is making everything into apps which are only available on the Apple App Store or Google Play. Since I am on social welfare I cannot afford a new smartphone every 3-5 years, and old ones are not supported either by the app stores or by the apps themselves.

                    I have a feeling of being pushed out by society due to my lack of money. Thus I can relate to people in similar positions (larger families with low incomes etc.).

                    I would really like more people to consider that not everybody has access to new smartphones or even a computer at home.

                    I believe the Internet should be for everyone not just people who are doing well.

                3. 6

                  If you don’t own a smartphone, why would you own a computer? Computers are optional supplements to phones. Phones are the essential technology. Yes, there are weirdos like us who may choose to own a computer but not a smartphone for ideological reasons, but that’s a deliberate choice, not an economic one.

                  1. 7

                    In the U.S., there are public libraries where one can use a computer. In China, cheap internet cafés are common. If computer-providing places like these are available to non-smartphone-users, that could justify services building support for computer users.

                    1. 1

                      In my experience growing up in a low-income part of the US, most people there now have only smartphones. Folks there mostly only use laptops in office or school settings, and the lack of one remains a difficulty for those going to college or getting office jobs. It was the same when I was growing up there, except there were no smartphones, so folks had flip phones. Parents often try to save up to buy their children nice smartphones.

                      I can’t say this is true across the US, but for where I grew up at least it is.

                      1. 1

                        That’s a good point, although it’s my understanding that in China you need some kind of government ID to log into the computers. Seems like the government ID could be made to work as a FIDO key.

                        Part of the reason a lot of people don’t have a computer nowadays is that if you really, really need to use one to do something, you can go to the library to do it. I wonder though if the library will need to start offering smartphone loans next.

                      2. 5

                        How are phones the “essential technology”? A flip phone is 100% acceptable these days if you just have a computer. There is nothing about a smartphone that’s required to exist, let alone survive.

                        A computer, on the other hand (which a smartphone is a poor approximation of), is borderline required to access crucial services outside of phone calls and direct visits. The “essential technology” is not a smartphone.

                        1. 2

                          There’s very little (outside work) that I can only do on a computer and not on a phone. IRC and image editing, basically. Also editing blog posts, because I do that in the shell.

                          I am comfortable travelling to foreign lands with only a phone, and relying on it for maps, calls, hotel reservations, reading books, listening to music…

                          1. 1

                            Flip phones were all phased out years ago. I have friends who deliberately use flip phones; it is very difficult to do unless you are ideologically committed to it.

                          2. 3

                            I’m curious about your region/job/living situation, and what about it makes phones “the essential technology”. I barely need a phone to begin with, let alone a smartphone. To me it’s really only good as car navigation and an alarm clock.

                            1. 1

                              People need other people to live. Most other people communicate via phone.

                              1. 1

                                It’s hardly “via phone” if it’s Signal/Telegram/FB/WhatsApp or some other flavor of the week instant messenger. You can communicate with them on your PC just as well.

                                1. 4

                                  I mean I guess so? I’m describing how low income people in the US actually live, not judging whether it makes sense. Maybe they should all buy used Chromebooks and leech Wi-Fi from coffee shops. But they don’t. They have cheap smartphones and prepaid cards.

                                  1. 2

                                    You cannot connect to WhatsApp via the web interface without a smartphone running the WhatsApp app, and Signal (which does not have this limitation) requires a smartphone as the primary key, with the desktop app only acting as a subkey. I think Telegram also requires a smartphone app for initial provisioning.

                                    I think an Android Emulator might be enough, if you can manually relay the SMS code from a flip phone, maybe.

                              2. 2

                                Your reasoning is logical if you’re presented with a budget and asked what to buy, but purchasing does not happen in a vacuum. You may inherit a laptop, borrow a laptop, no longer be able to afford a month-to-month cell phone bill, etc. Laptops also have a much longer life cycle than phones.

                                1. 4

                                  I’m not arguing that this is good, bad, or whatever. It’s just a fact that in the USA today if you are a low income person, you have a smartphone and not a personal computer.

                            1. 12

                              The lesson here sounds more like “bad protocols will make your client/server system slow and clumsy”, not “move all of your system’s code to the server.” The OP even acknowledges that GraphQL would have helped a lot. (Or alternatively something like CouchDB’s map/reduce query API.)

                              I don’t really get the desire to avoid doing work on the client side. Your system includes a lot of generally-quite-fast CPUs provided for free by users, and the number of these scales 1::1 with the number of users. Why not offload work onto them from your limited and costly servers? Obviously you’re already using them for rendering, but you can move a lot of app logic there too.

                              I’m guessing that the importance of network protocol/API design has been underappreciated by web devs. REST is great architecturally, but if you use it as a cookie-cutter approach it’s non-optimal for app use. GraphQL seems like a big improvement.
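
                              To make that concrete, here’s a rough sketch of the difference in TypeScript (hypothetical /api and /graphql endpoints and made-up field names, not anything from the article): a cookie-cutter REST client pays one round trip per resource, while a single GraphQL query asks for exactly the fields the view needs.

                                // REST: one round trip per resource, each returning full objects.
                                async function loadDashboardRest(userId: string) {
                                  const user = await (await fetch(`/api/users/${userId}`)).json();
                                  const posts = await (await fetch(`/api/users/${userId}/posts`)).json();
                                  // Another round trip per post just to get comment counts.
                                  const counts = await Promise.all(
                                    posts.map(async (p: { id: string }) =>
                                      (await fetch(`/api/posts/${p.id}/comments?count=true`)).json()
                                    )
                                  );
                                  return { user, posts, counts };
                                }

                                // GraphQL: one round trip, and only the fields the view actually needs.
                                async function loadDashboardGraphql(userId: string) {
                                  const query = `
                                    query Dashboard($id: ID!) {
                                      user(id: $id) {
                                        name
                                        posts { title commentCount }
                                      }
                                    }`;
                                  const res = await fetch("/graphql", {
                                    method: "POST",
                                    headers: { "Content-Type": "application/json" },
                                    body: JSON.stringify({ query, variables: { id: userId } }),
                                  });
                                  return (await res.json()).data;
                                }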

                              1. 16

                                Your system includes a lot of generally-quite-fast CPUs provided for free by users

                                Yes, and if every site I’m visiting assumes that, then pretty quickly, I no longer have quite-fast CPUs to provide for free, as my laptop is slowly turning to slag due to the heat.

                                1. 8

                                  Um, no. How many pages are you rendering simultaneously?

                                  1. 3

                                    I usually have over 100 tabs open at any one time, so a lot.

                                    1. 5

                                      If your browser actually keeps all those tabs live and running, and those pages are using CPU cycles while idling in the background and the browser doesn’t throttle them, I can’t help you… ¯\_(ツ)_/¯

                                      (Me, I use Safari.)

                                      1. 3

                                        Yes, but even assuming three monitors you likely have three or four windows open. That’s four active tabs; Chrome puts the rest of them to sleep.

                                        And even if you only use apps like the one from the article, and not the well-developed ones like the comment above suggests, it’s maybe five of them at the same time. And you’re probably not clicking frantically all over them at once.

                                        1. 2

                                          All I know is that when my computer slows to a crawl the fix that usually works is to go through and close a bunch of Firefox tabs and windows.

                                          1. 4

                                            There is often one specific tab which for some reason is doing background work and ends up eating a lot of resources. When I find that one tab and close it my system goes back to normal. Like @zladuric says, browsers these days don’t let inactive tabs munch resources.

                                  2. 8

                                    I don’t really get the desire to avoid doing work on the client side.

                                    My understanding is that it’s the desire to avoid some work entirely. If you chop up the processing so that the client can do part of it, that carries its own overhead. How do you feel about this list?

                                    Building a page server-side:

                                    • Server: Receive page request
                                    • Server: Query db
                                    • Server: Render template
                                    • Server: Send page
                                    • Client: Receive page, render HTML

                                    Building a page client-side:

                                    • Server: Receive page request
                                    • Server: Send page (assuming JS is in-page. If it isn’t, add ‘client requests & server sends the JS’ to this list.)
                                    • Client: Receive page, render HTML (skeleton), interpret JS
                                    • Client: Request data
                                    • Server: Receive data request, query db
                                    • Server: Serialize data (usu. to JSON)
                                    • Server: Send data
                                    • Client: Receive data, deserialize data
                                    • Client: Build HTML
                                    • Client: Render HTML (content)
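
                                    To make the second list concrete, here’s a minimal sketch of the client-side half in TypeScript (hypothetical /api/items endpoint and element id; error handling omitted) - each step maps onto a line of the list above:

                                      // Runs inside the skeleton page the server sent.
                                      interface Item { id: number; title: string }

                                      async function renderItems(): Promise<void> {
                                        const res = await fetch("/api/items");   // Client: request data
                                        const items: Item[] = await res.json();  // Client: receive + deserialize data
                                        const html = items                       // Client: build HTML
                                          .map((it) => `<li data-id="${it.id}">${it.title}</li>`)
                                          .join("");
                                        // Client: render HTML (content) into the skeleton (real code would escape titles)
                                        document.querySelector("#items")!.innerHTML = html;
                                      }

                                      void renderItems();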

                                    Compare the paper Scalability! But at what COST?, which found that the overhead of many parallel processing systems gave them a high “Configuration that Outperforms a Single Thread”.

                                    1. 4

                                      That’s an accurate list… for the first load! One attraction of doing a lot more client-side is that after the first load, the server has the same list of actions for everything you might want to do, while the client side looks more like:

                                      • fetch some data
                                      • deserialize it
                                      • do an in-place rerender, often much smaller than a full page load
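
                                      As a hedged sketch (hypothetical /api/cart/summary endpoint, element id, and event name), that follow-up path can be as small as:

                                        // After the first load: fetch a small piece of data and patch one node,
                                        // instead of asking the server for a whole new page.
                                        async function refreshCartBadge(): Promise<void> {
                                          const res = await fetch("/api/cart/summary");      // fetch some data
                                          const { itemCount } = await res.json();            // deserialize it
                                          const badge = document.querySelector("#cart-count");
                                          if (badge) badge.textContent = String(itemCount);  // tiny in-place rerender
                                        }

                                        // e.g. run it whenever the app signals that the cart changed
                                        document.addEventListener("cart:changed", () => void refreshCartBadge());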

                                      (Edit: on rereading your post, your summary actually covers all requests, but it misses how the request, the response, and the client-side rerender can all be much smaller this way. But credit where due!)

                                      That’s not even getting at how much easier it is to do slick transitions or to maintain application state correctly across page transitions. Client side JS state management takes a lot of crap and people claim solutions like these are simpler but… in practice many of the sites which use them have very annoying client side state weirdness because it’s actually hard to keep things in sync unless you do the full page reload. (Looking at you, GitHub.)

                                      1. 6

                                        When I’m browsing on mobile devices I rarely spend enough time on any single site for the performance benefits of a heavy initial load to kick in.

                                        Most of my visits are one page long - so I often end up loading heavy SPAs when a lighter single page, optimized to load fast from an uncached blank state, would have served me much better.

                                        1. 4

                                          I would acknowledge that this is possible.

                                          But that’s almost exactly what the top comment said. People use the framework of the day for a blog. Not flattening it, or remixing it, or whatever.

                                          The SPAs that I use are things like Twitter, where the tab is likely always there. (And on desktop I have those CPU cores.)

                                          It’s like saying, I only ride on trains to work, and they’re always crowded, so trains are bad. Don’t use trains if your work is 10 minutes away.

                                          But as said, I acknowledge that people are building apps where they should be building sites. And we suffer as the result.

                                          What still irks me the most are sites with a ton of JavaScript. So it’s server-rendered, it just has a bunch of client-side JavaScript that’s unused, or loading images or ads or something.

                                      2. 4

                                        You’re ignoring a bunch of constant factors. The amount of rendering to create a small change on the page is vastly smaller than that to render a whole new page. The most optimal approach is to send only the necessary data over the network to create an incremental change. That’s how native client/server apps work.

                                        1. 5

                                          In theory yes but if in practice if the “most optimal approach” requires megabytes of JS code to be transmitted, parsed, executed through 4 levels of interpreters culminating in JIT compiling the code to native machine code all the while performing millions of checks to make sure that this complex system doesn’t result in a weird machine that can take over your computer then maybe sending a “whole new page” consisting of 200 kb of static HTML upon submitting a form would be more optimal.

                                          1. 4

                                            In theory yes but if in practice if the “most optimal approach” requires megabytes of JS code to be transmitted, parsed, executed through 4 levels of interpreters culminating in JIT compiling the code to native machine code all the while performing millions of checks to make sure that this complex system doesn’t result in a weird machine that can take over your computer

                                            This is hyperbole. Sending a ‘“whole new page” of 200 kb of static HTML’ has your userspace program block on the kernel as bytes are written into some socket buffer, NIC interrupts the OS to grab these bytes, the NIC generates packets containing the data, userspace control is then handed back to the app which waits until the OS notifies it that there’s data to read, and on and on. I can do this for anything on a non-embedded computer made in the last decade.

                                            Going into detail for dramatic effect doesn’t engage with the original argument nor does it elucidate the situation. Client-side rendering makes you pay a one-time cost for consuming more CPU time and potentially more network bandwidth for less incremental CPU and bandwidth. That’s all. Making the tradeoff wisely is what matters. If I’m loading a huge Reddit or HN thread for example, it might make more sense to load some JS on the page and have it adaptively load comments as I scroll or request more content. I’ve fetched large threads on these sites from their APIs before and they can get as large as 3-4 MB when rendered as a static HTML page. Grab four of these threads and you’re looking at 12-16 MB. If I can pay a bit more on page load then I can end up transiting a lot less bandwidth through adaptive content fetching.

                                            If, on the other hand, I’m viewing a small thread with a few comments, then there’s no point paying that cost. Weighing this tradeoff is key. On a mostly-text blog where you’re generating kB of content, client-side rendering is probably silly and adds more complexity, CPU, and bandwidth for little gain. If I’m viewing a Jupyter-style notebook with many plots, it probably makes more sense for me to be able to choose which pieces of content I fetch to not fetch multiple MB of content. Most cases will probably fit between these two.

                                            Exploring the tradeoffs in this space (full React-style SPA, HTMX, full SSR) can help you come to a clean solution for your usecase.
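
                                            For what it’s worth, the “adaptively load comments as I scroll” case is a small amount of code - a rough sketch with a hypothetical /api/comments endpoint, a #comments list, and a #sentinel element at its end:

                                              // Load comments a page at a time, only when the sentinel scrolls into view.
                                              let nextPage = 1;

                                              async function loadMoreComments(): Promise<void> {
                                                const res = await fetch(`/api/comments?page=${nextPage}&per_page=50`);
                                                const comments: { id: number; body: string }[] = await res.json();
                                                const list = document.querySelector("#comments")!;
                                                for (const c of comments) {
                                                  const li = document.createElement("li");
                                                  li.textContent = c.body; // textContent avoids injecting HTML
                                                  list.appendChild(li);
                                                }
                                                nextPage += 1;
                                              }

                                              // Fire whenever the sentinel at the bottom of the list becomes visible.
                                              const observer = new IntersectionObserver((entries) => {
                                                if (entries.some((e) => e.isIntersecting)) void loadMoreComments();
                                              });
                                              observer.observe(document.querySelector("#sentinel")!);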

                                            1. 1

                                              I was talking about the additional overhead required to achieve “sending only the necessary data over the network”.

                                      3. 4

                                        I don’t really get the desire to avoid doing work on the client side.

                                        My impression is that it is largely (1) to avoid the JavaScript ecosystem and/or* (2) to avoid splitting app logic in half/duplicating app logic. Ultimately, your validation needs to exist on the server too because you can’t trust clients. As a rule of thumb, SSR then makes more sense when you have lower interactivity and not much more logic than validation. CSR makes sense when you have high interactivity and substantial app logic beyond validation.

                                        But I’m a thoroughly backend guy so take everything that I say with a grain of salt.
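
                                        When both halves happen to be TypeScript, one common mitigation is to keep the rules in a shared module and run the same function in the browser (for fast feedback) and on the server (because clients can’t be trusted). A minimal sketch with made-up field names:

                                          // shared/validate.ts - imported by both the browser bundle and the server.
                                          export interface SignupForm { email: string; age: number }

                                          export function validateSignup(form: SignupForm): string[] {
                                            const errors: string[] = [];
                                            if (!/^[^@\s]+@[^@\s]+$/.test(form.email)) errors.push("invalid email");
                                            if (!Number.isInteger(form.age) || form.age < 13) errors.push("age must be 13+");
                                            return errors;
                                          }

                                          // Client: call validateSignup() before submitting, to show errors instantly.
                                          // Server: call validateSignup() again on the request body and reject if it
                                          // returns anything - never trust what the client says it already checked.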


                                        Edit: added a /or. Thought about making the change right after I posted the comment, but was lazy.

                                        1. 8

                                          (2) avoid splitting app logic in half/duplicating app logic.

                                          This is really the core issue.

                                          For a small team, a SPA increases the amount of work because you have a backend with whatever models and then the frontend has to connect to that backend and redo the models in a way that makes sense to it. GraphQL is an attempt to cut down on how much work this is, but it’s always going to be some amount of work compared to just creating a context dictionary in your controller that you pass to the HTML renderer.
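
                                          For contrast, the “context dictionary in your controller” style is roughly this much ceremony - a hedged sketch with a made-up handler and a trivial string template standing in for whatever template engine you’d actually use:

                                            // A server-side controller: build a context object, hand it to the renderer,
                                            // send back HTML. There is no client-side model layer to keep in sync.
                                            interface ProfileContext { name: string; postCount: number }

                                            function renderProfile(ctx: ProfileContext): string {
                                              // Stand-in for a real template engine.
                                              return `<h1>${ctx.name}</h1><p>${ctx.postCount} posts</p>`;
                                            }

                                            async function profileController(userId: string): Promise<string> {
                                              const user = await lookupUser(userId); // imagine your ORM / query layer here
                                              const context: ProfileContext = { name: user.name, postCount: user.posts.length };
                                              return renderProfile(context);
                                            }

                                            // Hypothetical data access, only here so the sketch is self-contained.
                                            async function lookupUser(id: string): Promise<{ name: string; posts: unknown[] }> {
                                              return { name: `user-${id}`, posts: [] };
                                            }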

                                          However, for a team that is big enough to have separate frontend and backend teams, using a SPA decreases the amount of communication necessary between the frontend and backend teams (especially if using GraphQL), so even though there’s more work overall, it can be done at a higher throughput since there’s less stalling during cross team communication.

                                          There’s a problem with MPAs that they end up duplicating logic if something can be done either on the frontend or the backend (say you’ve got some element that can either be loaded upfront or dynamically, and you need templates to cover both scenarios). If the site is mostly static (a “page”) then the duplication cost might be fairly low, but if the page is mostly dynamic (an “app”), the duplication cost can be huge. The next generation of MPAs try to solve the duplication problem by using websockets to send the rendered partials over the wire as HTML, but this has the problem that you have to talk to the server to do anything, and that round trip isn’t free.

                                          The next generation of JS frameworks are trying to reduce the amount of duplication necessary to write code that works on either the backend or the frontend, but I’m not sure they’ve cracked the nut yet.

                                          1. 4

                                            For a small team, a SPA increases the amount of work because you have a backend with whatever models and then the frontend has to connect to that backend and redo the models in a way that makes sense to it

                                            Whether this is true depends on whether the web app is a client for your service or the client for your service. The big advantage of the split architecture is that it gives you a UI-agnostic web service where your web app is a single front end for that service.

                                            If you never anticipate needing to provide any non-web clients to your service then this abstraction has a cost but little benefit. If you are a small team with short timelines that doesn’t need other clients for the service yet then it is cost now for benefit later, where the cost may end up being larger than the cost of refactoring to add abstractions later once the design is more stable.

                                            1. 1

                                              If you have an app and a website as a small team, lol, why do you hate yourself?

                                              1. 4

                                                The second client might not be an app, it may be some other service that is consuming your API.

                                          2. 4

                                            (2) avoid splitting app logic in half/duplicating app logic.

                                            The other thing is to avoid duplicating application state. I’m also thoroughly a backend guy, but I’m led to understand that the difficulty of maintaining client-side application state was what led to the huge proliferation of SPA frameworks. But maintaining server-side application state is easy, and if you’re doing a pure server-side app, you expose state to the client through hypertext (HATEOAS). What these low-JS frameworks do is let you keep that principle — that the server state is always delivered to the client as hypertext — while providing more interactivity than a traditional server-side app.

                                            (I agree that there are use-cases where a more thoroughly client-side implementation is needed, like games or graphics editors, or what have you.)
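
                                            Done by hand, the “state delivered as hypertext” idea looks roughly like this (hypothetical /fragments and /todos endpoints and element ids) - which is more or less what the low-JS frameworks automate for you with attributes:

                                              // The server renders the fragment (it owns the state); the client only
                                              // swaps hypertext into place - no client-side model of the todo list.
                                              async function refreshTodoList(): Promise<void> {
                                                const res = await fetch("/fragments/todo-list"); // returns ready-made HTML
                                                const fragment = await res.text();
                                                document.querySelector("#todo-list")!.outerHTML = fragment;
                                              }

                                              // After any mutation, just re-fetch the fragment.
                                              document.querySelector("#add-todo-form")!.addEventListener("submit", async (e) => {
                                                e.preventDefault();
                                                await fetch("/todos", { method: "POST", body: new FormData(e.target as HTMLFormElement) });
                                                await refreshTodoList();
                                              });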

                                            1. 1

                                              Well, there’s a difference between controller-level validation and model-level validation. One is about not fucking up by sending invalid data, the other is about not fucking up by receiving invalid data. Both are important.

                                            2. 4

                                              Spot on.

                                              this turns out to be tens (sometimes hundreds!) of requests because the general API is very normalized (yes we were discussing GraphQL at this point)

                                              There’s nothing about REST I’ve ever heard of that says that resources have to be represented as separate, highly normalized SQL records, just as GraphQL is not uniquely qualified to stitch together multiple database records into the same JSON objects. GraphQL is great at other things like allowing clients to cherry-pick a single query that returns a lot of data, but even that requires that the resolver be optimized so that it doesn’t have to join or query tables for data that wasn’t requested.

                                              The conclusion, which can be summed up as, “Shell art is over,” is an overgeneralized aesthetic statement that doesn’t follow from the premises. Even if the trade-offs between design choices were weighed fully (which they weren’t), a fundamentally flawed implementation of one makes it a straw man argument.

                                              1. 1

                                                The Twitter app used to lag like hell on my old Thinkpad T450. At the very least, it’d kick my fan into overdrive.

                                                1. 1

                                                  Yay for badly written apps :-p

                                                  Safari will notice when a page in the background is hogging the CPU, and either throttle or pause it after a while. It puts up a modal dialog on the tab telling you and letting you resume it. Hopefully it sends an email to the developer too (ha!)

                                              1. 15

                                                Nice write-up :) It echoes a lot of the experience and sentiments I have with Go. In particular, I can see why it exists and has gained popularity, and I can appreciate it myself (heck, even enjoy it from time to time). But fundamentally the values embodied by the language don’t align with my values on software engineering as much as other languages do - Rust, Haskell, and I’d even say Kotlin align with them more than Go does.

                                                Especially liked the comparison of worse-is-better to the MIT approach. I found that a nice summary.

                                                1. 4

                                                  I sort of agree. I wish the world was better calibrated for the tradeoffs Rust makes, and some of it is. If I were in some market where the costs of any single error were high and/or I can’t quickly push out a patch, I think Rust is the economical way to go. Maybe it will be the way to go when we’ve largely automated everything and the competition is no longer about being the first mover and more about having the faster and more correct product?

                                                  But for now, for a huge swath of the software engineering space, the most important thing is rapid productivity, and Go is best in class here (quite a lot better even than Python, despite my 15 years of experience with it).

                                                  And its quality is good enough—indeed, people are still shipping important software backed by fully dynamic languages! Go’s static typing story may only be 95% “complete”, but a whole lot of engineering is done entirely without static types! And not just the likes of Twitter and Reddit, but things like healthcare and finance. Go is a marked improvement here but it still gets dramatically more criticism than dynamic languages for lacking more advanced static analysis features.

                                                  Most importantly, as much as my younger self would hate to admit it, type systems aren’t that important. Having some basic type checking yields a ton of productivity, but beyond that the returns on quality diminish and the productivity returns go negative. Same deal with performance—if you’re 2 or 3 orders of magnitude faster than Python, you’re probably good enough, and to get much faster you’re often having to trade off a ton of productivity, which again is a bad tradeoff for the overwhelming majority of applications. More important are things like learning curve, tooling, ecosystem, deployment story, strong culture/standards/opinions etc. Go strikes these balances really well.

                                                  For better or worse, Go is the best fit for a giant chunk of software development at this moment in time.

                                                  1. 2

                                                    I think the “cost of a single error” is why companies in security are jumping on Rust. The issue is that no one cares about the cost of a single error if it can be put off to runtime, since no matter how much devops synergy takes place, the time and money spent diagnosing a runtime error often don’t fall on the programmer who wrote it, let alone the person who made the language choice. By allowing nil - or nil, nil - to be a part of the language, the Go story is one of late nights and anxiety at runtime, instead of chewed pencils and squeezed rubber ducks at compile time. The labor market seems to prefer runtime pain, because you can hire a wider range of people to write and track down preventable bugs than you can hire to write in a language that sidesteps them completely. This is perhaps too sarcastic a take, but oh well. I’ve been paid to write both, and I don’t sweat about the Rust I’ve written from a maintenance or a “will it crash” perspective. Go, on the other hand, has so much room for unplanned activities.

                                                    1. 5

                                                      Eh, I’ve written plenty of bugs in Rust, which has been a frustrating and disappointing experience considering all of the energy I would put into pacifying the borrow checker. After having used it, I would say I get so wrapped up in thinking about borrow-checker-friendly architectures and planning refactoring (“If I change this thing from owned to borrowed, is this going to cascade through my program such that I spend untold hours on this refactor?”) that I actually lose track of details that the type system can’t help me with (“did I make sure to escape this string correctly?”, “did I pass the right PathBuf to this function?”, etc). And of course it takes me much longer to write that code.

                                                      I don’t want to make too big of a deal about that point - I think Rust’s quality is still a bit better than Go’s, but the difference seems exaggerated to me, and overall it has been disappointing considering how much time and energy goes into pacifying the borrow checker. I also haven’t heard much discussion about the quality impact of taking the developer’s focus off the domain and onto the borrow checker. That said, Rust’s productivity hit is a huge deal for most software development, and you can diagnose and patch a lot of bugs in the amount of time you save by writing Go (including bugs that Rust’s type checker can’t prevent at all).

                                                      Again, I like Rust, and I’m glad it exists, but it’s (generally) not well suited for the sort of software that Go is used to write. It’s a great replacement for C and C++ though (including the security space).

                                                      1. 2

                                                        I agree - the problem space for Rust is ideal for implementing known solutions. After a year of coding only in Rust, ‘appeasing the borrow checker’ became about as difficult as ‘appeasing the syntax checker’ … 97% of the time. The other 3% of the time you can find yourself painted into a corner in a way that isn’t very satisfying to fix. Still, the basic rules on lifetimes and lifetime elision do creep into working knowledge after a while; I just haven’t seen a document that distills what that knowledge entails into basic patterns or rules.

                                                        1. 1

                                                          I don’t want to make too big of a deal about that point–I think Rust’s quality is still a bit better than Go’s, but the difference seems exaggerated to me and overall has been disappointing considering how much time and energy goes into pacifying the borrow checker.

                                                          Some folks have brought this up in the past but I haven’t seen anyone try and explore this line of inquiry. I personally find myself spending a lot of time thinking about the propagation of borrows and owns as you mentioned, and I find it a drag on thinking about the problem domain, but I only hack on Rust for fun so I can’t say whether this is a problem in production situations or not.

                                                          1. 1

                                                            I’ve kind of had the inverse: going from full-time Rust to Python/Go, I’ve found myself just saying “well, if that error happens, so be it, we’ll crash at runtime”. The cognitive overhead of thinking about catching errors and exceptions (or disentangling them) is large for me returning to these none/nil langs. I do recall the borrow checker absolutely ruining me early on, though. These days I tend to think a lot of the code I write in Python would survive the borrow checker. You can write ____ in any language. :D

                                                      2. 1

                                                        I sort of agree. I wish the world was better calibrated for the tradeoffs Rust makes, and some of it is.

                                                      And that’s cool :) but I think this still comes down to values: your personal values as a constructor of the software, the values embodied by the product, and the values of the business surrounding the product.

                                                        Personally, my values (and interests) align more to building systems where correctness is more important than speed to delivery, where competition of products is assessed by users on more than who has the most features, and where I would rather take the steep learning curve over a shallower one if it means better design/elegance/consistency.

                                                        But these are my values. And I fully recognize that they may not align with the values of others. Particularly with many of the “high-tech”, “fast-moving” companies and “startups” where, as you said and I do agree, being a first mover is a bigger competitive advantage than having a correct (and I’ll add more generally, a higher quality) product.

                                                        1. 1

                                                          I agree with all of this, and indeed it’s what I mean by “I wish the world was better calibrated for the tradeoffs Rust makes”. Specifically, the economic context is such that we’re largely replacing stuff that humans are doing manually with software, so we’re already talking about gains which dwarf any difference between Rust and the most error prone programming languages. The first mover advantages are enormous, so iteration velocity is king. This is what we mean when we say “the world values iteration speed over performance and correctness”. I wish this weren’t the case–I wish performance and correctness were the most important criteria, but until someone lends me a magic lamp, I must deal with reality.

                                                    1. 5

                                                        Doesn’t include anything in the APL language family. BQN is my favorite: it’s modern in many ways and supports other paradigms too. After learning the basics it’s sufficient even for tasks that don’t fit the array paradigm at all, yet it’s much better than traditional languages, numpy, etc. when working with n-dimensional arrays.

                                                      1. 2

                                                        Wow I had not known about BQN but I’ve played with J and APL. Thanks so much for this.

                                                      1. 24

                                                        Am I the only one being completely tired of these rants/language flamewars? Just use whatever works for you, who cares

                                                        1. 11

                                                          You’re welcome to use whatever language you like, but others (e.g. me) do want to see debates on programming language design, and watch the field advance.

                                                          1. 6

                                                              Do debates in blogs and internet comments meaningfully advance language design compared to, say, researchers and engineers exploring and experimenting and holding conferences and publishing their findings? I think @thiht was talking about the former.

                                                            1. 2

                                                              I’m idling in at least four IRC channels on Libera Chat right now with researchers who regularly publish. Two of those channels are dedicated to programming language theory, design, and implementation. One of these channels is regularly filled with the sort of aggressive discussion that folks are tired of reading. I don’t know whether the flamewars help advance the state of the art, but they seem to be common among some research communities.

                                                              1. 5

                                                                Do you find that the researchers, who publish, are partaking in the aggressive discussions? I used to hang out in a couple Plan 9/9front-related channels, and something interesting I noticed is that among the small percentage of people there who made regular contributions (by which I mean code) to 9front, they participated in aggressive, flamey discussion less often than those that didn’t make contributions, and the one who seemed to contribute the most to 9front was also one of the most level-headed people there.

                                                                1. 2

                                                                  It’s been a while since I’ve been in academia (I was focusing on the intersection of PLT and networking), and when I was there none of the researchers bothered with this sort of quotidian language politics. Most of them were focused around the languages/concepts/papers they were working with and many of them didn’t actually use their languages/ideas in real-world situations (nor should they, the job of a researcher is to research not to engineer.) There was plenty of drama in academia but not about who was using which programming language. It had more to do with grant applications and conference politics. I remember only encountering this sort of angryposting about programming languages in online non-academic discussions on PLT.

                                                                  Now this may have changed. I haven’t been in academia in about a decade now. The lines between “researcher” and “practitioner” may have even become more porous. But I found academics much more focused on the task at hand than the culture around programming languages among non-academics. To some extent academics can’t be too critical because the creator of an academic language may be a reviewer for an academic’s paper submission at a conference.

                                                                  1. 2

                                                                    I’d say that about half of the aggressive folks have published programming languages or PLT/PLD research. I know what you’re saying — the empty cans rattle the most.

                                                            2. 8

                                                              You are definitely not the only one. The hide button is our friend.

                                                              1. 2

                                                                So I was initially keen on Go when it first came out, but I have since switched to Rust for a number of different reasons, correctness and elegance among them.

                                                                But I don’t ever say “you shouldn’t use X” (where ‘X’ is Go, Java, etc.). I think it is best to promote neat projects in my favorite language. Or spending a little time to write more introductory material to make it easier for people interested to get started in Rust.

                                                                1. 2

                                                                  I would go further, filtering for rant, meta and law makes Lobsters much better.

                                                                  rant is basically the community saying an article is just flamebait, while stopping short of outright removing it. You can choose to filter it out.

                                                                2. 5

                                                                  I think this debate is still meaningful because we cannot always decide what we use.

                                                                  If there are technical or institutional barriers, you can ignore $LANG - for example, if you’re writing Android apps you will use a JVM language (either Kotlin or Java) - but if you are writing backend services, outside forces may compel you to adopt Go, despite its shortcomings detailed in this post (and others by the author).

                                                                  Every post of this kind helps those who find themselves facing a future where they must write Go to articulate their misgivings.

                                                                1. 2

                                                                  Seems like a lot of situational (“nuclear apocalypse”) roleplaying to say that OpenBSD has better manpages than Linux. With FreeBSD my experience has been that the manpages are about as terse as Linux’s, but the Handbook makes the system a joy to read about and use. Are OpenBSD’s manpages that much better? xinit has the same manpage on OpenBSD as it does on Ubuntu. mail has a good manpage. Is there a good example of these differences between Linux and OpenBSD in a manpage? I’d love to see it.

                                                                  OpenBSD was create for free men like you, enjoy it.

                                                                  Lol

                                                                  1. 2

                                                                    Is there a good example of these differences between Linux and OpenBSD in a manpage? I’d love to see it.

                                                                    I find OpenBSD’s manpages better written. Some of the examples I routinely consult (OpenBSD vs. Linux):

                                                                    As you will notice above, some of OpenBSD’s manpages are shorter than Linux’s (or GNU’s) - notably, awk(1) vs. gawk(1). On the other hand, awk(1) links to script(7), which doesn’t have an equivalent (as far as I know) on Linux.

                                                                    However, in general, I tend to find the information I want on OpenBSD’s manpages much quicker than on Linux’s, and I usually find the explanations better. And the fact that almost all of them have an EXAMPLES section is very handy.

                                                                    As a side effect of their readability, I tend to read OpenBSD’s manpages more often, and more thoroughly, than I use(d) to on other systems (including other BSDs).

                                                                    I wonder whether mdoc(7) is the reason OpenBSD’s manpages are so uniformly better than their equivalents.

                                                                    P.S.: I linked to the Ubuntu manpages instead of the man7.org ones because man 1 ed on my (Ubuntu) system presents the same telegraphic manpage as the one linked, while man7.org’s is the POSIX Programmer’s Manual manpage, which isn’t even installed on my system.

                                                                    1. 3

                                                                      I wonder whether mdoc(7) is the reason OpenBSD’s manpages are so uniformly better than their equivalents

                                                                      Having somewhat recently switched to mdoc from classic/Linux man macros, it certainly doesn’t hurt. It’s dramatically better in every way. Semantic markup, built-in decent HTML output, tool-enforced uniformity, etc. It shows that it was built by people who actually think man pages are good.

                                                                      1. 1

                                                                        I decided to go a little further and compare equivalent manpages that exist only on OpenBSD or only on Linux, triggered by your comment about the manpage of xinit(1) being the same on OpenBSD and Linux - it is the same because it is imported code on both systems, so it makes sense that it matches in both operating systems. (The same happens, for example, with tmux(1).)

                                                                        So I compared OpenBSD’s ktrace(1) vs. Linux’s strace(1) and, in my opinion, it shows the difference between those systems well: the tools provide the same functionality, but the manpage of the latter is overwhelming and abstruse compared to that of the former.

                                                                        Thus, I think OpenBSD’s manpages are a great example of its KISS attitude, without sacrificing completeness of information.

                                                                      2. 2

                                                                      I’m not sure to what degree it’s enforced, but OpenBSD used to refuse to merge any changes that affected an interface if they didn’t come with updates to the man page. FreeBSD was never quite as aggressive, and most Linux things don’t come close to either. For example, consider something like _umtx_op, which describes the FreeBSD futex analog. Compare this to the futex man page and you’ll notice two things: first, it’s a lot less detailed; second, it has an example of a C wrapper around the system call that isn’t actually present in glibc or musl. OpenBSD’s futex man page isn’t that great - there are a bunch of corner cases that aren’t explicit.

                                                                        Or kqueue vs epoll - the pages are a similar length, but I found the kqueue one was the only reference that I needed, whereas the epoll one had me searching Stack Overflow.

                                                                        The real difference between *BSD and Linux is in the kernel APIs. For example, let’s look up how to do memory allocation in the kernel. FreeBSD has malloc(9), OpenBSD has malloc(9), both with a description of the APIs. The level of detail seems similar. Linux has no kmalloc man page.

                                                                      1. 8

                                                                        Wow.

                                                                        It’s disappointing how slow the roll-out of fast fibre broadband in the UK has been. A couple of years ago I was living in central London, with 67Mb/s the top download speed I could get. Of course, this was asymmetric, so the upload speed was even worse. After several months, Hyperoptic wired the building up with fibre and I could get a symmetric gigabit connection, which was fantastic.

                                                                        Then I had to move slightly further out, still in the London area and still with a fibre connection, but now the best I can get is asymmetric 550Mb/s down / 35Mb/s up. Yes, this is still really fast, but… it’s so much worse than it could be!

                                                                        1. 6

                                                                          In Cambridge, I have BT’s FTTP package, which is 900 Mb/s down, 110 Mb/s up. I don’t know why they do the asymmetric thing, I’d more happily pay for a 500 Mb/s symmetric link. CitiFibre is rolling out a parallel fibre network, though it doesn’t seem reliable. Folks I know using it frequently report downtime and often have poor quality in video calls and so I suspect that they’re not getting anything like their promised 1000Mb/s symmetric bandwidth.

                                                                          Generally, with 900Mb/s downloads, the bottleneck is elsewhere. A lot of servers top out at 200-400 Mb/s, so with wired GigE I can make a second fast connection somewhere else but can’t get more speed from a single download location. With 802.11ac, the WiFi is more often a bottleneck than the wired connection. I don’t have any 11ax hardware yet, in theory it should move the bottleneck back.

                                                                          Upgrading the wiring in my house to handle more than GigE is probably a lot of effort, so I doubt that I’d get much benefit from a faster connection - I only upgraded the switches from 100Mb/s to 1Gb/s a couple of years ago after GigE equipment prices dropped to the dirt-cheap price that I paid for the 100Mb/s hardware I’ve had for 10-15 years. 10GigE switches seem to cost about 50 times as much as 1GigE ones, so I’m in no hurry to upgrade.

I remember the upgrade from 2400 baud to 14.4 Kb/s and then to 28.8 Kb/s as big jumps that made it possible to load images on web pages by default most of the time. The jump to a 512 Kb/s cable modem in a shared house was a huge improvement: first because it was always on, and second because it meant that downloading entire videos or Linux ISOs was feasible (though with some traffic shaping at the router, especially so that someone using BitTorrent didn’t saturate the upstream and prevent ACK packets getting through for everything else. I learned to use PF / ALTQ on OpenBSD from one of my housemates solving that problem). I was living with geeks and so when the 1 Mb/s option came along we jumped on it and had enough spare bandwidth that we could listen to decent-quality Internet radio. I did set up a repeater for Radio Paradise so that we weren’t all using half of the bandwidth to download the same stream, though.

I think the provider (NTL, later Virgin Media) upgraded us to 5 Mb/s and then 10 Mb/s at the same price. That was, again, a big jump because we didn’t need to restrict usage at all. I stayed on the 10 Mb/s connection (by then living by myself) as it went from the most expensive package to the cheapest, and as the cheapest connection went from 10 to 20 to 30 Mb/s. Streaming video came out around then and Virgin Media did some annoying rate limiting, which meant that if you watched an hour of HD video at peak times you’d be throttled for a few hours. They stopped that after a year or so.

                                                                          I think I stayed on 30 Mb/s until moving here. I moved from the cheapest FTTP offering to the most expensive during lockdown when working from home and wanting to make sure the Internet wasn’t a bottleneck (again, mostly for upstream) but 99% of the time I don’t notice the difference. We can play cloud games on Xbox game pass and stream HD video at the same time, but I think you could do that on the 56 Mb/s connection too. Backing up from my NAS to the cloud is faster and downloading games from Game Pass or gog.com is faster (a lot faster from gog.com), but I increasingly don’t install games locally given how good the streaming option is (Game Pass pops up a thing saying ‘Install this game for the best experience’, but I don’t consider worse graphics and longer loading times from my Xbox One S versus the Xbox Series X in the cloud to be the best experience).

Maybe 3D AR things will drive up the demand again, but since we passed 50Mb/s we’ve been well into diminishing-returns land, unless you have a large family that all wants to watch different HD films at the same time.

                                                                          1. 4

                                                                            I don’t know why they do the asymmetric thing,

Sometimes the underlying infrastructure is asymmetrical, e.g. with GPON. But mostly, I guess, the big end-user ISPs optimize their networks for incoming traffic from the big content providers.

                                                                            1. 1

                                                                              I suspect it’s also to discourage people from using residential connections to operate servers.

                                                                              1. 2

It’s usually because residential users tend to consume content rather than produce it. Offering a symmetrical 200 Mbit connection is generally less useful than a 300/100 Mbit connection. This also lets ISPs cut costs further as they try to use the available channels for downlink rather than uplink. There are limits to how far this goes, as you definitely don’t want to saturate your uplink while trying to consume content, but that’s typically why.

                                                                                1. 2

This is exactly right. We enshrine into technologies and solutions the approaches people are taking at the time, which means asymmetry was an engineering shortcut to maximize the usefulness of the technology for what people actually needed.

                                                                                  And then the rest of us upload images to the cloud and actually get around to saturating that upload, dreaming of a world with symmetric links.

                                                                                2. 1

Also the reason why you can’t get static IPv6 prefixes from most providers.

                                                                            2. 3

                                                                              Hello from the North of England! I’m jealous; there are certainly benefits to moving away from London (I lived there for 15 years) but when it comes to internet speeds the saying “it’s grim up north” certainly rings true!

                                                                              Speedtest.net reports 25 Mb/s download, 5 Mb/s upload, and 29 ms ping times for my current connection. And that’s a fantastic improvement since I moved 5 months ago: at my old house a few miles away the fastest connection money could buy was 19 Mb/s down, and just over 1 Mb/s upload. I work from home, and Zoom calls can be rough when others in the house are playing online games.

                                                                              Edit: fixed MB/s -> Mb/s (oops)

                                                                              1. 2

                                                                                I still only get 28Mb/s down in zone 3 of London. Our infrastructure is generally awful.

                                                                                By the way you should know there’s a big difference between “Mb” and “MB”.

                                                                                1. 1

                                                                                  I’m not sure where Speedtest.net’s edge is, but 29 ms ping times can be killer for video calls depending on the latency to Zoom’s closest video edge. Is the 29 ms over WiFi?

                                                                                  1. 1

That’s interesting. Yes, it’s over Wi-Fi. I don’t own any computers with a physical network port any more, but I can try to see if I can do a Speedtest from the router. If I get better ping times from that I’ll try to stretch a cable via the loft to my office and buy a USB-C network dongle.

                                                                                    1. 2

On my home network, speed tests tend to read around 35 ms of latency under load on WiFi. Latency stays lower when I’m using Ethernet (and I’ve corroborated similar numbers using iPerf). Zoom performance is way better on my home network with an Ethernet connection even if I’m the only one using it (many fewer stutters or freezes). When both my partner and I are using Zoom over WiFi, the experience is pretty terrible unless one of us gets on Ethernet (since it’s easy to have frames collide on WiFi, causing retries and latency on the RTP “connections” Zoom uses to send video).

                                                                                      1. 1

Pinging your gateway may also give you an approximate picture of how much latency your Wi-Fi leg is contributing to your score, but with less effort.

                                                                                        1. 1

                                                                                          Thanks, that’s a great idea. Running mtr from my laptop to the domain of my ISP yields this for the first two hops:

                                                                                                                                 Packets               Pings
                                                                                           Host                                Loss%   Snt   Last   Avg  Best  Wrst StDev
                                                                                           1. 192.168.1.1                       0.0%    67   26.6   7.5   1.8 124.2  15.9
                                                                                           2. fritz.box                         0.0%    67    3.2   5.2   2.5  21.0   3.6
                                                                                          

                                                                                          192.168.1.1 is a TP-link mesh-networking thing that’s plugged into fritz.box (my ADSL router) with a short cat-5 cable.

Walking through the ADSL router’s options looking for a speed-test feature, it looks like it too supports mesh, so I will try to make it the primary. That might let me discard a hop some of the time? I can see the router itself from half my house, but tend to connect to the mesh. (It has a cooler network name ;-) )

                                                                                          1. 1

                                                                                            Does your ADSL router have an AP as well? If not then this is standard. Your packet first goes to the AP which then pushes your packet to the router and then to the upstream ISP router.

                                                                                            Try running an mtr to a remote and see how much time is spent getting to your AP.

                                                                                  2. 1

                                                                                    Honestly the state of broadband in the capital was extremely dire 8 years ago. It doesn’t surprise me that you’re not having a good time but I am impressed you’re getting those speeds.

                                                                                    I was on 16Mbit and it would die every night. 3 places in wildly different areas had the same awful oversubscribed ADSL thing. I even ranted about it at length: http://blog.dijit.sh/the-true-state-of-london-broadband

                                                                                  1. 4

                                                                                    Is any server really going to send you data fast enough to justify a huge pipe like that? I have a measly 200mbps connection (1% of that!) and I rarely see my computer receiving anything close to its capacity. Maybe just when I download a new version of Xcode from Apple.

                                                                                    (Obligatory grandpa boast about how my first modem was 110bps — on a Teletype at my middle school — and I’ve experienced pretty much every generation of modem since, from 300 to 1200 to 2400 to… Of all those, the real game changer was going to an always-on DSL connection in the late 90s.)

                                                                                    1. 4

                                                                                      It’s easy to fill a Gigabit line these days in my experience. With a faster uplink, now all devices at my home can fill at least a Gigabit line, at the same time :)

                                                                                      1. 1

Filling 1 Gbps is trivial, but pushing 25 Gbps of data would be rather challenging if you fully utilize the 25 Gbps duplex with NAT: 25 Gbps in each direction means 100 Gbps of throughput for the router. That’s a huge load on the router, for both software and hardware. For benchmarks, you could rent hourly-billed Hetzner VPSes; they have 10 Gbps connections at a fairly cheap price. I’m also wondering what the peering status of this ISP is, since the 25 Gbps doesn’t really mean anything unless you have huge pipes connected to other ASNs. Even with dual 100 Gbps uplinks, the network can only serve 8 customers at full speed, which is :(

                                                                                        1. 3

init7 peers with Hetzner directly; other customers report getting 5+ Gbit/s for their backups to Hetzner servers :)

                                                                                          The hetzner server I rent only has a 1 Gbit/s port. Maybe I’ll rent an hourly-billed one just for the fun of doing speed tests at some point.

                                                                                          1. 1

In the meantime, I found this product interesting when searching for the CCR2004, at an MSRP of $199.

                                                                                            https://mikrotik.com/product/ccr2004_1g_2xs_pcie

The 2C/3C low-end “cloud” servers have a full 10G connection, and they’re available across multiple regions.

                                                                                            1. 2

                                                                                              What discourages me massively about this device is clunky integration like this:

                                                                                              This form-factor does come with certain limitations that you should keep in mind. The CCR NIC card needs some time to boot up compared to ASIC-based setups. If the host system is up before the CCR card, it will not appear among the available devices. You should add a PCIe device initialization delay after power-up in the BIOS. Or you will need to re-initialize the PCIe devices from the HOST system.

                                                                                              Also active cooling, which means the noise level is likely above the threshold for my living room :)

                                                                                      2. 2

                                                                                        DigitalOcean directly peers with my ISP and I can frequently saturate my 1 Gbit FTTH. I use NNCP to batch Youtube downloads I might be interested in and grab them on demand from DO at 1 Gbit, which I have to say is awesome, cause I can download long 4/8K videos in seconds.

                                                                                        1. 1

It’s pretty easy to saturate that symmetrically once you have multiple people & devices in the mix, e.g. stream a 4K HDR10 movie in the living room while a couple of laptops are sending dozens of gigs to Backblaze and the kid is downloading a new game from Steam.

                                                                                          1. 3

Not really, 4K streaming isn’t that scary: the highest bitrate I’ve ever seen is the Spider-Man one from Sony at 80 Mbps, a Backblaze backup over WiFi maybe uses 1 Gbps, and a Steam download is also capped at 1 Gbps. So it only uses about 3 Gbps, far from saturated.

                                                                                            1. 2

                                                                                              Yeah sorry, I meant it’s not hard to saturate GP’s 200Mbps connection. The appeal of 25Gbps is that you’re not going to saturate it no matter what everyone in the house is doing, for at least the next few years.

                                                                                        1. 23

The thing is that systemd is not just an init system, given that it wants to cover a lot of areas and “seeps” into userspace. There is understandably a big concern about this, and not just one of a political nature. Many have seen the problems the pulseaudio monoculture has brought, which is a comparable case. It goes without saying that ALSA has its problems, but pulseaudio is very bloated and other programs (sndio, pipewire (!)) do a much better job, yet now have a lot of trouble gaining traction (and even outright have to camouflage themselves as libpulse.so).

                                                                                          Runit, sinit, etc. have shown that you can rethink an init system without turning it into a monoculture.

                                                                                          1. 4

                                                                                            In theory, having all (or at least most) Linux distros on a single audio subsystem seems like a good idea. Bugs should get fixed faster, compatibility should be better, it should be easier for developers to target the platform. But I also see a lot of negativity toward PulseAudio and people seem to feel “stuck” with it now.

                                                                                            So where’s the line between undesirable monoculture and undesirable fragmentation?

                                                                                            1. 21

                                                                                              The Linux ecosystem is happy with some monocultures, the most obvious one is the Linux kernel. Debian has dropped support for other kernels entirely, most other distros never tried. Similarly, with a few exceptions such as Alpine, most are happy with the GNU libc and coreutils. The important thing is quality and long-term maintenance. PulseAudio was worse than some of the alternatives but was pushed on the ecosystem because Poettering’s employer wanted to control more of the stack. It’s now finally being replaced by PipeWire, which seems to be a much better design and implementation. Systemd followed the same path: an overengineered design, a poor implementation (seriously, who in the 2010s, thought that writing a huge pile of new C code to run in the TCB for your system was a good idea?) and, again, pushed because Poettering’s employer wanted to control more of the ecosystem. The fact that the problems it identifies with existing service management systems are real does not mean that it is a good solution, yet all technical criticism is overridden and discounted as coming from ‘haters’.

                                                                                              1. 5

                                                                                                seriously, who in the 2010s, thought that writing a huge pile of new C code to run in the TCB for your system was a good idea?

I really want to agree with you here, but looking back at 2010, what other choice did he realistically have? Now it’s easy: everyone will just shout Rust. But according to Wikipedia, Rust didn’t have its first release till June, while systemd had its first release in March.

                                                                                                There were obviously other languages that were much safer than C/C++ around then but I can’t think of any that people would have been okay with. If he had picked D, for example, people would have flipped over the garbage collection. Using a language like python probably wasn’t a realistic option either. C was, and still is, ubiquitous just like he wanted systemd to be.

                                                                                                1. 3

                                                                                                  I really want to agree with you here, but looking back at 2010 what other choice did he realistically have?

                                                                                                  C++11 was a year away (though was mostly supported by clang and gcc in 2010), but honestly my choice for something like this would be 90% Lua, 10% modern C++. Use C++ to provide some useful abstractions over OS functionality (process creation, monitoring) and write everything else in Lua. Nothing in something like systemd is even remotely performance critical and so there’s no reason that it can’t be written in a fully garbage collected language. Lua coroutines are a great abstraction for writing a service monitor.

                                                                                                  Rust wouldn’t even be on my radar for something like this. It’s a mixture of things that can’t be written in safe Rust (so C++ is a better option because the static analysis tools are better than they are for the unsafe dialect of Rust) and all of the bits that can could be written more easily in a GC’d language (and don’t need the performance of a systems language). I might have been tempted to use DukTape’s JavaScript interpreter instead of Lua but I’d have picked an interpreted, easily embedded, GC’d language (quickjs might be a better option than DukTape now but it wasn’t around back then).

                                                                                                  C was, and still is, ubiquitous just like he wanted systemd to be.

                                                                                                  Something tied aggressively to a single kernel and libc implementation (the maintainers won’t even accept patches for musl on Linux, let alone other operating systems) is a long way away from being ubiquitous.

                                                                                                2. 4

In what sense is Pipewire any kind of improvement on the situation? It’s >gstreamer< being re-written by, checking notes, the same gstreamer developers, with the sole improvement over the previous design being the use of dma-buf as a primitive, and with the same problems we have with dma-buf being worse than (at least) its iOS and Android counterparts. Poettering’s employer is the same as Wim Taymans’s. It is still vastly inferior to what DirectShow had with GraphEdit.

                                                                                                3. 14

                                                                                                  I’ve been using Linux sound since the bad old days of selecting IRQs with dipswitches. Anyone who says things are worse under PulseAudio is hilariously wrong. Sound today is so much better on Linux. It was a bumpy transition, but that was more than a decade ago. Let it go.

                                                                                                  1. 6

                                                                                                    Sound today is so much better on Linux.

                                                                                                    Mostly because of improvements to ALSA despite pulseaudio, not because of it.

                                                                                                    1. 4

                                                                                                      Yep! Pulseaudio routinely forgot my sound card existed and made arbitrary un-requested changes to my volume. Uninstalling it was the single best choice I’ve made with the software on my laptop in the last half decade.

                                                                                                  2. -2

                                                                                                    It’s no accident that PulseAudio and SystemD have the same vector, Poettering.

                                                                                                    1. 16

                                                                                                      The word you’re looking for is “developer”, or “creator”. More friendlysock experiment, less name-calling, please :)

                                                                                                      1. 3

                                                                                                        Was Poettering not largely responsible for the virulent spread of those technologies? If so, I think he qualifies as a vector. I stand by my original wording.

                                                                                                        1. 6

It’s definitely an interesting word choice. To quote Merriam-Webster: vector (noun), \ˈvek-tər\

                                                                                                          1. […]
                                                                                                            1. an organism (such as an insect) that transmits a pathogen from one organism or source to another
                                                                                                            2. […]
                                                                                                          2. an agent (such as a plasmid or virus) that contains or carries modified genetic material (such as recombinant DNA) and can be used to introduce exogenous genes into the genome of an organism

To be frank, I mostly see RedHat’s power hunger at fault here. Mr. Poettering was merely an employee whose projects, which without doubt follow a certain ideology, fit into this monopolistic endeavour. No one is to blame for promoting their own projects, though, and many distributions quickly followed suit in adopting the RedHat technologies which we are now more or less stuck with.

Maybe we can settle on RedHat being the vector for this, because without their publicity probably no one would’ve picked up any of Poettering’s projects at a large scale. To give just one argument for this, consider the fact that PulseAudio’s addition to Fedora (which is heavily funded by RedHat) at the end of 2007 coincides with Poettering’s latest-assumed start of employment at RedHat in 2008 (probably earlier), while PulseAudio wasn’t given much attention beforehand.

                                                                                                          Let’s not attack the person but discuss the idea though. We don’t need a strawman to deconstruct systemd/pulseaudio/avahi/etc., because they already offer way more than enough attack surface themselves. :)

                                                                                                          1. 5

                                                                                                            Let’s not attack the person but discuss the idea though. We don’t need a strawman to deconstruct systemd/pulseaudio/avahi/etc., because they already offer way more than enough attack surface themselves. :)

                                                                                                            This is why this topic shouldn’t be discussed on this site.

                                                                                                1. 5

                                                                                                  The new hotness is a single binary blog, with all the posts embedded inside.

                                                                                                  mutters kids these days <closes tab>

                                                                                                  Seriously though, why? Generating a static site from a bunch of files or some content in a DB is a Solved Problem™. I guess it’s the age old truth that it’s way more fun to design and code a blog engine than to… blog.

                                                                                                  1. 13

                                                                                                    That’s what I’d expect, one is programming and the other is writing. Most of us find programming easier.

                                                                                                    1. 8

                                                                                                      to me, part of the idea is that the “blog” can be extended to be much more than a blog. if you use hugo or a similar tool, how would you implement a tool like my age encryption thing or even a public ip fetcher?

                                                                                                      you noted that building a blogging engine is fun - it is! a lot of fun! my take is that people should focus on making a website that’s not just a blog - it’s a fun, personal engine that happens to contain a blog. focusing on “blogs are a solved problem” is missing the point, imo.

                                                                                                      1. 6

                                                                                                        I use Hugo, and to provide the tools you do I’d simply use a separate subdomain and program for each tool. Why should my blog be concerned with the details of age encryption or even echoing a user’s IP? A blog, in my mind, is a collection of articles. And a website can be backed by several programs.

                                                                                                        In fact, I provide a variety of small sub-services to myself, and they’re simply separate programs. This has the added benefit that I could shuffle them between machines independently of one another.

                                                                                                        1. 4

                                                                                                          Right. It feels like we’re basically reinventing CGI scripts, but worse.

                                                                                                          1. 1

                                                                                                            why should my blog concern itself with…

                                                                                                            if you enjoy what you’re currently doing, i’m not here to persuade you. i optimize my personal projects for fun, reliability, and maintainability. :3 building separate services and making subdomains for trivial functions isn’t a good time imho. i also don’t like that i’d have to think pretty hard if i wanted to integrate those services on the blog (say, dynamically load a users IP in a post). with one binary, everything is already all connected. but honestly to defend the “validity” of “my way” feels meh. i like parades - the more the merrier!

                                                                                                          2. 1

                                                                                                            I use software to generate HTML from Markdown for my blog, but that’s only part of my site. I have some other static content, some CGI service, a gemsite, etc.

                                                                                                            As far as I can see from my limited understanding of the linked post, it’s basically akin to serving a single HTML page, albeit compressed, from a compiled executable. You still need to generate the HTML before zipping it. So you’ve just shifted one step of the integration pipeline.

                                                                                                            By using tried and tested technology, I can focus on producing the stuff I want. I’ve already run into some issues with gemini server software and it reminded me why I don’t want to deal with that kind of stuff.

                                                                                                            https://portal.mozz.us/gemini/gerikson.com/gemlog/tek/gemserv-update-woes.gmi

                                                                                                            In summary and conclusion, serving my site as a single executable would give me nothing other than bragging rights, and like I stated above, I’m too old for that particular flavor of ordure.

                                                                                                          3. 5

                                                                                                            Same reason why it’s so much more fun to write a game engine than a game 😅

                                                                                                          1. 5

                                                                                                            Yet Another Anti-Web Manifesto (and the question of solving social problems with technical solutions) aside, I’m not sure how this is separate from just writing native apps or web apps. WebAssembly is still required to hook into JS for many bits of page rendering. I guess you could try to use JS/DOM to create a canvas and then have WebAssembly write to the canvas, but then you lose all the work that went into adding native-ish widgets in browsers (moreover you can do something similar by writing raw OpenGL/SDL in one of the many libraries that spawns a native window for you.)

                                                                                                            Writing new net protocols can be fun. With Websockets or HTTP2 you can probably tunnel them into the Web as needed.

                                                                                                            1. 2

                                                                                                              My opinion is:

                                                                                                              • Write the core in cross-platform code. Use a language that lets you build something fast and compact, e.g. C++, Rust, Nim or even Go … but not JS.
                                                                                                              • Use a web-view for rendering stuff the web is good at, like flows of text and media.
                                                                                                              • Use platform-native toolkits for the “chrome” / surrounding UI, and for good platform integration. That does involve sucking it up and writing a few versions of this layer, one per platform you want to support. Cross-platform UI toolkits are crap for people with no taste.
                                                                                                            1. 6

                                                                                                              Who wrote this? By “…when I wrote SSB” I’m guessing it’s Dominic Tarr?

                                                                                                              Mobile platforms are also autonomy robbing simply because they are not cross platform. You have to develop software twice simply because people need to use them on different brands of phone. And again if someone wants to use it on a regular computer. That’s just a silly waste of time.

                                                                                                              Hey, at least there are only two major mobile OSs, as opposed to at least three on “regular” computers. (And all mobile OSs are POSIX compliant.) And it is not a “silly waste of time” to tailor an app to a platform so it supports platform features and integrates coherently with the platform UX.

                                                                                                              I’m not clear what the point of this article is. The web takes away autonomy but so do native apps, somehow WASM will fix it, stay tuned for more. … ?

                                                                                                              1. 8

                                                                                                                There’s a bit of irony here coming from the SSB project which spends an inordinate amount of time dealing with how its state serialization format is tied to the Node runtime.

                                                                                                                1. 3

                                                                                                                  Like the way their signatures rely on the exact behavior of JS’s JSON.stringify function, which makes it inordinately difficult to write a compatible implementation of SSB in any other language.

                                                                                                                2. 2

                                                                                                                  Also, there are a bunch of cross-platform toolkits. Yes, the UX isn’t as good, but it can be done where it is considered economical or otherwise desirable.

                                                                                                                  1. 1

                                                                                                                    Written by Dominic Tarr

                                                                                                                    1. 2

                                                                                                                      😣 I swear that line musta been added after I first read the article…

                                                                                                                      1. 1

                                                                                                                        It may well have been :) Perhaps Dominic lurks around here…

                                                                                                                  1. 7

                                                                                                                    Figured I’d explain why I flagged as “troll” of all things. It’s not a statement on the author @cadey, nor even on systemd itself. I think discussing systemd is fine, but I don’t want flamewars started elsewhere to be adjudicated here on Lobsters.

                                                                                                                    On the nature of “flamewars”: Obviously there are well-intentioned parties involved, there’s no questioning that. Indeed, few flamewars are started for “no good reason”. But that’s just the problem. Once people start using charged language, it becomes hard to distinguish the healthy and unhealthy parts of the conversation. You need look no further than this thread to see Lobsters using the language of epidemiology to discuss other people and projects.

                                                                                                                    Again, I’m not trying to smear the author or the participants in general, or even the topic. But in my view, we should strive to have healthier discussions here on Lobsters.

                                                                                                                    1. 6

                                                                                                                      +1 (though I called it off-topic.)

                                                                                                                      I don’t think it’s useful for Lobsters to turn into the FOSS-drama-gossip site. There’s several Mastodons/Matrix rooms/IRC channels where folks can discuss these flamewars if they so choose.

                                                                                                                    1. 14

                                                                                                                      I work for AWS, my views are my own and do not reflect my employer’s views.

                                                                                                                      Thanks for posting your frustrations with using AWS Lambda, AWS API Gateway, and AWS EventBridge. I agree, using new technologies and handing more responsibility over to a managed service comes with the risk that your organization is unable to adopt and enforce best standards.

                                                                                                                      I also agree that working in a cult-like atmosphere is deeply frustrating. This can happen in any organization, even AWS. I suggest focusing on solving problems and your business needs, not on technologies or frameworks. There are always multiple ways to solve problems. Enumerate at least three, put down pros and cons, then prototype on two that are non-trivially different. With this advice you will start breaking down your organization’s cult-like atmosphere.

                                                                                                                      Specifically addressing a few points in the article:

                                                                                                                      Since engineers typically don’t have a high confidence in their code locally they depend on testing their functions by deploying. This means possibly breaking their own code. As you can imagine, this breaks everyone else deploying and testing any code which relies on the now broken function. While there are a few solutions to this scenario, all are usually quite complex (i.e. using an AWS account per developer) and still cannot be tested locally with much confidence.

This is a difficult problem. I have worked in organizations that have solved this problem using individual developer AWS accounts deploying a full working version of the “entire service” (e.g. the whole of AWS Lambda), with all its little microservices as e.g. different CloudFormation stacks that take ~hours to set up. It works. I have also worked in organizations that have not solved this problem, and resort to maintaining brittle shared test clusters that break once a week and need 1-2 days of a developer’s time to set up. Be the organization that invests in its developers’ productivity and can set up the “entire service” accurately and quickly in a distinct AWS account.

                                                                                                                      Many engineers simply put a dynamodb:* for all resources in the account for a lambda function. (BTW this is not good). It becomes hard to manage all of these because developers can usually quite easily deploy and manage their own IAM roles and policies.

                                                                                                                      If you trust and train your developers, use AWS Config [2] and your own custom-written scanners to automatically enforce best practices. If you do not trust and do not train your developers, do not give them authorization to create IAM roles and policies, and instead bottleneck this authorization to a dedicated security team.

                                                                                                                      Without help from frameworks, DRY (Don’t Repeat Yourself), KISS (Keep It Simple Stupid) and other essential programming paradigms are simply ignored

                                                                                                                      I don’t see how frameworks are connected with DRY and KISS. Inexperienced junior devs using e.g. Django or Ruby on Rails will still write bad, duplicated code. Experienced trained devs without a framework naturally gravitate towards helping their teams and other teams re-use libraries and create best practices. I think expecting frameworks to solve your problem is an equally cult-like thought pattern.

                                                                                                                      Developers take the generic API Gateway generated DNS name (abcd1234.amazonaws.com) and litter their code with it.

                                                                                                                      Don’t do this, attach a Route 53 domain name to API Gateway endpoints.

                                                                                                                      The serverless cult has been active long enough now that many newer engineers entering the field don’t seem to even know about the basics of HTTP responses.

                                                                                                                      Teach them.

                                                                                                                      Cold starts - many engineers don’t care too much about this.

                                                                                                                      I care about this deeply. Use Go or Rust first, see how much cold starts are still a problem, in my experience p99.99 latency is < 20 ms for trivial (empty) functions (this is still an outrageously high number for some applications). If cold starts on Go or Rust are still a problem, yes you need to investigate provisioned concurrency. But this is a known limitation of AWS Lambda.

                                                                                                                      As teams chase the latest features released by AWS (or your cloud provider of choice)

                                                                                                                      Don’t do this, give new features / libraries a hype-cool-down period that is calibrated to your risk profile. My risk profile is ~6 months, and I avoid all libraries that tell me they are not production ready.

                                                                                                                      When it’s not okay to talk about the advantages and disadvantages of serverless with other engineers without fear of reprisal, it might be a cult. Many of these engineers say Lambda is the only way to deploy anymore.

                                                                                                                      These engineers have stopped solving problems, they are now just lego constructors (I have nothing against lego). Find people who want to solve problems. Train existing people to want to solve problems.

                                                                                                                      I am keeping track of people’s AWS frustrations, e.g. [1]. I am working on the outline of a book I’d like to write on designing, deploying, and operating cloud-based services focused on AWS. Please send me your stories. I want to share and teach ideas for solving problems.

                                                                                                                      [1] https://blog.verygoodsoftwarenotvirus.ru/posts/babys-first-aws/

                                                                                                                      [2] https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html

                                                                                                                      1. 4

                                                                                                                        The serverless cult has been active long enough now that many newer engineers entering the field don’t seem to even know about the basics of HTTP responses.

                                                                                                                        Teach them.

                                                                                                                        I’m happy to teach anyone who wants to learn. Unfortunately this usually comes up in the form of their manager arguing that it’s too much overhead to spend time getting their employee(s) up to speed on web tech and insist on using serverless as a way to paper over what is happening throughout the stack. This goes to the heart of why people characterize it as a cult. The issues it brings into orgs isn’t about the tech as much as it is about the sales pitches coming from serverless vendors.

                                                                                                                        1. 9

                                                                                                                          Interesting. At $WORK, we’re required to create documents containing alternatives that were considered and rejected, often in the form of a matrix with multiple dimensions like cost, time to learn, time to implement, etc. Of course there’s a bit of a push-pull going on with the managers, but we usually timebox it (1 person 1 week if it’s a smaller decision, longer if it’s a bigger one.) Sometimes when launching a new service we’ll get feedback from other senior engineers asking why we rejected an alternative maybe even urging us to reconsider the alternative.

                                                                                                                          Emotional aspects of the cult aside (which sucks, not saying it doesn’t just bringing up a different point), I don’t think I’d ever let a new system be made at work if at least a token attempt weren’t made at evaluating different technologies. I fundamentally think comparing alternatives makes for better implementations, especially when you have different engineers with different amounts of experience with different technologies.

                                                                                                                          1. 1

                                                                                                                            So you write an RFP with metrics/criteria chosen to perfectly meet the solution already settled on?

                                                                                                                            1. 2

                                                                                                                              I mean if that’s what you want to do, sure. Humans will be human after all. But having this kind of a process offers an escape hatch from dogma around a single idea. Our managers also try to apply pressure to just get started and ignore comparative analyses, but with a dictum from the top, you can always push back, citing the need for a true comparative analysis. When big outages happen, questions are asked in the postmortem whether an alternate architecture would have prevented any issues. In practice we often get vocal, occasionally bikeshed-level comments on different approaches.

                                                                                                                              I’m thankful for our approach. Reading about other company cultures reminds me of why I stay at $WORK.

                                                                                                                          2. 2

Try giving them alternatives. Asking “Do you want to train your developers, or sign off on technical debt and your responsibility to fix it?”, when presented well, can point out the issue. This happens with all tech vendors, and all managers can suck at this. But that’s not the fault of serverless.

                                                                                                                            Note that I’m not arguing that serverless is actually good. As with any tech, the answer is usually “it depends”. But just like serverless, you need experience with other things as well to be able to see this pattern.

In fact, I agree with several commenters saying that the majority of issues in the article can be applied to any tech. The only really insurmountable technical issue is the testing/local stack. The rest is mostly about the processes of the company, or maybe of a team in the company.

                                                                                                                          3. 4

                                                                                                                            Specifically addressing a few points in the article

                                                                                                                            … while carefully avoiding the biggest one:

                                                                                                                            “All these solutions are proprietary to AWS”

                                                                                                                            That right there is the real problem. An entirely new generation of devs is learning, the hard way, why it sucks to build on proprietary systems.

                                                                                                                            Or to put it in economic terms, ensure that your infrastructure is a commodity. As we learned in the 90s, the winning strategy is x86 boxen running Linux, not Sun boxen running Solaris ;) And you build for the Internet, not AOL …

                                                                                                                            1. 2

                                                                                                                              I think there are three problems with a lot of the serverless systems, which are closely related:

                                                                                                                              • They are proprietary, single-vendor solutions. If you use an abstraction layer over the top then you lose performance and you will still end up optimising to do things that are cheap with one vendor but expensive for others.
                                                                                                                              • They are very immature. We’ve been building minicomputer operating systems (and running them on microcomputers) for 40+ years and know what abstractions make sense. We don’t really know what abstractions make sense for a cloud datacenter (which looks a bit like a mainframe, a bit like a supercomputer, and a bit like a pile of servers).
                                                                                                                              • They have a lot of vertical integration and close dependencies between them, so it’s hard to use some bits without fully buying into the entire stack.

If you think back to the late ’70s / early ’80s, a lot of things that we take for granted now were still very much in flux. For example, we now have a shared understanding that a file is a variable-sized contiguous blob of bytes; back then, a load of operating systems provided record-oriented filesystems, where each file was an array of strongly typed records. If you do networking, then you now use the Berkeley Sockets API (or a minor tweak like WinSock), but that wasn’t really standardised until 1989.
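To make the shared-abstraction point concrete, here’s a minimal sketch of the Berkeley Sockets model in Python (the host and request are purely illustrative): the same handful of calls now work essentially unchanged on every mainstream OS.

```python
import socket

# Connect, send bytes, receive bytes: the abstraction that eventually won out.
with socket.create_connection(("example.org", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
    response = sock.recv(4096)   # an untyped "blob of bytes", not typed records
    print(response.decode("latin-1", errors="replace").splitlines()[0])
```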

                                                                                                                              Existing FaaS offerings are quite thin shims over these abstractions. They’re basically ‘upload a Linux program and we’ll run it with access to some cloud things that look a bit like the abstractions you’re used to, if you use a managed language then we’ll give you some extra frameworks that build some domain-specific abstractions over the top’. The domain-specific abstractions are often overly specialised and so evolve quite quickly. The minicomputer abstractions are not helpful (for example, every Azure Function must be associated with an Azure Files Store to provide a filesystem, but you really don’t want to use that filesystem for communication).
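As a rough illustration of that “thin shim” model, here is a minimal function using the AWS Lambda Python handler convention; the event shape and field names are assumptions for illustration, not a description of any particular service’s contract.

```python
import json

def handler(event, context):
    """Entry point the platform invokes with a parsed event plus some runtime context."""
    name = event.get("name", "world")   # event structure depends on the trigger (assumed here)
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```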

Figuring out what the right abstractions are for things like persistent storage, communication, fault tolerance, and so on is a very active research area. This means that each cloud vendor gains a competitive advantage by deploying the latest research, which in turn means that proprietary systems remain the norm and that the offerings remain immature. I expect that it will settle down over the next decade, but there are so many changes coming on the hardware roadmap (think about the things that CXL enables, for one) that anything built today will look horribly dated in a few years.

                                                                                                                              1. 1

                                                                                                                                Many serverless frameworks are built upon Kubernetes, which is explicitly vendor-neutral. However, this does make your third point stronger: full buy-in to Kubernetes is required.

                                                                                                                                1. 2

                                                                                                                                  Anything building on Kubernetes is also implicitly buying into the idea that the thing that you’ll be running is a Linux binary (well, or Windows, but that’s far less common) with all of the minicomputer abstractions that this entails. I understand why this is being done (expediency) but it’s also almost certainly not what serverless computing will end up looking like. In Azure, the paid FaaS things use separate VMs for each customer (not sure about the free ones), so using something like Kubernetes (it’s actually ACS for Azure Functions, but the core ideas are similar) means a full Linux VM per function instance. That’s an insane amount of overhead for running a few thousand lines of code.

A lot of the focus at the moment is on how these things scale up (you can write your function and deploy a million instances of it in parallel!) but I think the critical thing for the vast majority of users is how well they scale down. If you’re deploying a service that gets an average of 100 requests per day, how cheap can it be? Typically, FaaS things spin up a VM, run the function, leave the VM running for a while, and then shut it down if it’s not in use. If your function is triggered, on average, at an interval slightly longer than the idle timeout after which the provider shuts down the VM, then the amount that you’re paying (FaaS typically charges only for CPU / memory while the function is running) is far less than the cost of the infrastructure that’s running it.
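A hedged back-of-the-envelope sketch of that scale-down gap, with made-up numbers (runtime, idle timeout) purely for illustration:

```python
# Illustrative only: compare billed compute with the VM time the provider burns
# when each invocation arrives just after the warm VM has been shut down.
requests_per_day = 100
fn_runtime_s = 0.2          # assumed billed CPU time per invocation
vm_idle_timeout_s = 600     # assumed time the provider keeps the VM warm after a run

billed_s = requests_per_day * fn_runtime_s
provider_vm_s = requests_per_day * (fn_runtime_s + vm_idle_timeout_s)

print(f"billed compute per day:  {billed_s:.0f} s")
print(f"VM time actually spent:  {provider_vm_s:.0f} s")
print(f"infra vs billed ratio:   {provider_vm_s / billed_s:.0f}x")
```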

                                                                                                                              2. 2

S3 started as a proprietary protocol but has become a de facto industry standard. I don’t see why the same couldn’t happen for Lambda.
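One way to see that de facto standardisation: most S3-compatible object stores accept the ordinary AWS SDK client if you point it at a different endpoint. A sketch with boto3, where the endpoint, bucket, and credentials are placeholders rather than real values:

```python
import boto3

# Any S3-compatible service can stand in for AWS here; only the endpoint changes.
s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                   # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)
s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello")
print(s3.get_object(Bucket="my-bucket", Key="hello.txt")["Body"].read())
```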

                                                                                                                            1. 8

I feel like it’s time for my obligatory “use Vivaldi” comment. The core functionality of Chromium is really great from a technical perspective. What you don’t want is all the “Google phone home” nonsense. So just don’t use it. Vivaldi comes with ad/tracker blockers built in.

                                                                                                                              1. 7

                                                                                                                                Ungoogled Chromium is a much better choice for addressing the concerns expressed in the webcomic.

                                                                                                                                1. 0

                                                                                                                                  I don’t really think a cryptocurrency scam is the answer here tbh.

                                                                                                                                  1. 15

                                                                                                                                    I don’t use Vivaldi, but I think you may be thinking of Brave?

                                                                                                                                    1. 9

                                                                                                                                      Ah sorry, I usually see people shilling for Brave in these kinds of discussions, so I mixed them up.

I really don’t think a ~~cryptocurrency scam~~ closed-source proprietary browser is the answer here tbh.

                                                                                                                                      1. 3

                                                                                                                                        Chrome and Firefox are open-source on paper only. You have no say in how they’re developed unless you work for a trillion dollar advertising company, can’t contribute code unless it’s your full time job, and can’t ever hope to audit even a tiny fraction of the tens of millions of lines of code even if it’s your full time job.

                                                                                                                                        1. 2

                                                                                                                                          Can’t comment on Chrome, but for Firefox I can personally tell you that is not true. I scratched my own itch in the developer tools when I was in high school. Was it easy? No. Was it impossibly difficult? Also no.

                                                                                                                                          (In fairness though this was easier with the developer tools than with, say, Gecko core.)

                                                                                                                                        2. 3

                                                                                                                                          Their explanation of why their UI is not open source. To be perfectly honest, though, you’re clearly not coming from a place of honest exploration or debate.

                                                                                                                                          1. 14

                                                                                                                                            I’m coming from a place of dismissing closed-source browsers. I don’t think that’s unwarranted. We have really good open-source browsers.

                                                                                                                                            When the concern is that Chrome is phoning home and invading your privacy, it seems absolutely bonkers to me to suggest that switching to another closed-source browser is the solution.

                                                                                                                                          2. 1

                                                                                                                                            At this point you seem to have an axe to grind. We get it. FOSS is good and crypto is bad here on Lobsters.

                                                                                                                                            1. 1

Serious question: aren’t cryptocurrencies subjectively bad? Some waste energy, some things don’t work, a lot of them are scams, and the main use is for illegal trades. Is there something amazing somewhere that I am missing?

                                                                                                                                              1. 3

                                                                                                                                                We’re going far off-topic from the OP so I don’t think there’s value in starting that discussion here. If you’d like to discuss the topic, I’m always open to DMs discussing stuff, though I get busy and may take time to respond.

                                                                                                                                        3. 3

                                                                                                                                          There is nothing crypto-related with Vivaldi. It’s just a browser.

                                                                                                                                      1. 7

                                                                                                                                        Then theoretical limit a server can support on a single port is 2⁴⁸ which is about 1 quadrillion because:

Over IPv4. This goes to 2¹⁴⁴ over IPv6, which far exceeds the estimated 2⁸⁰ atoms in the entire observable universe.

                                                                                                                                        1. 10

According to https://educationblog.oup.com/secondary/maths/numbers-of-atoms-in-the-universe, it’s not 2^80 but on the order of 10^80 (2.4 * 10^78, to be more exact, as taken from the article), which works out to approx. 2^260, so IPv6 is still not enough to cover it all. But I agree with the general idea that the IPv6 address space should be sufficient for humankind in the observable future.
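For anyone who wants to check the arithmetic in this sub-thread (2^48 is client IP times client port over IPv4, 2^144 the same over IPv6), a quick sketch:

```python
import math

per_port_v4 = 2 ** (32 + 16)   # (client IP, client port) pairs per server port over IPv4
per_port_v6 = 2 ** (128 + 16)  # same over IPv6
atoms = 2.4e78                 # estimate quoted from the linked article

print(f"IPv4 tuples per port: 2^48  = {per_port_v4:.2e}")
print(f"IPv6 tuples per port: 2^144 = {per_port_v6:.2e}")
print(f"atoms in observable universe ~ 2^{math.log2(atoms):.0f}")  # ~2^260, still > 2^144
```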

                                                                                                                                          1. 4

                                                                                                                                            I hope we get there one day. For now I’m stuck unsupported: https://test-ipv6.com/ 0/10 :(

                                                                                                                                            1. 5

                                                                                                                                              Surprised to see I’m 0/10, too. As far as I know this has never impacted me. Given that the IPv4 scarcity worries turned out like the Peak Oil scare did, can someone remind me why I should still care about IPv6? (I’m only half joking)

                                                                                                                                              1. 8

IPv4 scarcity is not the only reason to care about v6 (having millions of IPs per server can be very useful, for just one example), but it’s also not a fake problem. v4 scarcity is getting worse all the time. T-Mobile LTE users don’t even get a NAT’d v4 anymore, just a sort of edge translation to be able to mostly reach v4-only hosts (this breaks a lot of WebRTC stacks if the server is v4-only, for example).

                                                                                                                                                1. 2

                                                                                                                                                  T-Mobile LTE users don’t even get a NAT’d v4 anymore

                                                                                                                                                  Forgive me for being ignorant here, but I thought NAT was pretty much the bandaid for mitigating the symptoms of IPv4 address exhaustion (at least on the client side). Is there some fundamental limit to how many users can be behind a NAT, and is T-Mobile doing a type of translation different from standard NAT in order to get around it?

                                                                                                                                                  1. 5

                                                                                                                                                    Yes, T-Mobile isn’t using a standard NAT or CGNAT at all. They use 464XLAT if you want to look up the tech.

                                                                                                                                                    1. 1

There are limits to NAT, but it’s mostly either port exhaustion or too much translation happening on a single router. Layered NAT can solve that, but it degrades performance. There is probably a point at which IPv6 would be cheaper to run than layers and layers of NAT, but I don’t know if that time is coming any time soon.
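A rough sketch of what “port exhaustion” means for a shared IPv4 address behind (CG)NAT; both numbers below are assumptions for illustration:

```python
# Each shared public IPv4 address offers at most ~64k source ports for translated
# flows (per destination, depending on the NAT type), so the number of concurrent
# connections per subscriber bounds how many subscribers can share one address.
usable_ports = 65535 - 1024        # ephemeral range available for translation (assumed)
flows_per_subscriber = 300         # concurrent flows for a busy client (assumed)

print(usable_ports // flows_per_subscriber)   # ~215 subscribers per shared IPv4 address
```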

                                                                                                                                                      1. 0

CGNAT means you share an IPv4 address; it makes hole punching even worse, but most things can be made to work.

                                                                                                                                                  2. 3

                                                                                                                                                    10/10 here

                                                                                                                                                    1. 1

                                                                                                                                                      Thanks for the link — that’s new to me. I get 10/10 at home; I’m not fond of Comcast/Xfinity the company but I’m happy to see they’re giving me up-to-date service.

                                                                                                                                                      So does this mean that I could run a publicly-accessible server from home without NAT shenanigans and with a stable IPv6 address?

                                                                                                                                                      1. 2

                                                                                                                                                        Yeah. Once enabled, your router (depending on the router) will usually delegate addresses in the assigned /64 to devices on your network. You can live a NAT-free life! Just be careful to firewall.
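A small sketch of what a delegated /64 gives you, using Python’s ipaddress module; 2001:db8::/64 is the documentation prefix, standing in for whatever your ISP actually delegates:

```python
import ipaddress

prefix = ipaddress.ip_network("2001:db8::/64")   # example prefix, not a real delegation
host = ipaddress.ip_address("2001:db8::1234")    # an address a device might pick inside it

print(prefix.num_addresses)   # 18446744073709551616 addresses for one home network
print(host in prefix)         # True: globally routable, no NAT required (firewall still advised)
```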

                                                                                                                                                  1. 11

                                                                                                                                                    People get the browsers they deserve.

                                                                                                                                                    We’d see more competition in this space, but developers have voted with their feet every time Google or whoever implements a feature and dangles it out. Developers wanted a more complex web and more complicated services–well guess what that means for browser complexity? Webshits played themselves. Don’t complain about browser monocultures enabling spying at the same time you support endless feature creep and evergreen standards.

                                                                                                                                                    We’d see better privacy, but consumers flocked to hand over their digital everything to anybody willing to dangle a blinking cat picture or whatever in their face. People who don’t take responsibility for behaviors that, by construction, undermine their freedom and privacy shouldn’t act surprised when they lose either.

                                                                                                                                                    1. 8

                                                                                                                                                      The domination of Chrome came way before “stuff only works in Chrome” things started becoming the norm. Chrome got popular cuz it was super fast and had a smooth UI.

                                                                                                                                                      I do understand that an expensive-to-implement standard plays into the lock-in effect… I do think it’s not super cut and dry, though. Flash existed, plugins existed… maybe the web shouldn’t have any of those either, but lots of people wanted them. And I’m honestly glad I don’t have to download “the netflix application”.

I don’t know how you square the circle of “people want to use interactive applications in a low-friction way” with “we should not make web browsers Turing machines”, without the gaps being filled by stuff that could be worse. I don’t have a good solution though.

                                                                                                                                                      1. 6

                                                                                                                                                        Do you really think developer preferences played a large role in Chrome’s dominance of the market? Seems to me that Google created their market share through PR and advertising, especially on their own sites, and from their control of the default apps on Android.

                                                                                                                                                        1. 4

                                                                                                                                                          This is where the glib “nobody actually cares about privacy” rejoinder comes from. When it comes down to it, consumers don’t actually seem to care about privacy. I don’t know if it’s an education thing (“hey look your personal data is being sold to target ads to you”) or maybe people really don’t care and it’s odd folks like us that do. These days I genuinely believe that data privacy is a niche interest and the average user doesn’t care as long as they can connect with their friends and loved ones.

                                                                                                                                                          At the very least GDPR style disclosures of what data is being collected can help folks who are willing understand what data they are giving up.

                                                                                                                                                          1. 12

                                                                                                                                                            This comic tried to address it near the end but I think the big problem is that most consumers don’t really understand what it means to lose something as nebulous as ‘privacy’. If you asked if they want a webcam in their bedroom streaming data to Google / Amazon / Facebook, that’s one thing, but having one of these companies track everything that you do on the web? It’s much harder to understand why that’s bad. As the comic explains, the real harm comes from aggregation and correlation signals. Even then, most of the harm isn’t done directly to the individual who is giving up their privacy.

Bruce Schneier had a nice example attack. If people see ‘I have voted’ badges on their friends’ social media things, then they are around 5-10% more likely to vote. If you track browsing habits, especially which news sites people visit, then you can get a very good estimate of someone’s voting intention. You can easily correlate that with other signals to get an address. In a constituency with a fairly narrow margin (a lot of them in states with effectively two-party systems), you can identify the people most likely to vote for candidates A and B. If you hide ‘I’ve voted’ badges from the social media UIs for people who lean towards B and show them for people who lean towards A then you have a very good chance of swinging the election.

                                                                                                                                                            That said, the fact that a person using Chrome / Facebook / WhatsApp / whatever is giving that company a hundred-millionth of the power that they need to control the government in their country is probably not a compelling reason for most people to switch. Individually, it doesn’t make much of a difference whether you use these services or not.

                                                                                                                                                            Unless you’re a member of a minority, of course. Then you have to worry about things like the pizza-voucher attack (demonstrated a few years ago, you can place an ad with Google targeting gay 20-somethings in a particular demographic with a voucher that they can claim for discounted pizza delivery. Now you have the names and addresses of a bunch of victims for your next hate crime spree).

                                                                                                                                                            1. 9

                                                                                                                                                              I think the 2 main reasons people don’t care about privacy are that

                                                                                                                                                              • it simply doesn’t make a huge difference in their lives whether their right to privacy is respected or not. Most people simply have bigger fish to fry and don’t have the cycles to spare on things that may be bad but aren’t actively causing them harm.
                                                                                                                                                              • technology companies like Google, Meta, etc. have done a great job of presenting their software as “free”. I think most people think of signing up for Gmail or Instagram like they would getting a driver’s license or library card; they’re just signing up for some public service. These companies do the most to avoid framing this for what it is: an exchange of value, just like any other. You’re paying with your data, and you’re getting access to their service in exchange for that data. As long as using “free” software isn’t understood by consumers as a value exchange, they will never demand protection of their right to privacy and data dignity.

                                                                                                                                                              As someone who works in the data privacy and governance space, it’s encouraging to see growing awareness of these issues at the consumer and government regulation level. Hopefully with enough movement from the government and private sector, we can keep fighting “Big Tech’s” deceptive narratives around data and their software.

                                                                                                                                                          1. 11

                                                                                                                                                            Hardly the smallest when things like https://github.com/jcalvinowens/asmhttpd or https://github.com/nemasu/asmttpd exist.

                                                                                                                                                            Heck, even my own hittpd is only 124k statically-linked with musl and I didn’t optimize for size.

                                                                                                                                                            1. 4

                                                                                                                                                              It would be interesting comparing these servers (thttpd, asmhttpd, asmttpd, hittpd, etc) along a few dimensions (latency, throughput, etc). I might try this over the weekend if I get a chance.
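If anyone wants a starting point, here is a very rough latency sketch (not a proper benchmark: no warm-up, no concurrency, no throughput run); it assumes whichever server is under test is already serving a small file at the placeholder URL below:

```python
import time
import urllib.request

URL = "http://127.0.0.1:8080/index.html"   # placeholder: wherever the server under test runs

samples = []
for _ in range(200):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    samples.append(time.perf_counter() - start)

samples.sort()
p50 = samples[len(samples) // 2] * 1000
p99 = samples[int(len(samples) * 0.99)] * 1000
print(f"p50 {p50:.2f} ms, p99 {p99:.2f} ms")
```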

                                                                                                                                                              1. 1

I’d love to see how they compare! I’m betting the one written in C gets a speed boost, but it might depend on the optimization level.

                                                                                                                                                            1. 24

                                                                                                                                                              Then there’s a whole different help document in the man file.

                                                                                                                                                              1. 19

                                                                                                                                                                Sometimes --help opens a pager with the man file.

                                                                                                                                                                Then there’s the GNU info system; info cat is a whole different document than man cat.

                                                                                                                                                                1. 5

                                                                                                                                                                  “Modern” software doesn’t tend to come with a man page IME, and most of the new package managers don’t even have a mechanism for packages to provide man pages. Increasingly, documentation websites and a --help/-h/-help/help flag/subcommand is the only documentation you’ll get.

                                                                                                                                                                  1. 12

I think this is because people can lie to themselves that their website is adequate documentation, but it’s harder to be equally self-deluded when you have to write a text-only man page. A couple of exhortations and an example look so much less impressive stripped of hero graphics.

                                                                                                                                                                    1. 4

                                                                                                                                                                      Yes. I basically agree with this trend. Man is one too many ways to do it. Everything should just support -h/--help and include a prominent URL at the top if there’s more help than can fit in there.
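For illustration, that convention might look like this with Python’s argparse (the tool name and docs URL are made up); argparse supplies -h/--help automatically:

```python
import argparse

parser = argparse.ArgumentParser(
    prog="exampletool",   # hypothetical tool name
    description="Does example things. Full documentation: https://exampletool.invalid/docs",
)
parser.add_argument("-v", "--verbose", action="store_true", help="print extra detail")
parser.add_argument("path", help="file to operate on")

args = parser.parse_args()   # -h/--help prints the description, including the URL
print(args.path, args.verbose)
```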

                                                                                                                                                                      1. 26

                                                                                                                                                                        I’m exactly the opposite. My first instinct is always man <command> generally because I want more info than -h/--help should provide. I always get very annoyed when a tool doesn’t come with manpages. --help isn’t sufficient imo.

                                                                                                                                                                        1. 10

                                                                                                                                                                          Especially when the site disappears from the web a few years later.

                                                                                                                                                                          1. 1

                                                                                                                                                                            I don’t want the accessibility of documentation to depend on an external program.

                                                                                                                                                                            1. 1

                                                                                                                                                                              I’m confused, are you referring to a web browser or man(1) here? Or both, and you’re advocating plain text docs?

                                                                                                                                                                              1. 2

I guess both, but it was primarily aimed at man. If the program just spits out its help, you can pipe it into your favorite pager and be happy. Having a man page creates a dependency and limits the program to being installed by the system package manager.

                                                                                                                                                                          2. 20

                                                                                                                                                                            No, please don’t do that. I spend most of my day working on an air-gapped system and going to a URL simply isn’t straightforward. Don’t just assume people have an Internet connection. Also, the version of a man page will always match the version of the installed software.

                                                                                                                                                                            It’d be better if there were usage and version commands that would be like man but output basic usage only and version details, respectively.

                                                                                                                                                                            1. 4

                                                                                                                                                                              There’s tldr. No idea if it works behind an air gap, but I guess you could download the database locally.

                                                                                                                                                                            2. 3

If I had to give one word of advice on how to become a hacker, it would be “manpages”. The Unix mastery of those who read manpages is on a whole other level than that of those who think piping to grep with simple string matching is an advanced hack.

                                                                                                                                                                            3. 1

                                                                                                                                                                              Is there a manpage writing flow that you prefer? Every time I’ve wanted to write a manpage I’ve shied away. I usually settle with writing Markdown or HTML documentation and just shipping that instead.

                                                                                                                                                                              1. 3

                                                                                                                                                                                You can use pandoc to convert from Markdown to man pages. It grabs some of the details from a magic title line (search for pandoc_title_block on the page).
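A hedged sketch of that flow, shelling out to pandoc from Python (assumes pandoc is installed; the file names and title-block contents are made up, and the exact title line you want may differ):

```python
import subprocess

markdown = """% exampletool(1) exampletool 0.1
% Example Author
% January 2023

# NAME

exampletool - do example things

# SYNOPSIS

**exampletool** [*options*] *path*
"""

with open("exampletool.1.md", "w") as f:
    f.write(markdown)

# Convert the Markdown (with its pandoc title block) into a man page.
subprocess.run(
    ["pandoc", "--standalone", "--from", "markdown", "--to", "man",
     "exampletool.1.md", "--output", "exampletool.1"],
    check=True,
)
```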

                                                                                                                                                                                1. 1

                                                                                                                                                                                  I’ve done that in the past but usually end up just converting it to HTML instead. I wish there was a version of man that didn’t rely on troff/groff and didn’t use man’s somewhat baroque section system, but there’s not much else in its place. I have really enjoyed using CHM files in the past for offline documentation. For long trips without connectivity (e.g. train, plane) I’ve downloaded language and library docs in the past and chugged along on projects.

                                                                                                                                                                                2. 2

                                                                                                                                                                                  I tend to just write mandoc directly and use the lint option of mandoc to check it (I have now started adding it to CI, so man pages get checked on commit) but LLVM uses Sphinx to generate the man pages. The Sphinx docs are a lot easier to edit than mandoc (troff macros) and seem to generate good output, but they require a separate toolchain to build the man pages so you either need that as part of your build flow or you need to commit the generated pages to your source repo.

                                                                                                                                                                            1. 6

                                                                                                                                                                              What an excellent enthusiast video. Took me right back to squeaks and squawks.

It also highlighted the not-hugely-publicised fact that BT, the UK’s monopoly telco infrastructure provider, is switching off POTS in 2025 and replacing it with VoIP to every home, with (according to the video) no real plan for backup emergency calling and no governmental requirements for mobile phone service providers to provide one either. And apparently in the last few big power cuts over this winter, due to increasingly frequent adverse weather events, the mobile networks just stopped working because their masts didn’t have enough backup power. So I did a bit of quick reading around. While the operators are supposed to provide Optical Terminator Kits with batteries (I’m sure that’ll be 100% good in all cases, for sure, no worries about that at all), it seems we don’t even know yet exactly what’s going to happen for telecare services for the elderly and vulnerable; they’re currently testing that, even though they’ve already fixed the deadline at 2025 - but it’s OK because “Ofcom has also made it a requirement for telecoms providers to identify people who are reliant on their landline and provide them with a free back-up option in case there’s a power outage.” So that’s OK then, they’ll definitely make sure they have that covered.

                                                                                                                                                                              Regulators asleep at the wheel, governments running with their arms full of cash full speed into the wall.

                                                                                                                                                                              1. 2

                                                                                                                                                                                Wait, do you have any proof for these allegations? It makes sense to transition off POTS in most countries with decent POTS infrastructure (like the UK). Maintaining POTS infrastructure is quite costly, and moving to IP based connectivity makes a lot of sense in this day and age where most of the demand for networks is for IP based networks. The closest I found was https://www.ispreview.co.uk/index.php/2022/03/isp-bt-pauses-uk-digital-voice-rollout-after-consumer-complaints.html which seems to indicate that BT is well-aware of some of the issues surfacing from the POTS transition and is now taking steps to make sure there aren’t any service disruptions. So what’s the basis of this negativity?

                                                                                                                                                                                1. 4

Hello! Err, proof for “allegations”? Umm, I reported what it said in the video (note “apparently”), and pointed to a website I found about it, which is Which?, the main UK consumer support website with any real credibility. (The bit about the OTK was on a BT support forum, a quick Google away.) Not sure how far I’m making allegations or particularly need to prove them, but if there’s factual inaccuracy in what I posted, I accept and apologise.

                                                                                                                                                                                  Basis of “negativity”? Nearly 50 years’ experience of life lived mostly in UK under a succession of governments who time and again champion capital-favouring measures at the expense of public service.

                                                                                                                                                                                  To me this seems a perfect example of that. I’m sure it does make financial sense to transition off POTS, especially for the bottom line of the private company running the infrastructure that it was gifted from public ownership by the UK government in its fire sale in the ‘80s, regardless of whether it makes sense for the public it’s supposed to serve; I don’t take that as a prima facie reason for allowing it (because I believe that some things are more important than finance and governments should work such that public service benefit should trump private profit every time) but I could even accept it if there was a demonstrated and proven strategy for maintaining the public service elements (e.g. emergency calls and telecare). It said in the video that there aren’t, so I commented on that, and I did some Googling to find out if this was true, and posted a link to what I found.

                                                                                                                                                                                  Thanks for the link, it’s interesting - I’m sorry to have to say though that I’ve heard things like “private company X is taking steps to make sure there aren’t any service disruptions” so many times before that I just don’t have any faith in such statements, particularly under the current government whose public service priority is reduction in that service at the cost of public benefit to the gain of private interests, as shown repeatedly in the PPE and NHS disasters and manipulation of the last couple of years. If you do have faith in these kinds of statements and the private companies and governments that make them, great, I genuinely hope you’re right, but I’m afraid I’m not holding my breath. Time will tell whether “steps” taken will actually do anything at all, or whether they fail completely to provide any benefit to the public (in this case, by definition vulnerable members of the public in particular) and whether the companies “responsible” will have to deal with even censure, let alone any strand of accountability, if not.

                                                                                                                                                                                  1. 3

Only time will tell, but I think it is very obvious that VoIP is much more complicated and easier to break compared to POTS. Of course it’s possible to make VoIP or even cellular as stable as POTS, especially in times of high stress (natural disasters, etc). The question becomes: will that happen? The incentives are not really there for the providers to do that. POTS wasn’t originally designed to be so reliable, but its simplicity made it pretty easy comparatively. VoIP necessitates offloading some of the reliability onto end users (battery maintenance at the very least), which points to things not going well for many, compared to POTS where end-user maintenance was near zero.