Threads for glesica

  1. 5

    We are the generation that sees the rot under the surface. That sees walled-garden web silos and sideload-locked devices as steps backwards. We had it all and lost it.

    1. 7

      It’s human nature to look back to a time when things were novel and new. Such memories are inherently radiant with the glow of nostalgia.

      I have that too. Ask me about the Amiga or the Atari 8-bit sitting behind me :)

      However, as I mention in my comment above, while I totally get it, I also think there’s value in choosing a different path. I personally feel like the tech world is drowning in negativity, which can be cathartic and habit-forming in and of itself.

      I’m NOT judging anyone, just wondering if perhaps choosing differently might be something to consider.

      1. 4

        We had it all and lost it.

        I disagree. Editing AUTOEXEC.BAT to play a game wasn’t something we “had”, it was something that was forced upon us by circumstance. And the article actually makes that quite clear. No one wanted it, we just didn’t have a choice.

        1. 5

          I was really hopeful during the “downhill battle” era for FOSS and commons.

          Then came three big blows:

          1. Smartphones. We were caught up or ahead on the desktop, then the arena was moved to the pocket with these super locked down computers.
          2. Web silos. Facebook, Twitter, Instagram, MySpace, YouTube ate the web. This specifically is what I mean as a sense of “loss”. I know there is IRC and mailing lists for some projects but there used to be for all kinds of mainstream topics.
          3. DRM-laden streaming sites like Spotify and Netflix.

          I feel that those of us who went through edit autoexec.bat, notepad index.html, and ./configure && make have a different perspective on this. That’s not to say that those three were good things; they were bad, but they were markers. I’ve just noticed that more people from that era share the perspective of how messed up the current tech stacks are. They paved paradise and put up a parking lot.

          The advantage of the current era is easier UI. Silents and boomers were overjoyed in the era of iPhone and Facebook since they were finally “let in”, and millennials and younger grew up in a Truman Show world where Insta and YouTube were as established and inescapable as TV, radio, roads, and grocery stores had been for us. It’s like that old story of what a fish thinks about water.

          (With plenty of individual exceptions in all directions, because I’m talking cohort trends here; we have our pioneer elders, our non-nerdy peers, and people of all ages who want to see behind the curtain of how the tech world works.)

      1. 18

        The post by the author (who I believe is a lobste.rs member) ends on a sad note. I don’t think the publicity alone caused this crash in enthusiasm - I’m guessing the internet was the internet and people were unkind to them, which I can totally see killing enthusiasm for an endeavor, especially if the spotlight was shone too early.

        To the author - I hope, once this 15 minutes of hell has passed, your motivation comes back and you keep working on it, since there must have been interesting problems in that space you wanted to solve.

        1. 21

          Generally I’d agree with this sentiment.

          But the author is known for being rather obnoxious and rude towards other projects he disagrees with, and was even banned from lobsters for this reason. So in this case I don’t feel too bad.

          1. 28

            He’s also made significant effort - and improvement! - on those fronts.

            I have first-hand experience of interacting with him on IRC, as a paying customer with questions about his products. I wish all vendors were as approachable, polite, and direct as he is.

            Re. the note on his ban - I too find myself disappointed in the world (of software) at times, as do many of my friends and colleagues. I note, though, that few people take the step of launching their own commercial products as a means of improving it.

            1. 8

              commercial products

              commercial and ethical products

              They might be opinionated, but they are still free software. That’s really not typical nowadays.

              I have noticed some introspection, e.g. https://drewdevault.com/2022/07/09/Fediverse-toxicity.html.

              I too have issues dealing with my frustration and textual interactions don’t make it any easier. Without easily accessible peers to discuss things with, it falls to the online community to help people cultivate their opinions.

              I am thankful that many people here have the patience.

              1. 5

                They might be opinionated, but they are still free software. That’s really not typical nowadays.

                Agreed; and that’s a large part of the reason I made the switch to sourcehut from GitLab.

            2. 12

              In this case you are the one being obnoxious and rude. You don’t know the guy, don’t spread rumors and hate.

              1. 12

                I agree that it’s time lobsters moved on from this and stopped bringing up DeVault’s past mistakes.

                However, this isn’t a “rumor” or “hate”. They were simply stating a well-known fact about Drew’s aggressiveness and rudeness, one which I’ve also experienced and seen others experience. (To be fair, I’ve noticed his behavior has improved a lot over the past 12 months.)

                Jeez, I really look forward to the day when lobsters can discuss Drew’s work without dragging up shit from a year ago.

                1. 15

                  I think it certainly is hate. These comments seem a lot like targeted harassment to me. Most of the commenters don’t seem to have first-hand experience with what they are talking about. They also appear whenever Drew does something good, which just detracts from everything.

              2. 11

                The reasons were not made public and it’s bad form to attack someone who can’t respond.

                1. 7

                  Ah, I am no longer as active on lobste.rs as I used to be and I missed that Drew got banned. I just searched through his history but didn’t find the smoking gun that got him banned. Anyhoo, sad all around.

                  1. 16

                    There’s some context in this thread, though it doesn’t provide an exact reason.

                    I had a long response to his Wayland rant because I think the generalizations in that post were simply insulting at best and it drove me crazy.

                    He is a clever engineer, but he has a tendency to invite controversy and alienate people for no reason. After that rant of his, I lost any desire to ever engage with him again or use his products if I can help it, which may be extreme, but after numerous similar exchanges I think it’s unfortunately necessary.

                    1. 12

                      Yeah, I’m surprised and somewhat sad. He’s difficult and abrasive sometimes, but I respect his engineering.

                      1. 3

                        im so tired of this sentiment

                        1. 17

                          im so tired of this sentiment

                          Saying you’re tired of another person’s take without giving any reason is a pretty vacuous and unnecessary comment. The button for minimizing threads is there for a reason.

                          1. 18

                            I’m also tired of the sentiment that allows someone to be shitty just because they’re good at solving a problem.

                            1. 2

                              Unfortunately (?) you can’t disallow someone from being shitty.

                              1. 7

                                One can certainly exclude them from a group of friends one cares about.

                            2. 2

                              This comment is inappropriate. I am sure that the tone and attitude here are not a fit for the community we are aiming for on lobsters.

                            3. 2

                              The opposite leads to bad engineering decisions.

                              1. 12

                                Health care and related fields have a concept of the quality-adjusted life year, which is used to measure impacts of various treatments, or policies, by assigning a value to both the quantity and quality of life. There are grounds for critiquing the way the concept is used in those fields, but the idea probably ports well to our own field where we could introduce the concept of the quality-adjusted code unit. Let’s call it QALC to mirror QALY for life-years.

                                The gist of the argument here is that while there are some people who produce an above-average number of QALCs, if they are sufficiently “abrasive” they may well end up driving away other people who would also have produced some number of QALCs. So suppose that a is the number of QALCs produced by such a person, and l is the number lost by their driving away of other people. The argument, then, is that in many cases l > a or, more simply, that the person’s behavior causes a net loss overall, even when taking quality (or “good engineering” or whatever synonym you prefer) into account.

                                My own anecdotal experience of involvement in various open-source projects is that we often drastically overestimate the “abrasive” person’s QALCs and underestimate the QALCs of those who are driven away, making it almost always a net loss to tolerate such behavior.
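                                The l > a comparison can be made concrete with a toy calculation. To be clear, all of these numbers are invented purely for illustration; they are not measurements of anything.

```python
# Toy QALC bookkeeping: does the prolific-but-abrasive contributor
# come out ahead once attrition is counted?
a = 150                  # QALCs the abrasive person produces
avg = 40                 # QALCs a typical contributor produces
driven_away = 4          # contributors who leave because of the behavior

l = driven_away * avg    # QALCs lost to attrition: 4 * 40 = 160
net = a - l              # 150 - 160 = -10, a net loss

print(net)
```

                                Even with the abrasive person producing nearly four times the average output, a handful of departures flips the balance.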

                                1. 5

                                  I’m 100% OK with bad engineering decisions (within reason) if it means my life is more pleasant. If hanging out with brilliant assholes makes your life more pleasant, then by all means, go for it!

                                  1. 2

                                    It took me 20 minutes to pay for something on my iPhone today because the app wouldn’t let me scroll down to the “submit” button, and the website wouldn’t either until I looked up how to hide the toolbar on Safari. That doesn’t make my life more pleasant.

                                    Besides, you aren’t forced to hang out with people just because they are allowed to post.

                                    1. 2

                                      By allowing them to post you allow them to hang out in your and the other users’ brains.

                                  2. 6

                                    there is no tradeoff

                                    we don’t have to accept abusive or toxic people in our communities

                                    1. 5

                                      I think this mindset is what has led to the success of the Rust project in such a short span of time. It turns out that having a diverse community of respectful individuals invites more of them and leads to better problem solving.

                                    2. 3

                                      Are you implying that only difficult and abrasive engineers do good work? Because I have personal experience of the opposite, not to speak of numerous historical accounts.

                                      1. 2

                                        No.

                          1. 22

                            However, there should be no restriction on using Open Source software for training. Actually, what Microsoft does here, is the scalable version of someone learning from Open Source code and then starting a consulting business.

                            It is not the same. Computers do not “learn” in the way that humans do, they do not possess semantic understanding. Computers do not have the capacity for creativity that humans do. This argument only serves to further confuse people about what ML actually is and what it isn’t.

                            1. 4

                              But the argument is about whether looking at a bunch of source code is copyright infringement, not the nature of consciousness or something. I can’t think of how training a neural network to, say, identify possible authorship, using copyrighted images as a training set is any different from me studying a bunch of books of copyrighted images to gain enough expertise to do the same.

                              1. 1

                                No, the problem with this is not the “looking”, they already do that for their search engine and nobody cared.

                                The problem is the thoughtless regurgitation of other people’s code, without attribution, and without regard for whether it constitutes fair use or not. Since authorship has been scrubbed from the model, there is no way to determine for yourself if you are in violation, and MS absolves themselves of responsibility. It’s a time bomb.

                                See this copilot is stupid for a deeper exploration of this problem.

                                1. 1

                                  But the quote in your comment only deals with “looking”.

                                  1. 1

                                    there should be no restriction on using Open Source software for training

                                    Not “looking”, training. Training what? An ML model. For what purpose? Producing code.

                              2. 3

                                Computers do not “learn” in the way that humans do, they do not possess semantic understanding

                                Now prove that humans possess semantic understanding

                                1. 5

                                  Now prove that humans possess semantic understanding

                                  Proof presupposes it.

                                  1. 3

                                    We have to. All this code doesn’t fit into our brains. We can’t even turn it off; you can’t read a piece of code without imagining how it will continue.

                                    Attention models will just embed whole files if that’s the best way to do it. They don’t care about scope.

                                1. 10

                                  This is one of the reasons I started using DuckDuckGo. It doesn’t have these garbage widgets that suddenly pop up 2 seconds after the page is ‘loaded’, making everything jump around and causing misclicks.

                                  1. 5

                                    Funny you should say that, because I have had that exact problem with DDG because of their “instant answers” or whatever they call it that pop in at the top of the results.

                                    1. 1

                                      DDG is similar in my opinion.

                                      At least they have a decluttered version - DDG Lite - that I switched to because I’m so fed up with the lack of results (only 10 after the initial search) plus other features I don’t like: “more results”, embedded image or video results above the actual results (there are already tabs for images and videos).

                                      I set up 2 keyword searches (in firefox) - one for lite and one for regular search pages.

                                      The good thing about keyword searches is that you can take full advantage of their URL parameter support to control the look, feel, and functionality (including turning instant answers off). Some of those options may no longer work, though most of them do.

                                  1. 2

                                    Good read! I remember wondering about this when Go came out. I didn’t think too hard about it, but it seemed like a potential problem. But now, thinking through the things that actually have to go wrong, I absolutely agree with the author.

                                    1. 3

                                      I would like it if there were a user agent that took a more adversarial approach to the modern web; start with the idea that it shouldn’t leak data, and be willing by default to break applications by being principled. I guess I should get to coding with nyxt.

                                      1. 5

                                        The problem, I think, is that such a browser would need to gain a lot of market share to drive an improvement in the situation.

                                        Firefox broke the original browser monoculture by offering new features that allowed people to do things they couldn’t do before (like tabs, extensions, and general performance). Early Firefox users had to put up with breakage, but it was worthwhile because they got new features. Then, as Firefox got more popular, sites started working better on it because their owners didn’t want to lose traffic.

                                        It just so happened that Firefox was also philosophically superior to IE (for many of us). Unfortunately, “philosophically superior” alone simply isn’t a good enough reason for most people to put up with breakage. And yes, I consider (most) data leakage a philosophical issue because it doesn’t (usually) impact an individual directly, it impacts groups, in the aggregate.

                                        So if someone came up with a browser that didn’t leak any data, but added few or no other (widely desirable) features, then sure, a handful of people would use it, but not enough to force site owners to make their sites work with it. So the breakage would never go away and the broader situation would remain unchanged.

                                        Anyway, my point is that a highly principled stand doesn’t seem like a great strategy in this case, if you actually want to improve the wider situation. If, on the other hand, you just don’t like the idea of personally leaking data, then sure, go for it!

                                        1. 4

                                          Oh, for sure. I am under no illusions about my relationship with the larger world of technology. My manager at Apple used to tell me “Apple would go broke if we made software for you” and he’s right. But, nyxt exists, so …. hmmm.

                                          1. 1

                                            I haven’t gotten into nyxt development, but there’s a ton of stuff I’d like to add, starting with “cosmetic” element blocking (like uBlock Origin).

                                      1. 4

                                        The parallel between societies and software is a great find! The big thing that I disagree with though is:

                                        and a fresh-faced team is brought in to, blessedly, design a new system from scratch. (…) you have to admit that this system works.

                                        My experience is the opposite. No customer is willing to work with a reduced feature set, and the old software has accumulated a large undocumented set of specific features. The new-from-scratch version will have to somehow reproduce all of that, all the while having to keep up with patching done to the old system that is still running as the new system is under development. In other words, the new system will never be completed.

                                        In short, we have no way to escape complexity at all. Once it’s there, it stays. The only thing we can do to keep ourselves from collapse as described in the article is avoid creating complexity in the first place. But as I think is stated correctly, that is not something most organisations are particularly good at.

                                        1. 11

                                          No customer is willing to work with a reduced feature set…

                                          Sure they are, because the price for the legacy system keeps going up. They eventually bite the bullet. That’s been my experience, anyway. The evidence is that products DO actually go away, in fact, we complain about Google doing it too much!

                                          Yes, some things stay around basically forever, but those are things that are so valuable (to someone) that someone is willing to pay dearly to keep them running. Meanwhile, the rest of the world moves on to the new systems.

                                          1. 3

                                            Absent vandals ransacking offices, perhaps this is what ‘collapse’ means in the context of software; the point where its added value can no longer fund its maintenance.

                                            1. 1

                                              Cost is one way to look at it, but it’s much harder to make this argument in situations like SaaS. The cost imposed on the customer is much more indirect than when it’s software the customer directly operates. You need to have a deprecation process that can move customers onto the supported things in a reasonable fashion. When this is done well, there is continual evaluation to reduce the bleeding from new adoption of a feature that’s going away while migration paths are considered.

                                              I think the best model for looking at this overall is the Jobs To Be Done (JTBD) framework. Like many management tools, it can actually be explained to a software engineer on a single page rather than requiring a book, but people like to opine.

                                              You split out the jobs that customers need done, which are sometimes far removed from the original intent of a feature. These can then be mapped onto a solution, or the solution can be re-envisioned. Many people don’t get to the bottom of the actual job the customer is currently doing, and then they deprecate with alternatives that only partially suit the task.

                                            2. 4

                                              My experience is the opposite. No customer is willing to work with a reduced feature set

                                              Not from the same vendor. But if they’re lucky enough not to be completely locked in, once the first vendor’s system is sufficiently bloated and slow and buggy, they might be willing to consider going to the competition.

                                              It’s still kind of a rewrite, but the difference this time is that one company might go under while another rises. (If the first company is big enough, they might also buy the competition…)

                                            1. 9

                                              When you tell them the original game Elite had a sprawling galaxy, space combat in 3D, a career progression system, trading and thousands of planets to explore, and it was 64k, I guess they HEAR you, but they don’t REALLY understand the gap between that, and what we have now.

                                              Hi! I’m a young programmer. When someone says “this game had x, y, and z in (N < lots) bytes”, what I hear is that it was built by dedicated people working on limited hardware who left out features and polish that are often included in software today, didn’t integrate it with other software in the ecosystem that uses freeform, self-describing formats that require expensive parsers, and most importantly took a long time to build and port the software.

                                              Today, we use higher-level languages which give us useful properties like:

                                              • portability
                                              • various levels of static analysis
                                              • various levels of memory safety
                                              • scalability
                                              • automatic optimization
                                              • code reuse via package managers

                                              and the tradeoff there is that less time is spent in manual optimization. It’s a tradeoff, like anything in engineering.

                                              1. 8

                                                While I’m curious about the free-form, self-describing formats you’re talking about (and why their parsers should be so expensive), cherry-picking from your arguments, there are a lot of interesting mismatches between expectations and reality:

                                                which I hear is that it was built by dedicated people (…) and most importantly took a long time to build and port the software.

                                                Elite was written by two(!) undergraduate students, and ran on more, and more different, CPU architectures than almost any software developed today. It’s true that the ports were complete rewrites, but if Wikipedia is correct, these were single-person efforts.

                                                • various levels of static analysis
                                                • various levels of memory safety
                                                • code reuse via package managers

                                                Which are productivity boosters; modern developers should be faster, not slower than those from the assembly era.

                                                • scalability

                                                Completely irrelevant for desktop software, as described in the article.

                                                • automatic optimization

                                                If optimization is so easy, why is software so slow and big?

                                                My personal theory is that software development in the home computer era was sufficiently difficult that it demotivated all but the most dedicated people. That, and survivor bias, makes their efforts seem particularly heroic and effective compared to modern-day, industrialized software development.

                                                1. 7

                                                  My personal theory is that software development in the home computer era was sufficiently difficult that it demotivated all but the most dedicated people. That, and survivor bias, makes their efforts seem particularly heroic and effective compared to modern-day, industrialized software development.

                                                  I tend to agree. How many games like Elite were produced, for example? Also, how many epic failures were there? I’m not saying I know the answers, I just don’t think the debate is productive without them. Pointing to Elite and saying “software was better back then” is just nostalgia.

                                                  Edit: Another thought, how much crap software was created with BASIC for specific purposes and we’ve long since forgotten about it?

                                                  1. 2

                                                    I’m curious about the free-form, self-describing formats you’re talking about (and why their parsers should be so expensive)

                                                    I’m mostly talking about JSON. JSON is, inherently, a complex format! It requires that you have associative maps, for one thing, and arbitrarily large ones at that. Interoperating with most web APIs requires unbounded memory.

                                                    You respond to my assertion that building software in assembly on small computers requires dedication by saying:

                                                    Elite was written by two(!) undergraduate students,

                                                    But then say:

                                                    My personal theory is that software development in the home computer era was sufficiently difficult that it demotivated all but the most dedicated people.

                                                    It seems like you agree with me here. Two undergraduates can be dedicated and spend a lot of time on something.

                                                    [scalability is] Completely irrelevant for desktop software, as described in the article.

                                                    No, it’s not. Scalability in users is irrelevant, but not in available resources. Software written in Python on a 32-bit system can easily be run on a 64-bit one with all the scaling features that implies. There are varying shades of this; C and Rust, for instance, make porting from 32 to 64 bit easy, but not trivial, and assembly makes it a gigantic pain in the ass.

                                                    Which are productivity boosters; modern developers should be faster, not slower than those from the assembly era.

                                                    I don’t agree. These are not productivity boosters; they can be applied that way, but they are often applied to security, correctness, documentation, and other factors.

                                                    1. 1

                                                      I’m mostly talking about JSON. JSON is, inherently, a complex format! It requires that you have associative maps, for one thing, and arbitrarily large ones at that. Interoperating with most web APIs requires unbounded memory.

                                                      JSON is mostly complex because it inherits all the string escaping rules from JavaScript; other than that, SAX-style parsers for JSON exist, they’re just not commonly used. And yes, theoretically, I could make a JSON document that just contains a 32GB long string, blowing the memory limit on most laptops, but I’m willing to bet that most JSON payloads are smaller than a kilobyte. If your application needs ‘unbounded memory’ in theory, that’s a security vulnerability, not a measure of complexity.

                                                      (And JSON allows the same key to exist twice in a document, so associative maps are not a good fit)
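                                                      A quick illustration of the duplicate-key point, using Python’s stdlib json module (the document and key names here are invented):

```python
import json

# JSON permits the same key twice in one object; a dict-based parser
# silently keeps only the last value, while object_pairs_hook exposes
# every pair so nothing is dropped.
doc = '{"k": 1, "k": 2}'

as_dict = json.loads(doc)                            # first value is lost
as_pairs = json.loads(doc, object_pairs_hook=list)   # both pairs preserved

print(as_dict)   # {'k': 2}
print(as_pairs)  # [('k', 1), ('k', 2)]
```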

                                                      It seems like you agree with me here. Two undergraduates can be dedicated and spend a lot of time on something.

                                                      But it also puts a bound on the ‘enormous effort’ involved here. Just two people with other obligations, just two years of development.

                                                      No, it’s not. Scalability in users is irrelevant, but not in available resources. Software written in Python on a 32-bit system can easily be run on a 64-bit one with all the scaling features that implies. There are varying shades of this; C and Rust, for instance, make porting from 32 to 64 bit easy, but not trivial, and assembly makes it a gigantic pain in the ass.

                                                      As someone who has spent time both porting C code from 32 bit to 64 bit, and porting Python2 string handling code to Python3 string handling code, I’d say the former is much easier.

                                                      And that’s part of my pet theory for why modern software development is so incredibly slow: a lot of effort goes into absorbing breaking changes from libraries and language runtimes.

                                                      1. 3

                                                        You’re moving the goalposts. My initial point was that, given some source code and all necessary development tools, it’s far easier to expand a Python or Lua or Java program to use additional resources - such as but not limited to bus width and additional memory - than an equivalent assembly program. You’re now talking about something totally else: problems with dependencies and the constant drive to stay up to date.

                                                        my pet theory for why modern software development is so incredibly slow: a lot of effort goes into absorbing breaking changes from libraries and language runtimes.

                                                        I agree with you here, but it’s a complete non-sequitur from what we were talking about before. It’s at least as hard, if not harder, to port an assembly program to a new operating system, ABI, or processor as it is to port a Python 2 program to Python 3.

                                                        1. 1

                                                          You’re moving the goalposts. My initial point was that, given some source code and all necessary development tools, it’s far easier to expand a Python or Lua or Java program to use additional resources - such as but not limited to bus width and additional memory - than an equivalent assembly program.

                                                          That is most definitely true. I actually think the use of extremes doesn’t make this discussion any easier. I don’t think anyone wants to go back to assembly programming. But at the same time there’s obviously something wrong if it takes 200 megabytes of executables to copy some files.

                                                          1. 3

                                                            But at the same time there’s obviously something wrong if it takes 200 megabytes of executables to copy some files.

                                                            What’s wrong, exactly? The company providing the service was in business at the time of the rant, and there’s no mention of files being lost.

                                                            The only complaint is an aesthetic one. Having 200MB of executables to move files feels “icky”.

                                                            1. 2

                                                              There are externalities to code bloat, in the form of e-waste (due to code bloat obsoleting less powerful computers), and energy use. It’s not very relevant in the case of one 200MB file transfer program, but over an industry, it adds up horribly.

                                                              1. 4

                                                                Agreed. These externalities are not taken into account by producers or most consumers. That said, I think there are more important things to focus on before one gets to software bloat: increased regulation regarding privacy and accessibility among them.

                                                1. 16

                                                  In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs. Then Node started warning me about security problems in some of those libraries. I ended up taking some time finding alternative packages with fewer dependencies.
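For contrast, those same two tasks in a batteries-included language need zero third-party packages. In Python, for instance:

```python
import hashlib
import uuid

# SHA-256 digest: standard library, no dependencies pulled in.
digest = hashlib.sha256(b"hello").hexdigest()
# digest == "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

# Random (version 4) UUID: also standard library.
token = uuid.uuid4()
assert token.version == 4
```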

                                                  On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t. It’s cool to look at how tiny and efficient code can be — a Scheme interpreter in 4KB! The original Mac OS was 64KB! — but yowza, is it ever difficult to code that way.

                                                  There was an early Mac word processor — can’t remember the name — that got a lot of love because it was super fast. That’s because they wrote it in 68000 assembly. It was successful for some years, but failed by the early 90s because it couldn’t keep up with the feature set of Word or WordPerfect. (I know Word has long been a symbol of bloat, but trust me, Word 4 and 5 on Mac were awesome.) Adding features like style sheets or wrapping text around images took too long to implement in assembly compared to C.

                                                  The speed and efficiency of how we’re creating stuff now is crazy. People are creating fancy OSs with GUIs in their bedrooms with a couple of collaborators, presumably in their spare time. If you’re up to speed with current Web tech you can bring up a pretty complex web app in a matter of days.

                                                  1. 24

                                                    I don’t know, I think there’s more to it than just “these darn new languages with their package managers made dependencies too easy, in my day we had to manually download Boost uphill both ways” or whatever. The dependencies in the occasional Swift or Rust app aren’t even a tenth of the bloat on my disk.

                                                    It’s the whole engineering culture of “why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application, and then implement your glorified scp GUI application inside that, so that you never have to learn anything other than the one and only tool you know”. Everything’s turned into 500megs worth of nail because we’ve got an entire generation of Hammer Engineers who won’t even consider that it might be more efficient to pick up a screwdriver sometimes.

                                                    We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t

                                                    That’s the argument, but it’s not clear to me that we haven’t severely over-corrected at this point. I’ve watched teams spend weeks poking at the mile-high tower of leaky abstractions any react-native mobile app teeters atop, just to try to get the UI to do what they could have done in ten minutes if they’d bothered to learn the underlying platform API. At some point “make all the world a browser tab” became the goal in-and-of-itself, whether or not that was inefficient in every possible dimension (memory, CPU, power consumption, or developer time). It’s heretical to even question whether or not this is truly more developer-time-efficient anymore, in the majority of cases – the goal isn’t so much to be efficient with our time as it is to just avoid having to learn any new skills.

                                                    The industry didn’t feel this sclerotic and incurious twenty years ago.

                                                    1. 7

                                                      It’s heretical to even question whether or not this is truly more developer-time-efficient anymore

                                                      And even if we set that question aside and assume that it is, it’s still just shoving the costs onto others. Automakers could probably crank out new cars faster by giving up on fuel-efficiency and emissions optimizations, but should they? (Okay, left to their own devices they probably would, but thankfully we have regulations they have to meet.)

                                                      1. 1

                                                        left to their own devices they probably would, but thankfully we have regulations they have to meet.

                                                        Regulations. This is it.

I’ve long believed that this is very important in our industry. As earlier comments say, you can make a complex web app after work in a weekend. But then there are people, in the auto industry mentioned above, who take three sprints to set up a single screen with a table, a popup, and two forms. That’s after they’ve pulled in the internet’s worth of dependencies.

On the one hand, we don’t want to be gatekeeping. We want everyone to contribute. When dhh said we should stop celebrating incompetence, the majority of people around him called it gatekeeping. Yet when we see or say something like this - don’t build bloat, or something along those lines - everyone agrees.

I think the middle line should be somewhere in between. Let individuals do whatever the hell they want. But regulate “selling” stuff for money or advertisement eyeballs or anything similar. If an app is more than X MB (some reasonable target), it has to get certified before you can publish it. Or maybe only if a popular app is. Or, if a library is included in more than X apps, then that library either gets “certified”, or further apps using it are banned.

I am sure that is a huge, immensely big can of worms. There will be many problems there. But if we don’t start cleaning up shit, it’s going to pile up.

A simple example - if controversial - is Google. When they start punishing a web app for not rendering within 1 second, everybody on the internet (that wants to be on top of Google) starts optimizing for performance. So, it can be done. We just have to set up - and maintain - a system that deals with the problem… well, systematically.

                                                      2. 1

                                                        why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application

                                                        Yeah. One of the things that confuses me is why apps bundle a browser when platforms already come with browsers that can easily be embedded in apps. You can use Apple’s WKWebView class to embed a Safari-equivalent browser in an app that weighs in at under a megabyte. I know Windows has similar APIs, and I imagine Linux does too (modulo the combinatorial expansion of number-of-browsers times number-of-GUI-frameworks.)

                                                        I can only imagine that whoever built Electron felt that devs didn’t want to deal with having to make their code compatible with more than one browser engine, and that it was worth it to shove an entire copy of Chromium into the app to provide that convenience.

                                                        1. 1

                                                          Here’s an explanation from the Slack developer who moved Slack for Mac from WebKit to Electron. And on Windows, the only OS-provided browser engine until quite recently was either the IE engine or the abandoned EdgeHTML.

                                                      3. 10

                                                        On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                                                        The problem is that your dependencies can behave strangely, and you need to debug them.

                                                        Code bloat makes programs hard to debug. It costs programmer time.

                                                        1. 3

                                                          The problem is that your dependencies can behave strangely, and you need to debug them.

                                                          To make matters worse, developers don’t think carefully about which dependencies they’re bothering to include. For instance, if image loading is needed, many applications could get by with image read support for one format (e.g. with libpng). Too often I’ll see an application depend on something like ImageMagick which is complete overkill for that situation, and includes a ton of additional complex functionality that bloats the binary, introduces subtle bugs, and wasn’t even needed to begin with.
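To the overkill point: if all an application needs is, say, an image’s dimensions, that’s a couple dozen bytes of parsing rather than a full imaging stack. A minimal sketch in Python (decoding actual pixels would of course need more, e.g. zlib for PNG, but still not ImageMagick):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_dimensions(data):
    """Return (width, height) from raw PNG bytes.

    The spec requires the IHDR chunk to come first, so the dimensions
    live at fixed offsets right after the 8-byte signature.
    """
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    # Bytes 8-16 hold the IHDR chunk's length and type.
    length, chunk_type = struct.unpack(">I4s", data[8:16])
    if chunk_type != b"IHDR":
        raise ValueError("malformed PNG: IHDR not first")
    # Width and height are the first two big-endian u32s of IHDR.
    return struct.unpack(">II", data[16:24])
```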

                                                        2. 10

                                                          On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                                                          The problem is that computational resources vs. programmer time is just one axis along which this tradeoff is made: some others include security vs. programmer time, correctness vs. programmer time, and others I’m just not thinking of right now I’m sure. It sounds like a really pragmatic argument when you’re considering your costs because we have been so thoroughly conditioned into ignoring our externalities. I don’t believe the state of contemporary software would look like it does if the industry were really in the habit of pricing in the costs incurred by others in addition to their own, although of course it would take a radically different incentive landscape to make that happen. It wouldn’t look like a code golfer’s paradise, either, because optimizing for code size and efficiency at all costs is also not a holistic accounting! It would just look like a place with some fewer amount of data breaches, some fewer amount of corrupted saves, some fewer amount of Watt-hours turned into waste heat, and, yes, some fewer amount of features in the case where their value didn’t exceed their cost.

                                                          1. 7

                                                            We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t

But we aren’t. Because modern resource-wasteful software isn’t really released quicker. Quite the contrary: there is so much development overhead that we don’t see those exciting big releases anymore, with a dozen features everyone loves at first sight. They release new features in microscopic increments, so slowly that hardly any project survives 3-5 years without becoming obsolete or out of fashion.

What we are trading is quality for quantity. We lower the skill and knowledge barrier so much, to accommodate millions of developers who “learned how to program in one week”, that the results are predictably what this post talks about.

                                                            1. 6

                                                              I’m as much against bloat as everyone else (except those who make bloated software, of course—those clearly aren’t against it). However, it’s easy to forget that small software from past eras often couldn’t do much. The original Mac OS could be 64KB, but no one would want to use such a limited OS today!

                                                              1. 5

                                                                The original Mac OS could be 64KB, but no one would want to use such a limited OS today!

                                                                Seems some people (@neauoire) do want exactly that: https://merveilles.town/@neauoire/108419973390059006

                                                                1. 6

                                                                  I have yet to see modern software that is saving the programmer’s time.

                                                                  I’m here for it, I’ll be cheering when it happens.

                                                                  This whole thread reminds me of a little .txt file that came packaged into DawnOS.

                                                                  It read:

                                                                  Imagine that software development becomes so complex and expensive that no software is being written anymore, only apps designed in devtools. Imagine a computer, which requires 1 billion transistors to flicker the cursor on the screen. Imagine a world, where computers are driven by software written from 400 million lines of source code. Imagine a world, where the biggest 20 technology corporation totaling 2 million employees and 100 billion USD revenue groups up to introduce a new standard. And they are unable to write even a compiler within 15 years.

                                                                  “This is our current world.”

                                                                  1. 11

                                                                    I have yet to see modern software that is saving the programmer’s time.

                                                                    People love to hate Docker, but having had the “pleasure” of doing everything from full-blown install-the-whole-world-on-your-laptop dev environments to various VM applications that were supposed to “just work”… holy crap does Docker save time not only for me but for people I’m going to collaborate with.

                                                                    Meanwhile, programmers of 20+ years prior to your time are equally as horrified by how wasteful and disgusting all your favorite things are. This is a never-ending cycle where a lot of programmers conclude that the way things were around the time they first started (either programming, or tinkering with computers in general) was a golden age of wise programmers who respected the resources of their computers and used them efficiently, while the kids these days have no respect and will do things like use languages with garbage collectors (!) because they can’t be bothered to learn proper memory-management discipline like their elders.

                                                                    1. 4

                                                                      I’m of the generation that started programming at the tail end of ruby, and Objective-C, and I would definitely not call this the golden age, if anything, looking back at this period now it looks like mid-slump.

                                                                    2. 4

                                                                      I have yet to see modern software that is saving the programmer’s time.

                                                                      What’s “modern”? Because I would pick a different profession if I had to write code the way people did prior to maybe the late 90s (at minimum).

                                                                      Edit: You can pry my modern IDEs and toolchains from my cold, dead hands :-)

                                                                2. 6

                                                                  Node is an especially good villain here because JavaScript has long specifically encouraged lots of small dependencies and has little to no stdlib so you need a package for near everything.

                                                                  1. 5

                                                                    It’s kind of a turf war as well. A handful of early adopters created tiny libraries that should be single functions or part of a standard library. Since their notoriety depends on these libraries, they fight to keep them around. Some are even on the boards of the downstream projects and fight to keep their own library in the list of dependencies.

                                                                  2. 6

                                                                    We’re trading CPU time and memory, which are ridiculously abundant

                                                                    CPU time is essentially equivalent to energy, which I’d argue is not abundant, whether at the large scale of the global problem of sustainable energy production, or at the small scale of mobile device battery life.
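A back-of-the-envelope sketch of how this aggregates; the numbers are made up but plausible-order-of-magnitude:

```python
# Hypothetical numbers, purely illustrative.
extra_watts = 5          # extra CPU draw attributable to bloat, per device
seconds_per_day = 3600   # one hour of active use per day
devices = 10_000_000     # install base

joules_per_day = extra_watts * seconds_per_day * devices
kwh_per_year = joules_per_day * 365 / 3_600_000  # 3.6e6 joules per kWh
# ~18 million kWh/year across the install base, from one app's overhead
```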

                                                                    for programmer time, which isn’t.

In terms of programmer-hours available per year (which of course unit-reduces to active programmers), I’m pretty sure that resource is more abundant than it’s ever been at any point in history, and only getting more so.

                                                                    1. 2

                                                                      CPU time is essentially equivalent to energy

                                                                      When you divide it by the CPU’s efficiency, yes. But CPU efficiency has gone through the roof over time. You can get embedded devices with the performance of some fire-breathing tower PC of the 90s, that now run on watch batteries. And the focus of Apple’s whole line of CPUs over the past decade has been power efficiency.

                                                                      There are a lot of programmers, yes, but most of them aren’t the very high-skilled ones required for building highly optimal code. The skills for doing web dev are not the same as for C++ or Rust, especially if you also constrain yourself to not reaching for big pre-existing libraries like Boost, or whatever towering pile of crates a Rust dev might use.

                                                                      (I’m an architect for a mobile database engine, and my team has always found it very difficult to find good developers to hire. It’s nothing like web dev, and even mobile app developers are mostly skilled more at putting together GUIs and calling REST APIs than they are at building lower-level model-layer abstractions.)

                                                                    2. 2

Hey, I don’t mean to be a smart ass here, but I find it ironic that you start your comment blaming the “high-level languages with package systems” and immediately admit that you blindly picked a library for the job and that you could solve the problem just by “taking some time finding alternative packages with fewer dependencies”. That does not sound like a problem with either the language or the package manager, honestly.

                                                                      What would you expect the package manager to do here?

                                                                      1. 8

I think the problem actually lies with the language in this case. Javascript has such a piss-poor standard library and dangerous semantics (that the standard library doesn’t try to remedy, either) that sooner rather than later you will have a transitive dependency on isOdd, isEven, and isNull, because even those simple operations aren’t exactly simple in JS.

Despite being made to live in a web browser, the JS standard library has very few affordances for working with things like URLs, and despite being targeted toward user interfaces, it has very few affordances for working with dates, numbers, lists, or localisations. This makes dependency graphs both deep and filled with duplicated effort, since two dependencies in your program may depend on different third-party implementations of what should already be in the standard library, themselves duplicating what you already have in your operating system.

                                                                        1. 2

It’s really difficult for me to counter an argument that is basically “I don’t like JS”. The question was never about that language; it was about “high-level languages with package systems”, but your answer hyper-focuses on JS and does not address languages like Python, for example, which is a “high-level language with a package system” that also has an “is-odd” package (which honestly I don’t get what that has to do with anything).

                                                                          1. 1

                                                                            The response you were replying to was very much about JS:

                                                                            In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs.

                                                                            For what it’s worth, whilst Python may have an isOdd package, how often do you end up inadvertently importing it in Python as opposed to “batteries-definitely-not-included” Javascript? Fewer batteries included means more imports by default, which themselves depend on other imports, and a few steps down, you will find leftPad.

                                                                            As for isOdd, npmjs.com lists 25 versions thereof, and probably as many isEven.

                                                                            1. 1

                                                                              and a few steps down, you will find leftPad

                                                                              What? What kind of data do you have to back up a statement like this?

You don’t like JS, I get it, I don’t like it either. But the unfair criticism is what really rubs me the wrong way. We are technical people; we are supposed to make decisions based on data. But this kind of comment, which just generates division without the slightest resemblance of a solid argument, does no good to a healthy discussion.

Again, none of these arguments are true for JS exclusively. Python is batteries included, sure, but it’s one of the few. And you conveniently leave out of your quote the part where OP admits that with a little effort the “problem” became a non-issue. And that little effort is what we get paid for; that’s our job.

                                                                        2. 3

                                                                          I’m not blaming package managers. Code reuse is a good idea, and it’s nice to have such a wealth of libraries available.

                                                                          But it’s a double edged sword. Especially when you use a highly dynamic language like JS that doesn’t support dead-code stripping or build-time inlining, so you end up having to copy an entire library instead of just the bits you’re using.

                                                                        3. 1

                                                                          On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                                                                          We’re trading CPU and memory for the time of some programmers, but we’re also adding the time of other programmers onto the other side of the balance.

                                                                          1. 1

                                                                            I definitely agree with your bolded point - I think that’s the main driver for this kind of thing.

                                                                            Things change if there’s a reason for them to be changed. The incentives don’t really line up currently to the point where it’s worth it for programmers/companies to devote the time to optimize things that far.

                                                                            That is changing a bit already, though. For example, performance and bundle size are getting seriously considered for web dev these days. Part of the reason for that is that Google penalizes slow sites in their rankings - a very direct incentive to make things faster and more optimized!

                                                                          1. 16

I think the reason it’s primarily “grumpy old developers” (and I count myself amongst that crowd) complaining about software bloat is that we were there 20 years ago, so we have the benefit of perspective. We know what was possible with the limited hardware available at the time, and it doesn’t put today’s software in a very flattering light.

The other day I was editing a document in Pages and it made my MacBook Pro slow down to a crawl. To be fair my machine isn’t exactly new, but as far as I can tell Pages isn’t doing anything that MS Word 2000 wasn’t doing 20 years ago without straining my 200MHz Pentium. Sure, Pages renders documents in HD, but does that really require 30 times the processing power?

                                                                            1. 14

                                                                              This might be selective memory of the good old days. I was in high school when Office 97 came out, and I vaguely remember one of my classmates complaining about it being sluggish.

                                                                              1. 7

                                                                                I think there’s A LOT of this going around. I used Office 97 in high school and it was dog shit slow (tick tick tick goes the hard disk)! Yes, the school could have sprung for $2,500 desktops instead of $1,500 desktops (or whatever things cost back then) but, adjusted for inflation, a high-end laptop today costs what a low-end laptop cost in 1995. So we’re also comparing prevailing hardware.

                                                                                1. 2

                                                                                  Should’ve gone for the Pentium II with MMX

                                                                                2. 10

                                                                                  Word processing programs were among the pioneers of the “screenshot your state and paint it on re-opening” trick to hide how slow they actually were at reaching the point where the user could interact with the app. I can’t remember a time when they were treated as examples of good computing-resource citizens, and my memory stretches back a good way — I was using various office-y tools on school computers in the late 90s, for example.

                                                                                  Modern apps also really are generally doing more; it’s not like they stood still, feature-wise, for two decades. Lots of things have “AI” running in the background to offer suggestions and autocompletions and offer to convert to specific document templates based on what they detect you writing; they have cloud backup and live collaborative editing; they have all sorts of features that, yes, consume resources. And that some significant number of people rely on, so cutting them out and going back to something which only has the feature set of Word 97 isn’t really an option.

                                                                                  1. 5

                                                                                    When a friend of mine showed me Youtube, before the Google acquisition, on the high-school library computers, I told him “Nobody will ever use this, it uses Macromedia Flash in the browser and Flash in the browser is incredibly slow and nobody will be able to run it. Why don’t we just let users download the videos from an FTP server?” I ate those words hard. “Grumpy old developers” complain about software bloat because they’re always looking at software from the inside, never the outside. When thinking about Youtube, I too was looking at it from the inside. But fundamentally people use software not for the sake of the software itself but for the sake of deriving value from it.

                                                                                    In other words, “domain expert is horrified at the state of their own domain. News at 11.”

                                                                                  1. 20

                                                                                    …since I follow rather obscure artists.

                                                                                    I really appreciate the FAQ at the beginning, particularly the acknowledgement above. It encodes a kind of empathy: the author realizes that their technology decision is contextual, which helps me reciprocate and remember that my own decisions are also contextual.

                                                                                    1. 19

                                                                                      There’s a surprising number of languages without case sensitivity. SQL is probably the most used, also Ada, Fortran, Pascal. I write my SQL without capitalizing the keywords and it freaks people out but it sure looks better to me.

                                                                                      1. 6

                                                                                        I write my SQL without capitalizing the keywords and it freaks people out but it sure looks better to me.

                                                                                        I generally do the same (although if I’m editing an existing script / query then I take a “when in Rome” approach). The reason, though, is that I can never figure out what EXACTLY is supposed to be capitalized! Some people capitalize only statements, others capitalize operators, etc. Since it’s not case sensitive, “capitalize nothing” is just less cognitive load.

                                                                                        1. 6

                                                                                          In SQL specifically I prefer to have (only) the keywords capitalized because it makes the names stand out better.

                                                                                        1. 1

                                                                                          Wouldn’t a VPN pretty trivially block this, though?

                                                                                          1. 9

                                                                                            The problem is you’re paying for a service, and then to use that service safely you have to pay, and trust, yet another service.

                                                                                            1. 5

                                                                                              Oh sure, I didn’t mean to imply that it wasn’t problematic. I’m just surprised that Vodafone is (apparently) investing a bunch of money in something that Apple (and possibly Google) can easily circumvent for their users. In fact, I wonder if Private Relay would already mitigate this for iOS users.

                                                                                              1. 2

                                                                                                Ah, fair enough :D

                                                                                                I’m honestly not even sure that 100% TLS wouldn’t be sufficient - all the obvious implementations trivially fail with TLS, the slightly less obvious implementations would generally fail for any packets that have to travel across network boundaries. I would assume that to be willing to take the potential publicity hit, they’d have to be sure that they can make a profit so presume that they can defeat anything the clients can do?

                                                                                                1. 2

                                                                                                  I’m honestly not even sure that 100% TLS wouldn’t be sufficient - all the obvious implementations trivially fail with TLS, the slightly less obvious implementations would generally fail for any packets that have to travel across network boundaries.

                                                                                                  Yeah presumably they’re not counting on injecting ads into pages, given that something like 90% of traffic is now encrypted.

                                                                                                  I’d have to guess that this works something like:

                                                                                                  1. User visits site.
                                                                                                  2. Site serves ad-network code.
                                                                                                  3. Ad-network observes that user’s IP is a Vodafone IP and passes that IP on to a Vodafone API to get Vodafone’s profile info – built out of data fed into it via calls like this, and Vodafone’s observation of DNS and whatever other unencrypted data they can observe, buy, infer, etc (or maybe they just give out their Vodafone super-ID and let the ad networks worry about maintaining the profile for the ID). Vodafone makes money off of these calls.
                                                                                                  4. That profile info feeds into the ad bidding.

                                                                                                  Ad blockers still defeat this (unless the ad network stuff starts happening server-side instead of in-browser), but this would let them turn their complete knowledge of the user-IP mapping into an unblockable super tracking cookie that never resets.
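
To make the guessed flow concrete, here’s a minimal sketch of what the server-side lookup in step 3 might look like. Everything here — the function name, the IP ranges, the profile fields — is invented for illustration; nothing is taken from any real carrier API.

```python
# Hypothetical sketch of step 3: the ad network maps the client's
# carrier-assigned IP to a carrier-held profile. All names, ranges,
# and fields are assumptions made up for this example.

CARRIER_RANGES = {
    "203.0.113.": "vodafone",  # TEST-NET-3 prefix standing in for a carrier block
}

PROFILE_DB = {
    ("vodafone", "203.0.113.7"): {"super_id": "vf-00042", "segments": ["sports"]},
}

def lookup_profile(client_ip):
    """Return the carrier profile for a client IP, or None when the IP
    falls outside any known carrier range (e.g. a VPN or relay exit node)."""
    for prefix, carrier in CARRIER_RANGES.items():
        if client_ip.startswith(prefix):
            return PROFILE_DB.get((carrier, client_ip))
    return None
```

Under this model a VPN or Private Relay breaks the chain exactly as discussed above: the ad network only ever sees the exit node’s IP, which maps to no carrier range, so `lookup_profile` returns nothing.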

                                                                                                  1. 1

                                                                                                    You presume the people pushing for this stuff to be used are the same as those who understand and implement it.

                                                                                                  2. 2

                                                                                                    In fact, I wonder if Private Relay would already mitigate this for iOS users.

                                                                                                    Private Relay isn’t used by ads requested in-app.

                                                                                                    For Safari traffic, reading Apple’s literature on iCloud Private Relay suggests that Vodafone could block private relay by preventing DNS resolution for mask.icloud.com and mask-h2.icloud.com or collect the super-cookie by using an (otherwise) unroutable IP address (it would be considered either Cellular services or Local network service; see the section on Coverage and Compatibility). I think if you have a VPN installed on your handset, then some TrustPid code will probably be able to associate your VPN session with the identifier with help from an app, and I would suspect Vodafone would know how to do this.

                                                                                                    If you route your traffic through a VPN on your laptop and tether your laptop through your Vodafone mobile, you still might not be safe: This might be substantially harder for Vodafone because the only way to apply the TrustPid is now through correlation instead of a super-cookie. Advertisers are uncomfortable with this stuff in my experience, so I suspect if Vodafone is doing this, they probably won’t launch with it.

                                                                                                  3. 3

                                                                                                    The problem is you’re paying for a service, and then to use that service safely you have to pay, and trust, yet another service.

                                                                                                    Boy is that not a new problem.

                                                                                                    /me eyes his private school and private hospital bills

                                                                                                1. 7

                                                                                                  I’ve been watching the Python community build out pypi of late. I sincerely don’t know how you make a community both welcoming to new contributors and first time module authors and yet safe from this kind of attack.

                                                                                                  I’m not sure it’s a solvable problem.

                                                                                                  1. 4

                                                                                                    Idk, I feel like Linux distros have been doing a pretty good job for decades. It seems it’s far harder to compromise GPG keys than it is to compromise a GH/PyPi/etc. login. The real problem is there’s not a low-barrier to entry way to getting mass adoption of a package system using GPG because the ergonomics are awful.

                                                                                                    1. 9

                                                                                                      They have! But they’ve done so with a tremendous trade-off in terms of time to release. If that works for your use case, fantastic! Rock on with your bad self! But there are other use cases where getting the very latest code really IS important.

                                                                                                      The distro model also relies on the rarefied fairy dust that is the spare time, blood, sweat, and tears of distro / package maintainers, and thus doesn’t scale well at all.

                                                                                                      1. 5

                                                                                                        I think a big part of that time trade-off comes from the fact that distro maintainers do a lot more than build and publish packages: they test that everything builds together, doesn’t break distro functions, etc. IMO the real issue with weakly secured package repositories is that it’s a big burden to get package developers to just sign their packages. The ideal package repository for me does the following:

                                                                                                        • packages must be cryptographically signed by one of the authors
                                                                                                        • signatures are validated by package managers at download/install time
                                                                                                        • new versions of an existing package must be signed by a key in the same signature chain(s) as the last published version except in the following scenarios
                                                                                                          • explicit handoff of ownership via a token signed by the previous key that contains the signature of the root of the new chain, subsequent packages can be signed by either key unless the token includes a revoke of signature rights flag that prevents the previous key from being used
                                                                                                          • to support lost keys, the repository administrators can sign the same type of token mentioned above after a verification step (such as verifying ownership over the email attached to the GPG key, signed tag on relevant git repo, etc.)
                                                                                                        • packages are namespaced with repo username or group by default. This supports forks and forces an acknowledgement of the owner(s) of a package onto the user. Most git hosts work this way anyways

                                                                                                        The only real barrier to doing something like this is adoption due to overhead of creating and maintaining signing keys on the publisher’s end. Part of the reason npm/pypi/etc. are so ubiquitous is there’s basically zero barrier to entry, which is not what I want my software to rely on.
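
The “same signature chain” rule above can be sketched as a toy repository that tracks which keys may sign each package. This is only an illustration of the policy, not a real implementation: keys are plain strings and the handoff token’s fields (`signed_by`, `grants`, `revoke_previous`) are names I made up; a real system would verify actual cryptographic signatures (e.g. via GPG or minisign).

```python
# Toy model of the signature-chain policy described in the list above.
# Keys and tokens are simulated with plain values for illustration only.

class PackageRepo:
    def __init__(self):
        self.trusted = {}  # package name -> set of keys allowed to sign it

    def publish(self, name, signing_key, handoff_token=None):
        """Accept an upload only if it extends the package's signature chain."""
        keys = self.trusted.get(name)
        if keys is None:
            # First upload: the uploader's key becomes the root of the chain.
            self.trusted[name] = {signing_key}
            return True
        if signing_key in keys:
            return True
        # A new key is only accepted with a handoff token signed by an
        # already-trusted key that explicitly grants the new key.
        if (handoff_token
                and handoff_token["signed_by"] in keys
                and handoff_token["grants"] == signing_key):
            if handoff_token.get("revoke_previous"):
                keys.discard(handoff_token["signed_by"])
            keys.add(signing_key)
            return True
        return False  # reject: this upload breaks the signature chain
```

An account takeover alone is useless against this policy: without a previously trusted key (or a signed handoff), the attacker’s upload is rejected.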

                                                                                                        1. 8

                                                                                                          Now factor several other variables into your ideal:

                                                                                                          • Most packaging systems are built by volunteer help on volunteer time
                                                                                                          • They need to operate at crazy bananas pants scale. Pypi had 400K packages at last count I saw.
                                                                                                          • People have legitimate needs for development purposes of being able to get the VERY latest code when they want/need it.

                                                                                                          I think all of what you’re saying here is spot on, I just don’t know how you actually make it real given the above. You’re comparing to the Linux distro model, where the entire universe of packages is in the 30-60K range according to the web page I just saw.

                                                                                                          1. 2

                                                                                                            Most packaging systems are built by volunteer help on volunteer time

                                                                                                            True, but there are more complex, ambitious projects (like Matrix) that are also built by volunteers. Hell, you could probably build a sustainable business model by selling access to such a repository in a b2b fashion.

                                                                                                            They need to operate at crazy bananas pants scale. Pypi had 400K packages at last count I saw

                                                                                                            I mean, yeah? It’s still read-heavy, which is easier to scale out than write-heavy systems.

                                                                                                            People have legitimate needs for development purposes of being able to get the VERY latest code when they want/need it

                                                                                                            This requirement isn’t really mutually exclusive with my ideas above. If you’re saying you need to operate on the latest unpublished code, you should just clone master of the code itself and go from there. I’m not saying you have a group of volunteers (or employees) comb through published packages and sign them themselves, I’m saying you force signatures of any package uploaded to the repo from the person who wrote the code and is publishing it. The obvious problem with that being adoption because who wants to go through the bs process of setting up GPG/PGP keys, it’s a pain.

                                                                                                            1. 6

                                                                                                              I hardly think it’s fair to say Matrix is developed by volunteers…

                                                                                                              1. 1

                                                                                                                Who is it developed by then?

                                                                                                                1. 3

                                                                                                                  New Vector Limited

                                                                                                      2. 8

                                                                                                        …GPG because the ergonomics are awful.

                                                                                                        I’ve had probably a dozen keys over the years, many of which were created improperly (e.g. no expiration) because I was literally just doing it to satisfy some system that demanded a key.

                                                                                                        So, on top of the bad ergonomics around GPG in general, you also have the laziness / apathy / resentment of developers who didn’t actually want to create a key and view it as an annoyance to contend with. Like, how long do we think it would take before people started committing their private keys to avoid losing them or having to deal with weird signature chains to grant access to collaborators?

                                                                                                        1. 3

                                                                                                          PyPI already supports PGP-signing your packages, and has supported this for many years. Which should be a big hint as to its effectiveness.

                                                                                                          1. 1

                                                                                                            Not just supporting PGP/GPG-signatures, enforcing signatures. And yeah, that ecosystem sucks.

                                                                                                            1. 5

                                                                                                              Tell me how you’d usefully enforce in an anyone-can-publish package repository like PyPI. Remember that distros only manage it because they have a very small team of trusted package publishers who act as the gatekeepers to the whole thing, and so there’s only a small number of keys and identities to worry about.

                                                                                                              In an anyone-can-publish package repository it’s simply not feasible to try to verify keys for every package publisher, especially since packages can have multiple people with publish permissions and the membership of that group can change over time. All you’d be able to say is “signed with a key that was listed as one of the approved keys for this package”, which then gets you back to square one because an account takeover would let you alter the list of approved keys (and requiring that changes to approved keys be signed by prior approved keys also doesn’t work because at the scale of PyPI the number of lost/expired/etc. keys that will need to do a recovery workflow would be enough to still allow the basic attack vector that worked here — take over an expired domain and do a recovery workflow).

                                                                                                              1. 1

                                                                                                                packages can have multiple people with publish permissions and the membership of that group can change over time

                                                                                                                Yes, I didn’t go into detail because it’s a lobsters comment, not a white paper, but the idea is that only a revoke/removal of a key from the approved keylist of a package can be done without a signed grant from a previously supplied key. What this means is the first person to upload a version of a package will sign it, then that key will have to be used to add any additionally allowed keys via a signed token grant. Allowed keys are explicitly not tied directly to group membership (except maybe an auto-revoke being triggered by a member being removed from a group), or really accounts at all.

                                                                                                                Handling the recovery workflow is the hardest part to get right. In the case of an expired key, supplying a payload from the email attached to the key and account (should probably also enforce key emails and account emails match) signed with the expired key is significantly better than simply sending a magic link with a temporary URL. For lost keys, I can’t think of a way to support recovery safely without basically just making a new “package lineage” that has a new namespaced account or something.

                                                                                                                Either way, the accounts would still only be as secure as the security practices of the users on the publishing end, so there’s only so much you can do.
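
The expired-key recovery idea above — require a payload signed with the old key rather than just a magic link — can be sketched like this. Signature checking is simulated (the `sign` helper just tags the payload); the function and field names are invented for illustration:

```python
# Sketch of the expired-key recovery rule: a rotation request must be
# signed with the registered (expired) key AND come from the registered
# email. "Signing" here is simulated tagging, purely for illustration.

def sign(payload, key):
    return {"payload": payload, "signed_with": key}

def allow_recovery(request, registered_key, registered_email):
    """Permit key rotation only when both checks pass; a bare magic-link
    flow would skip the signature check and be weaker."""
    return (request["signed_with"] == registered_key
            and request["payload"]["from_email"] == registered_email)
```

Note this only covers *expired* keys, where the owner can still sign; it does nothing for genuinely lost keys, which is exactly the gap described above.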

                                                                                                        2. 2

                                                                                                          I don’t understand why we stick to flat namespaces, or rather, it implies separate authentication. What’s wrong with the Go way of doing things? Why can’t we go directly to GitHub (and friends) for our dependencies, instead of having pypi / npm / cargo inbetween?

                                                                                                          1. 3

                                                                                                            I guess the only problem that solves is typosquatting? Because maintainer account compromise and repojacking will still get you malicious code.

                                                                                                            1. 1

                                                                                                              This topic brings out strong opinions on all fronts :) See @ngp’s eloquent statement of the exact and total opposite opinion that we should have MORE in between, not less.

                                                                                                            2. 2

                                                                                                              An open community is not defined by a single central register of packages where all dependencies are pulled from by default by just adding some sort of identifier in your project.

                                                                                                              It is a solvable problem and it has been solved. We just broke it relatively recently with this horrible idea of pulling a tree of dependencies with hundreds of nodes, whenever we want to left pad a string representation of an integer.

                                                                                                              The solution is: don’t import arbitrary dependencies dozens at a time just because there is a simple way to do it. It was never a good idea. Not that package managers are a bad idea per se. It’s the way they’re [ab]used. The means to do it can perfectly well be there, just use them reasonably.

                                                                                                              Pearl’s CPAN was probably the first instance of these central package repositories. But it always posed itself as a convenience with no authoritative instance. Multiple mirrors existed with different sets of packages available. It was just always an easier way to download code, not a hijacker of a programming language’s import routine.

                                                                                                              1. 1

                                                                                                                Guessing you mean “perl” but point taken.

                                                                                                            1. 4

                                                                                                              I don’t quite understand the problem that’s being solved here. I already have seamless access to Gitlab (or whichever Git remote I choose to use) from any IDE or the command line. What value is there in working in a browser, using a web app hosted alongside my code repository? Honest question.

                                                                                                              1. 1

                                                                                                                You have your tools set up already.

                                                                                                                Others do not.

                                                                                                                1. 3

                                                                                                                  But who are these people who write code but don’t have their tools set up? Is that a large group of people?

                                                                                                                  1. 3

                                                                                                                    Every beginner, and most programmers away from their home setups.

                                                                                                                    1. 2

                                                                                                                      I haven’t played with the gitlab version, but gitpod is pretty nice for getting something done quickly when I’m on the road and not carrying my real dev system.

                                                                                                                      It’s also absolutely brilliant for getting people started on open source projects. You can include a link in your README, like this wagtail demo site does, that lets people pop open a remote VS Code instance with your project checked out and a container set up to build/run it.

                                                                                                                      I think kicking the tires on new things and contributing a quick fix to someone else’s project that I don’t regularly contribute to have been the best uses I’ve seen for something like this.

                                                                                                                      1. 1

                                                                                                                        I’m not familiar with the GitLab version here but the GitHub version is a bit more than you’re suggesting. It’s a combination of a web UI and a base container image that has all of the tools set up. Each user can then add additional container layers to tailor the environment to their taste. When you want to onboard a new developer to the project, there’s a one-click thing to give them a complete working build environment.

                                                                                                                  1. 3

                                                                                                                    I like the changes, but this feels like the end of an era. IDEs have mostly looked and worked the same since I used VB 5.0 back in the 90s, so it’s a little bittersweet!

                                                                                                                    1. 1

                                                                                                                      I suppose, if there is one core influence on me, it’s Envelop Basic: I never understood the language very well when I was writing it, but I definitely picked up an attitude of “however I can get something to work” from abusing UI controls to make a video game, even though that was not their intended purpose.

                                                                                                                      Which means that I don’t have a lot of misgivings about doing non-standard things, if it feels called for by the constraints of the project I’m working on. This attitude has led me to do things like adding string-join to SQL Server via .NET code, or writing a project in Ruby with Sequel and Sinatra instead of Rails, or building a .NET server/client on web sockets, where the server was intended to make the client easier to test/validate against another service.

                                                                                                                      It also means that I read widely, so as to add to my bag of tricks that I can reference when I hit a tricky situation.

                                                                                                                      The counter-balance to this is that because I like reading code, I also want to write code that other people like to read, which has also been a long-running influence in how I write code, and it helps me try to avoid treating any particular bit of code as Sacred or Precious, and means that I don’t do non-standard things in shared projects without a reason.

                                                                                                                      1. 3

                                                                                                                        … I don’t do non-standard things in shared projects without a reason.

                                                                                                                        This is the key, for me. Do whatever you want in side projects or toys, or even “real” code that only you will ever need to maintain. But if there’s a good chance other people will need to be involved, make “boring” choices. This was a hard lesson for me to learn, personally, so I understand why some people recoil a bit, but it’s important.

                                                                                                                        1. 2

                                                                                                                          I will say, I do think it’s useful to be able to go off the beaten path when it’s called for. It can be a bit of strong leverage, or help you sort out messes that other folks have gotten themselves into.

                                                                                                                          1. 2

                                                                                                                            I think it’s useful to have confidence that you could, if you had to, go and do something complicated and weird. The confidence permits you to go ahead with a straightforward implementation without any hedging in case it’s not “fast enough”. That often runs plenty fast on the first try anyway.

                                                                                                                            1. 2

I agree. Granted, to do that, you have to try doing complicated/weird things. I’ve done both as part of work and as part of hobby stuff. For work, I try to only do it where there seems to be much to gain by doing so.

                                                                                                                              For hobby stuff, I work almost exclusively in niche tech, mostly to keep stuff interesting.

                                                                                                                              I saw a suggestion that developers should be given some room to play outside of production, because if you don’t, they’ll play inside production. And, for me, I took that as a mandate to try strange/different things on my own time, and to generally not do them in prod without a good reason.

And, now that I’ve been doing that for many years, a lot of stuff I’d have had questions about some years ago seems relatively standard. When you’ve written in a non-standard programming paradigm (stack-based was this for me), more standard paradigms seem tame and easy to follow by comparison.

                                                                                                                      1. 35

                                                                                                                        Public offices using open standards should be the norm.

                                                                                                                        It is sad that this isn’t the case, and thus still makes the news.

                                                                                                                        1. 8

                                                                                                                          Public offices using open standards should be the norm.

Exactly. Ever since I can remember, I couldn’t comprehend why governments, even the army, so willingly use social media, plaster social-media logos on their websites, and so on. It’s such an obviously bad idea. I mean, why would one willingly make oneself dependent on massive for-profit corporations with a history of scandals every fortnight?

                                                                                                                          1. 27

                                                                                                                            …so willingly use social media…

                                                                                                                            Because that’s where the people are. If your goal is to reach, or be available to, as many people as possible, then using social media sites is necessary (though, I would also argue, insufficient). That being said, I agree that governments shouldn’t allow their data to become trapped in walled gardens and the like, hence the “insufficient” bit.

                                                                                                                            Edit: As an example, the county I live in posts notices and such on Instagram. They also post the information on their web site, but honestly, I only see them on Instagram. I don’t want them to stop doing that just because Instagram is problematic in various ways. It still exposes me to interesting info that I wouldn’t otherwise go out of my way to find.

                                                                                                                        1. 9

                                                                                                                          This is fantastic, to me, because probably half the value I get out of static typing is auto-complete and related niceties! I love a compiler to catch bugs for me, but I love my IDE to catch the bugs (and help me not create them in the first place) before I ever even run the compiler even more!

                                                                                                                          1. 2

                                                                                                                            Vimwiki supports a similar, though less rich, syntax. That’s the only TODO list I’ve ever found to work for me (I only wish it worked on my phone, and no, I’m not going to try to use Vim on my phone). The part that makes it “work”, I think, is that my notes are right inline with my TODO items.

                                                                                                                            1. 1

Is that the taskwiki add-on for vimwiki, or something native to it?

                                                                                                                              1. 1

                                                                                                                                It’s built-in. You just do * [ ] Do some stuff to create a TODO item, and then you can toggle it with ctrl-space.
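For illustration, a vimwiki file with built-in TODO items might look like the sketch below (the item text is made up; the bracket states are vimwiki’s defaults, where toggling a child item updates the parent’s completion marker through [ ], [.], [o], [O], and [X]):

```
* [.] Plan the release
    * [X] Collect merged changes
    * [ ] Draft the summary
* [ ] Update the website
* [X] Tag the build
```

Notes can sit as ordinary wiki text right next to the items, which is the “inline” quality mentioned above.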

                                                                                                                                1. 1

Ah, thanks! Taskwiki (if you don’t know) extends vimwiki to store the todo items in “task warrior” aka /usr/bin/task. It uses the same shorthand, and I wasn’t really sure where vimwiki stopped and taskwiki started.