1. 6

    The writer seems to think that “developers” are the same thing as so-called “web developers”. As a developer myself, I am quite content with Safari and don’t see any overarching need to add features to an already over-bloated web. Safari is enough for me as long as it works with GitHub and Stack Overflow; functionality which has no place in a web browser is better left to programs suited to handle it.

    1.  

      Yeah, Apple telling developers “make a native app” is, in our Electron based world, “wtf based”.

      It’s not like Safari is at IE6-in-a-Firefox-2-era levels of bad, either. Oh no, I can’t use WebUSB, what will I ever do?

    1. 17

      This article has everything: databases, RAD, different visions of computing as a human field of endeavor, criticism of capitalism. I commend it to everyone.

      1. 13

        Criticism of capitalism is, in theory, my main thesis, but I find it difficult to convey in a way that doesn’t get me a lot of angry emails with valid complaints, because the issue is complex and I can’t fully articulate it in a few paragraphs. But it is perhaps my core theory that capitalism has fundamentally diminished the potential of computing, and I hope to express that more in the future.

        1.  

          But it is perhaps my core theory that capitalism has fundamentally diminished the potential of computing, and I hope to express that more in the future

          I am on a team that is making a documentary about the history of personal computing. One of the themes that has come out is how the kind of computing that went out to consumers in the early 80s on was fundamentally influenced by wider socioeconomic shifts that took place beginning in the 70s (what some call a “neoliberal turn”). These shifts included, but were not limited to, the elevation of shareholder primacy and therefore increased concentration on quarterly reports and short-termism.

          These properties were antithetical to those that led to what we would say were the disproportionate advances in computing (and proto-personal computing) from the 50s, 60s, and 70s. Up until the 80s, the most influential developments in computing research relied on long-term, low-interference funding – starting with ARPA and ultimately ending with orgs like PARC and Bell Labs. The structures of government and business today, and for the past few decades, are the opposite of this and therefore constitutionally incapable of leading to huge new paradigms.

          One more note about my interviews. The other related theme that has come out is that what today we call “end user programming” systems were actually the goal of a good chunk of that research community. Alan Kay in particular has said that his group wanted to make sure that personal computing “didn’t become like television” (i.e., passive consumption). There were hints of the other route personal computing could have gone throughout the 80s and 90s, some of which are discussed in the article. I’d add things like HyperCard and AppleScript into the mix. Both were allowed to more or less die on the vine, and the reasons why seem obvious to me.

          1.  

            These properties were antithetical to those that led to what we would say were the disproportionate advances in computing (and proto-personal computing) from the 50s, 60s, and 70s. Up until the 80s, the most influential developments in computing research relied on long-term, low-interference funding – starting with ARPA and ultimately ending with orgs like PARC and Bell Labs. The structures of government and business today, and for the past few decades, are the opposite of this and therefore constitutionally incapable of leading to huge new paradigms.

            This is something I’ve been thinking about a while - most companies are incapable of R&D nowadays; venture capital funded startups have taken a lot of that role. But they can only R&D what they can launch rapidly and likely turn into a success story quickly (where success is a monopoly or liquidity event).

          2. 3

            As with so many things. But I think mass computing + networking and our profession have been instrumental in perfecting capitalism.

            Given the values that were already dominating society, I think this was inevitable. This follows from my view that the way out is a society that lives by different values. I think that this links up with our regularly scheduled fights over open source licenses and commercial exploitation, because at least for some people these fights are at core about how to live and practice our craft in the world as it is, while living according to values and systems that are different from our surrounding capitalist system. In other words, how do we live without being exploited employees or exploiting others, and make the world a marginally better place?

            1. 2

              Complaints about the use of the word, and maybe calling you a socialist, or something?

              I wouldn’t do that to you, but I do have to “mental-autocorrect” capitalism into “We may agree software developers need salary and some SaaS stuff is useful, but social-media-attention-rent-seekers gain power, which sucks, so he means that kind of ‘capitalism’”.

              There should be a word for it, the way “cronyism” is the right word for what some call capitalism, or at least a modifier, like “surveillance capitalism”.

              1.  

                But I am a socialist. The problem is this: the sincere conviction that capitalist organization of global economies has diminished human potential requires that I make particularly strong and persuasive arguments to that end. It’s not at all easy, and the inherent complexity of economics (and thus of any detailed proposal to change the economic system) is such that it’s very difficult to express these ideas in a way that won’t lead to criticism around points not addressed, or of historic socialist regimes. So is it possible to make these arguments about the history of technology without first presenting a thorough summary of the works of Varoufakis and Wolff or something? I don’t know! That’s why I write a blog about computers and not campaign speeches or something. Maybe I’m just too burned out on the comments I get on the orange website.

                1.  

                  Sure, I appreciate that, though it would maybe attract bad actors less if there was some thread of synopsis that you could pull on instead of “capitalism”.

                  I think the problem is broad terms, because they present a large attack surface, though I do realize people will also attack outside the given area.

                  I’m also saddened by a lot of what’s going on in ICT, but I wouldn’t attribute it blindly to “capitalism”; then again, I don’t have all the vocabulary and summaries, if you will, to defend that position.

                  In any case, one person’s capitalism is different from another’s, so the definitions must be laid out. Maybe all of Varoufakis isn’t needed every time?

                  Nor am I convinced we’ll reach anything better with socialist or other governmental interventions. An occasional good law may be passed, or money handouts that lead to goodness, but each of those will lose in the balance to detrimental handouts/malfeasance, corruption, unintended consequences, and bad laws.

                  Maybe some kind of libertarian-leaning world where people have a healthy dose of socialist values but are enlightened enough to practice them voluntarily?

              2. 1

                I would love to see the hoops you jump through to express that. Honestly. It seems so alien to my worldview that anyone making that claim (beyond the silly mindless chants of children which I’m assuming is not the case here) would be worth reading.

                1. 7

                  I’ve made a related argument before, which I think is still reasonably strong, and I’ll repeat the core of here.

                  In my experience, software tends to become lower quality the more things you add to it. With extremely careful design, you can add a new thing without making it worse (‘orthogonal features’), but it’s rare that it pans out that way.

                  The profit motive drives substantial design flaws via two key mechanisms.

                  “Preventing someone from benefiting without paying for it” (usually means DRM or keeping the interesting bits behind a network RPC), and “Preventing someone from churning to another provider” (usually means keeping your data in an undocumented or even obfuscated format, in the event it’s accessible at all).

                  DRM is an example of “adding a new thing and lowering quality”. It usually introduces at least one bug (remember the Sony rootkit fiasco?).

                  Network-RPC means that when your network connection is unreliable, your software is also unreliable. Although I currently have a reliable connection, I use software that doesn’t rely on it wherever feasible.

                  Anti-churn (deliberately restricting how users use their own data) is why e.g. you can’t back up your data from google photos. There used to be an API, but they got rid of it after people started using it.

                  I’m not attempting to make the argument that a particular other system would be better. However, every human system has tradeoffs - the idea that capitalism has no negative ones seems ridiculous on the face of it.

                  1.  

                    Those are shitty aspects of a lot of things, and those aspects are usually driven by profits, although not always in the way people think. I’ll bet dollars to donuts that Google removes export features simply because they don’t want to have to support them. Google wants nothing less than to talk to customers.

                    But without the profit motive in the first place, none of these things would exist at all. The alternatives we’ve thought up and tried so far don’t lead to a world without DRM; they lead to a world where media is split into the kind the state approves and nobody wants to copy, and the kind whose possession gets you a firing squad, whether you paid or not.

                    1. 5

                      But without the profit motive in the first place, none of these things would exist at all.

                      It’s nonsensical to imagine a world lacking the profit motive without having any alternative system of allocation and governance. Nothing stable could exist in such a world. Some of the alternative systems clearly can’t produce software, but we’ve hardly been building software globally for long enough to have a good idea of which ones can, what kinds they can, or how well they can do it (which is a strong argument for the status quo).

                      As far as “made without the profit motive” goes, Sci-Hub and the Internet Archive are both pretty neat and useful (occupying different points on the legal-in-most-jurisdictions spectrum). I quite like Firefox, too.

                  2.  

                    “Capitalism” is a big thing which makes it difficult to talk about sensibly, and it’s not clear what the alternatives are. That said, many of the important aspects of the internet were developed outside of commercial considerations:

                    • DARPA was the US military

                    • Unix was a budget sink because AT&T wasn’t allowed to go into computing, so they just shunted extra money there and let the nerds play while they made real money from the phone lines

                    • WWW was created at CERN by a guy with grant money

                    • Linux is OSS etc.

                    A lot of people got rich from the internet, but the internet per se wasn’t really a capitalist success story. At best, it’s about the success of the mixed economy with the government sponsoring R&D.

                    On the hardware side, capitalism does much better (although the transistor was another AT&T thing and NASA probably helped jumpstart integrated circuits). I think the first major breakthrough in software that you can really peg to capitalism is the post-AlphaGo AI boom, which was waiting for the GPU revolution, so it’s kind of a hardware thing at a certain level.

                    1.  

                      I still disagree, but man it’s nice to just discuss this sort of thing without the name-calling and/or brigading (or worse) you see on most of the toobs. This sort of thing is pretty rare.

                    2. 2

                      Obviously not the OP, but observe the difference between the growth in the scale and distribution of computing power, and what has been enabled, over the last 40 years.

                      Business processes have been computerized and streamlined, entertainment has been computerized, and computerized communications, especially group communications like Lobsters or other social media, have arrived. That’s not nothing, but it’s also nothing that wasn’t imaginable at the start of that 40-year period. We haven’t expanded the computer as a bicycle of the mind - consider simply the lack of widespread use of the power of the computer in your hand to create art. I put that lack of ambition down to the need to intermediate, monetize, and control everything.

                      1.  

                        And additionally, the drive to push down costs means we have much less blue-sky research and ambition; it also means that things are done only to the level where they’re barely acceptable. We see that right now with the security situation: everything is about playing whack-a-mole quicker than the hackers, rather than investing in either comprehensive ahead-of-time security practices or in software that is secure by construction (whatever that would look like).

                    3.  

                      What I used to tell them is that it’s basically a theory that says each person should be as selfish as possible: always trying to squeeze more out of others (almost always money/power), give less to others (minimize costs), and put as many problems on them as possible (externalities).

                      The first directly leads to all kinds of evil, damaging behavior. There’s any number of schemes like rip-offs, overcharging, lockin, cartels, disposable over repairable, etc. These are normal, rather than the exception.

                      The second does the same every time cost-cutting pressure forces the company to damage others. I cite examples with food, medicine, safety, false career promises, etc. They also give less to stakeholders: fewer people get rewarded, and they get rewarded less for the work they put in. You can contrast this with utilitarian companies like Publix, which gives employees benefits and private stock while the owners still got rich, or with companies that didn’t immediately lay off workers during recessions. An easy one most can relate to is bosses, especially executives, paid a fortune to do almost nothing for the company versus the workers.

                      Externalities affect us daily. They’re often a side effect of the other two. Toxic byproducts of industrial processes are a well-known one. Pervasive insecurity of computers, from data loss to crippling DDoSes to infrastructure at risk, is almost always an externality, since the damage is someone else’s problem while preventing it would be the supplier’s. You see how apathy is built in when the solution is permissively licensed, open-source, well-maintained software and they still don’t replace vulnerable software with it.

                      Note: Another angle, using the game of Monopoly, was how first movers or just lucky folks got an irreversible, predatory advantage over others. Arguing to break that up is a little harder, though.

                      So, I used to focus on those points, illustrate alternative corporate/government models that do better, and suggest using/tweaking everything that already worked. Meanwhile, counter the abuse at the consumer level by voting with your wallet, passing sensible regulations anywhere capitalist incentives keep causing damage, and hitting them in court with bigger damages than prevention would have cost. Also, if going to court, I recommend showing how easy or inexpensive prevention would have been and asking that the court basically order it. Ask them to define the reasonable, professional standard as not harming stakeholders in as many cases as possible.

                      Note: Before anyone asks, I don’t have the lists of examples anymore, or they’re just inaccessible. A side effect of damaged memory is I gotta keep using it or I lose it.

                  1. 15

                    This is exactly what I was thinking when more and more stuff was pushed into the USB-C stack.

                    Previously it was kinda easy to explain that “no, you can’t put the USB cable in the HDMI slot, that won’t work”. Now you have cables that look identical, but one can charge at 90W and the other can’t, even though both fit. It’s going to be confusing for everyone, having to be careful about which cable can be plugged in where.

                    1. 9

                      Everything about the official naming/branding used in USB 3 onward seems purposely designed to be confusing.

                      1. 5

                        It seems like for some reason the overriding priority was making the physical connector the same, but it’s fine to run all kinds of incompatible power and signals through it. I preferred the old way of giving different signals different connectors so you knew what was going on!

                        1. 2

                          The downside to that, I guess, is that a small form factor device such as a phone or a slim laptop would need to carry a connector for every type of device you might want to connect to it, or alternatively you would need dongles left and right.

                          I can now charge my phone with my laptop charger, that has not been the case in previous generations.

                          I believe we are moving into POE-enabled network cable territory on some conceptual level; data+power (either optional) is the level of abstraction that the connector is made common on.

                          1. 4

                            I’m surprised the business laptop manufacturers haven’t tried getting into PoE based chargers, considering most offices have not just Ethernet, but PoE, and it’d solve two cables at once.

                            1.  

                              I’d love to know what you’re basing this thesis on because as far as I know I’ve never worked in a German office with PoE in the last 20 years. (Actually it was a big deal in my last company because we had PoE-powered devices, so there was kind of a “where and in which room do we put PoE switches for the specialty hardware”)

                              1.  

                                Most offices nowadays have PoE if only for deskphones.

                          2. 1

                            It’s fine if you have a manufacturer that you can trust to make devices that work with everything (USB, DP, TB, PD, etc.) the cable can throw at it. (Like, my laptop will do anything a Type C cable can do, so there’s no confusion.) The problem is once you get less scrupulous manufacturers of the JKDSUYU variety on Amazon et al, the plan blows up spectacularly.

                          3. 1

                            When the industry uses a bunch of mutually-incompatible connectors for different types of cables, tech sites complain “Ugh, why do I need all these different types of cables! It’s purposely designed to be overcomplex and confusing!”

                            When the industry settles on one connector for all cable types, tech sites complain “Ugh, how am I supposed to tell which cables do which things! It’s purposely designed to be overcomplex and confusing!”

                            1. 4

                              Having the same connector but incompatible cable is much worse than the alternative.

                              1. 3

                                The alternative is that every use case develops its own incompatible connector to distinguish its particular power/data rates and feature set. At which point you need either a dozen ports on every device, or a dozen dongles to connect them all to each other.

                                This is why there have already been well-intentioned-but-bad-idea-in-practice laws trying to force standardization onto particular industries (like mobile phones). And the cost of standardization is that not every cable which has the connector will have every feature of every other cable that currently or might in the future exist.

                              2. 1

                                They could’ve avoided this by either making it obvious when one cable or connector doesn’t support the full set or simply disallowing any non full-featured cables and connectors. Have fun buying laptops and PCs while figuring out how many of their only 3 USB-C connections are actually able to handle what you need, which of them you can use in parallel for stuff you want to use in parallel and which of them is the only port that can actually do everything but is also reserved for charging your laptop. It’s a god damn nightmare and makes many laptops unusable outside some hipster coffee machine.

                                Meanwhile I’m going to buy something that has a visible HDMI, DP, LAN and USB-A connector, so I’m not stranded with either charging, mouse connection, external display or connecting my USB3 drive. It’s enraging.

                                1. 1

                                  or simply disallowing any non full-featured cables and connectors

                                  OK, now we’re back on the treadmill, because the instant someone works out a way to do a cable that can push more power or data through, we need a new connector to distinguish from already-manufactured cables which will no longer be “full-featured”. And now we’re back to everything having different and incompatible connectors so that you either need a dozen cables or a dozen dongles to do things.

                                  Or we have to declare an absolute end to any improvements in cable features, so that a cable manufactured today will still be “full-featured” ten years from now.

                                  There is no third option here that magically lets us have the convenience of a universal connector and always knowing the cable’s full capabilities just from a glance at the connector shape, and ongoing improvements in power and data transmission. In fact for some combinations it likely isn’t possible to have even two of those simultaneously.

                                  It’s a god damn nightmare and makes many laptops unusable outside some hipster coffee machine.

                                  Ah yes, it is an absolute verifiable objective fact that laptops with USB-C ports are completely unsuitable and unusable by any person, for any use case, under any circumstance, in any logically-possible universe, ever, absolutely and without exception.

                                  Which was news to me as I write this on such a laptop. Good to know I’m just some sort of “hipster coffee” person you can gratuitously insult when you find you’re lacking in arguments worthy of the name.

                                  1. 1

                                    Ah yes, it is an absolute verifiable objective fact that laptops with USB-C ports are completely unsuitable and unusable by any person, for any use case, under any circumstance, in any logically-possible universe, ever, absolutely and without exception.

                                    You really do want to make this about yourself, don’t you? I never said you’re not allowed to have fun with them, I’m just saying that for many purposes those machines are pretty bad. And there are far too many systems produced now with said specs, so it’s becoming a problem for people with a different use case than the one you have: more connections, fewer dongles or hubs, and the requirement that you know about the specific capabilities before buying.

                                    Have fun explaining to your family why model X doesn’t actually do what they thought, because their USB-C is just USB 2.0. Why their USB-C cable doesn’t work even though it looks the same, why there are multiple versions of the same connector with different specs, why one USB-C port doesn’t mean it can do everything the port right beside it can do. Why there is no way to figure out if a USB-C cable can actually handle a 4k60 display before trying it out. Even for 1000+€ models that you might want to use with an external display, mouse, keyboard, headset, charging and some YubiKey, you get 3 USB-C connections these days. USB-C could’ve been something great, but now it’s an RNG for what you actually get. Some colored cables and requirements around labeling the capabilities would have already helped a lot.

                                    Yes I’m sorry for calling it hipster in my rage against the reality of USB-C, let’s call it “people who do not need many connections (2+ in real models) / like dongles or hubs / do everything wireless”. Which is someone commuting by train, going to lectures or whatnot. But not me when I’m at home or at work.

                                    This is where I’m gonna mute this thread, you do not seem to want a relevant conversation.

                            2. 1

                              Yeah, I was thinking this too. Though even then we were already starting to get into it with HDMI versions.

                            1. 1

                              Talk about a Sublime Text ripoff! Pascal is certainly an inspired choice to write it in though.

                              1. 3

                                why a “ripoff”, rather than the more usual “open source alternative to”? the cudatext author never tried to hide the fact that it was inspired by sublime text and there is even a link to the list of things he improved on the home page.

                                1. 2

                                  Eh, code editors are a tale as old as time, I don’t think it’s really fair to call any of them ripoffs. Sublime is really just a notepad.exe ripoff 😉

                                1. 10

                                  I feel they emphasize the wrong things to make modular; I think the battery is the most important by far. Everything else is likely not worthwhile due to signalling changes/bottlenecking/etc., but batteries are perishable and the most important thing for portability. It’d be great if you could easily put in some 18650s without soldering and have it Just Work.

                                  1. 10

                                    The battery is replaceable, although internal so you have to open it up. It is a bit odd to me that that isn’t called out explicitly in their marketing, but I did find this thread: https://community.frame.work/t/framework-team-why-did-you-choose-to-make-the-battery-internal/1187/3

                                    which is about why it’s internal vs. external, but one of their employees confirms that it is at least replaceable (it is internal mainly for space savings; there’s a genuine design trade-off there).

                                    1. 7

                                      I don’t really care about being able to hotswap batteries - that’s a stupid parlour trick. What matters more is if you can get new (not NOS, since those decay) batteries that fit. Prismatic batteries are an essential compromise, but they make this much harder.

                                      1. 2

                                        True; I think making standardized form factors for prismatic batteries is important future work if this kind of thing is going to take off.

                                        1. 2

                                          We kinda have this for smaller devices - many Sony, Nokia, and Samsung batteries became de facto standards for things like wireless keyboards.

                                    2. 2

                                      18650s are way, way too fat for “ultrabooks”. There should be some kind of new thin-but-big battery standard…

                                      1. 8

                                        Honestly, I’ve never seen the appeal in ultrabooks. Oh, it’s thin? That’s nice. My mechanical keyboard is at an excellent height, and that’s rather more than an inch off of the table.

                                        What matters is weight and mechanical stiffness. If the user can pick it up, open, by one corner, and not get flexing, there’s nothing wrong with the physical specs.

                                        1. 9

                                          Size and weight matter. I don’t want it to weigh much when it’s in my bag or even carting it around desks, but thickness is underrated in terms of “can I hold it around my arms easily?” and “can I fit more stuff into my luggage?”

                                          I will say my MBA is much thinner and lighter than my old X230t, yet is much more physically stiff. Old ThinkPads are bendier than people remember.

                                        2. 7

                                          Sure but that’s just another way of saying “ultrabook is the wrong form factor for a device that prioritizes long life” if you ask me.

                                          1. 4

                                            Well… the form factor is just the bigger priority for lots of people, myself included. For long life, max power, upgradeability, and all the other good things I have a big beefy desktop already. I don’t need a laptop that competes with the desktop, I need a laptop I can take anywhere easily — it must occupy minimum weight and space in my backpack, should be light enough to carry around in one hand.

                                            1. 2

                                              I understand the argument about weight, but what kind of backpack do you have that you can’t fit a non-ultrabook laptop in it? Maybe you should buy a bigger bag instead of a less useful laptop.

                                              1. 2

                                                “Less useful” for you, maybe. For him, it’s the ideal compromise.

                                      1. 16

                                        It’s pretty absurd you need to hire a programmer to develop a simple CRUD application.

                                        In college, they tasked us with developing a backroom management solution for a forestry college. They were using Excel (not even Access!). One day, the instructor told us we weren’t the first - we were the second, maybe even third attempt at getting programmers to develop a solution for them. I suspect they’re still using Excel. Made me realize that maybe letting people develop their own solutions is a better and less paternalistic option if it works for them.

                                        Related: I also wonder if tools like Dreamweaver or FrontPage were actually bad, or if they were considered a threat to low-tier web developers who develop sites for like, county fairs…

                                        1. 12

                                          Made me realize that maybe letting people develop their own solutions is a better and less paternalistic option if it works for them.

                                          There’s also a related problem that lots of people in our field underestimate: domain expertise. The key to writing a good backroom management solution for a forestry college is knowing how a forestry college runs.

                                          Knowing how it runs will help you write a good management solution, even if all you got is Excel. Knowing everything there is to know about the proper way to do operator overloading in C++ won’t help you one bit with that. Obsessing about the details of handling inventory handouts right will make your program better, obsessing about non-type template parameters being auto because that’s the right way to do it in C++-17 will be as useful as a hangover.

                                          That explains a great deal about the popularity of tools like Excel, or Access, or – back in the day – Visual Basic, or Python. It takes far less time for someone who understands how forestry colleges run to figure out how to use Excel than it takes to teach self-absorbed programmers about how forestry colleges run, and about what’s important in a program and what isn’t.

                                          It also doesn’t help that industry hiring practices tend to optimise for things other than how quickly you catch up on business logic. It blows my mind how many shops out there copycat Google and don’t hire young people with nursing and finance experience because they can’t do some stupid whiteboard puzzles, when they got customers in the finance and healthcare industry. If you’re doing CRM integration for the healthcare industry, one kid who worked six months in a hospital reception and can look up compile errors on Google can do more for your bottom line than ten wizkids who can recite a quicksort implementation from memory if you wake them up at 3 AM.

                                          Speaking of Visual Basic:

                                          I also wonder if tools like Dreamweaver or FrontPage were actually bad, or if they were considered a threat to low-tier web developers who develop sites for like, county fairs…

                                          For all its flaws in terms of portability, hosting, and output quality, FrontPage was once the thing that made the difference between having a web presence and not having one, for a lot of small businesses that did not have the money or the technical expertise to navigate the contracting of development and hosting a web page in the middle of the Dotcom bubble. That alone made it an excellent tool, in a very different technical landscape from today (far less cross-browser portability, CSS was an even hotter pile of dung than it is today and so on and so forth).

                                          Dreamweaver belonged in a sort of different league. I actually knew some professional designers who used it – the WYSIWYG mode was pretty cool for beginners but the way I remember, it was a pretty good tool all around. It became less relevant because the way people built websites changed.

                                          1. 4

                                            It also doesn’t help that industry hiring practices tend to optimise for things other than how quickly you catch up on business logic. It blows my mind how many shops out there copycat Google and don’t hire young people with nursing and finance experience because they can’t do some stupid whiteboard puzzles, when they got customers in the finance and healthcare industry. If you’re doing CRM integration for the healthcare industry, one kid who worked six months in a hospital reception and can look up compile errors on Google can do more for your bottom line than ten wizkids who can recite a quicksort implementation from memory if you wake them up at 3 AM.

                                            I’ve been meaning to write about my experiences in community college (it’s quite far removed from the average CS uni experience of a typical HN reader; my feelings are complex about it), but to contextualize:

                                            • Business analysts were expected to actually talk to the clients and refine the unquantifiable “we want this” into FRs/NFRs for the programmers to implement.

                                            • Despite this, programmers weren’t expected to be unsociable bugmen in a basement who crank out code, but to also be able to understand and refine requirements, and even talk to the clients themselves. In practice, though, I didn’t see much action in this regard; we used the BAs as a proxy most of the time. They did their best.

                                            1. 2

                                              I’m pretty torn on the matter of the BA + developer structure, too (which has somewhat of a history on this side of the pond, too, albeit through a whole different series of historical accidents).

                                              I mean on the one hand it kind of makes sense on paper, and it has a certain “mathematical” appeal that one would be able to distill the essence of some complex business process into a purely mathematical, axiomatic model, that you can implement simply in terms of logical and mathematical statements.

                                              At the same time, it’s very idealistic, and my limited experience in another field of engineering (electrical engineering) mostly tells me that this is not something worth pursuing.

                                              For example, there is an expectation that an EE who’s working on a water pumping installation does have a basic understanding of how pumps work, how a pumping station operates and so on. Certainly not enough to make any kind of innovation on the pumping side of things, but enough to be able to design an electrical installation to power a pump. While it would technically be possible to get an “engineering analyst” to talk to the mechanical guys and turn their needs into requirements on the electrical side, the best-case scenario in this approach is that you get a highly bureaucratic team that basically designs two separate systems and needs like twenty revisions to get them hooked up to each other without at least one of them blowing up. In practice, it’s just a lot more expedient to teach people on both sides just enough about each others’ profession to get what the other guys are saying and refine specs together.

                                              Obviously, you can’t just blindly apply this – you can’t put everything, from geography to mechanical engineering and from electrophysiology to particle physics, in a CS curriculum, because you never know when your students are gonna need to work on GIS software, SCADA systems, medical devices or nuclear reactor control systems.

                                              But it is a little ridiculous that, 80 years after the Z3, you need specially-trained computer programmers not just in order to push the field of computing forward (which is to be expected after all, it’s computer engineers that push computers forward, just like it’s electrical engineers who push electrical engines forward), but also to do even the most basic GIS integration work, for example. Or, as you said, to write a CRUD app. This isn’t even close to what people had in mind for computers 60 years ago. I’m convinced that, if someone from Edsger Dijkstra’s generation, or Dijkstra himself were to rise from the grave, he wouldn’t necessarily be very happy about how computer science has progressed in the last twenty years, but he’d be really disappointed with what the computer industry has been doing.

                                              1. 2

                                                I mean, the biggest reason why salesforce is such a big deal is that you don’t need a programmer to get a CRUD app. They have templates covering nearly every business you could get into.

                                                1.  

                                                  Their mascot literally used to be a guy whose entire body was the word “SOFTWARE” in a crossed-out red circle: https://www.gearscrm.com/wp-content/uploads/2019/01/Saasy1.jpg

                                            2. 2

                                              FWIW I was neck deep in all of that back in the day. Nobody I knew looked down on Dreamweaver with any great enthusiasm, we viewed it as a specialised text editor that came with extra webby tools and a few quirks we didn’t like. And the problem with FrontPage was never that it lets noobs make web pages, just the absolute garbage output it generated that we would then have to work with.

                                            3. 3

                                              re: “related” — hmm, these days services like Squarespace and Wix are not really considered bad, and it’s not uncommon for a web developer to say to a client they don’t want to work with: “your project is too simple for me, just do it yourself on Squarespace”. I wonder what changed. The tools have, for sure — these new service ones are more structured, more “blog engine” than “visual HTML editor”, but they still do have lots and lots of visual customization. But there must be something else?

                                              1. 3

                                                I have found that things like Wix and Squarespace (or Wordpress) don’t scale very well. They work fine for a few pages that are manageable, but when you want to do more complex or repetitive things (generate a set of pages with minor differences in text or theme) they obstruct the user and cost a lot of time. A programmatic approach would then be a lot better, given that the domain is well mapped out.

                                            1. 5

                                              Wait, there’s WebDAV based email?

                                              1. 5

                                                This K-9 mail documentation page on configuring incoming WebDAV server settings seems to shed some light on this. It looks like Microsoft Exchange servers pre-2010 supported WebDAV email, but the feature has been deprecated ever since.

                                                1. 2

                                                  Huh, that must be the “HTTP” option for main server types I saw in old Outlook/OE back in the day. Exchange has supported EAS since 2003 though, so it’d be an odd choice to support.

                                                  Well, there’s also EWS, which is still supported and used by Mac Outlook and Evolution, so…

                                              1. 8

                                                To be somewhat cynical (not that that is new), the goal of the ’10s and ’20s is monetization and engagement. Successful software today must be prescriptive, rather than general, in order to direct users to the behaviors which are most readily converted into a commercial advantage for the developer.

                                                 I agree with the article, but just to play devil’s advocate, I think these halfway “it’s not programming but they’re still flexible building blocks” attempts at democratized personal computing died because they ran into two realities:

                                                1. Normal people don’t want to program, not even a little bit, because it tends to get complicated fast. GUIs are more intuitive and picking one up for its intended task is usually very fast. UX research has come a long way.
                                                 2. Programming is actually way more popular than ever, and when people do pick it up, they tend to want to go all the way, using Python, or even “enterprise” languages. And if you do just pick up Python and learn how to start doing the things you want to use it for, the world has only gotten friendlier: cheap cloud hosting, Heroku, a golden era of open source software, Raspberry Pis, and so on.
                                                1. 8

                                                   There’s also a lot more consumer computing too. People who bought computers in the 80’s were likelier to use them for business stuff, even a small business. My parents bought a computer for the first time in ~1997 solely for the internet and CD-ROMs like Encarta or games. They’d have no use for databases, and wrote with a pen instead of a word processor.

                                                   Also, as a counterpoint: as much as SaaS is reviled, it does deliver in absolving users of many responsibilities like maintenance, providing a ready-made service, and being a service instead of a liability for tax purposes.

                                                1. 31

                                                  This is cool. The commit message doesn’t really explain the security, so I’ll have a go:

                                                  Chroot is not a security tool. If you are in a chroot and run as root, you can mount devfs (on older systems, you can create device nodes) and can then mount the filesystem that contains you and escape trivially. It cannot therefore constrain a root process.

                                                  If chroot is allowed for unprivileged processes then it can be used to launch confused deputy attacks. If a privileged process runs in a chroot then it may make decisions based on the assumption that files in certain locations are only writeable by sufficiently privileged entities on the system, and may write to locations that are not where it thinks they are. Auditing everything that runs as root to ensure that this isn’t the case is difficult.

                                                  This patch provides a simple solution to both problems by denying the ability to run setuid binaries after the chroot has taken place. If you chroot, you can still run any programs that you could previously run but you can’t elevate privileges. This means that you can’t use chroot to mount a confused deputy attack using a setuid binary (you can’t run a setuid binary) and you don’t have to worry about escapes via root privileges (you can’t acquire root privileges).
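
                                                  To make that concrete, here’s a minimal sketch of the usage pattern as I understand it, FreeBSD-flavoured since that’s where this landed. The procctl constants and the security.bsd.unprivileged_chroot sysctl name are from memory rather than from this commit, so treat them as assumptions and check your headers:

                                                  ```c
                                                  /*
                                                   * Sketch only: opt out of ever gaining privileges, then chroot as an
                                                   * unprivileged user and run a tool inside the new root. The constant
                                                   * names (PROC_NO_NEW_PRIVS_CTL / PROC_NO_NEW_PRIVS_ENABLE) and the
                                                   * security.bsd.unprivileged_chroot sysctl are my recollection of the
                                                   * FreeBSD interface, not something taken from this commit.
                                                   */
                                                  #include <sys/types.h>
                                                  #include <sys/procctl.h>
                                                  #include <sys/wait.h>

                                                  #include <err.h>
                                                  #include <unistd.h>

                                                  int
                                                  main(int argc, char **argv)
                                                  {
                                                      int enable = PROC_NO_NEW_PRIVS_ENABLE;

                                                      if (argc < 3)
                                                          errx(1, "usage: %s newroot cmd [args...]", argv[0]);

                                                      /* Irrevocably give up setuid/setgid elevation for this process. */
                                                      if (procctl(P_PID, getpid(), PROC_NO_NEW_PRIVS_CTL, &enable) == -1)
                                                          err(1, "procctl");

                                                      /* With elevation off (and the sysctl enabled), chroot no longer needs root. */
                                                      if (chroot(argv[1]) == -1)
                                                          err(1, "chroot");
                                                      if (chdir("/") == -1)
                                                          err(1, "chdir");

                                                      execvp(argv[2], &argv[2]);
                                                      err(1, "execvp");
                                                  }
                                                  ```

                                                  The point is the ordering: once the process has told the kernel it can never again gain privileges, letting it chroot doesn’t open up either of the attacks above.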

                                                  In summary, this lets me do all of the things I actually want to do with chroot.

                                                  EDIT: It looks as if this is being added as part of the work to improve the Linux compat layer. I’m not sure when this was actually merged in Linux, but the patches were proposed in March, so it’s pretty recent. All of the discussion I see about it in Linux is pretty negative, which is a shame because it looks as if this feature was originally proposed for Linux in 2012 and it’s really useful.

                                                  1. 2

                                                    In summary, this lets me do all of the things I actually want to do with chroot.

                                                    What uses do you have in mind? The commits and review comments don’t seem to share any particular use cases.

                                                    1. 16

                                                      The main one is running a tool in an environment where it can’t touch the rest of my filesystem. This isn’t particularly useful for security (it still has full network access, including the loopback adaptor) but it’s great for testing programs if you want to make sure that they’re not going to accidentally trample over things. It’s also great for a bunch of things like reproducible builds, where you want to make sure that the build doesn’t accidentally depend on anything it shouldn’t, and for staging install files (you can create a thing that looks like the install location, run the install there, and test it without touching anything on the host system).

                                                      Container infrastructure can solve a bunch of these problems, but spinning up a container (even on ZFS with cheap O(1) CoW clones) is far more expensive than a chroot.

                                                      1. 1

                                                        The obvious one is that chroot would be really useful for isolating unprivileged binaries, but you have to let them become root via setuid or something first, then drop to normal privileges once they’re done chrooting. Seems kinda backwards.

                                                        1. 5

                                                          That’s already done today in a lot of applications that drop privilege. I was mainly asking about the use case for unprivileged users that can’t use chroot(2) or chroot(8) because they can’t elevate privilege.

                                                          The build system use case @david_chisnall mentions seems the most logical to me. That seems valuable.

                                                          When you start using the phrase “isolating unprivileged binaries” you’re into the realm of defending against something. And to make some software run in a chroot, you often have to start setting up the chrooted filesystem with dynamic libraries, etc. It’s a really rough tool for an arbitrary user to just use for running an arbitrary application that doesn’t expect to be chrooted.

                                                    1. 5

                                                      Recently, I’ve been thinking: there should exist a “save to drag-and-drop” option - basically, instead of:

                                                      1. selecting “save as”, and saving the file at location X
                                                      2. opening up the file manager and navigating to the location X
                                                      3. then dragging and dropping the file where you want it…

                                                      You should just be able to hit a shortcut which gives you an icon of the current file to drag+drop wherever you want it.

                                                      1. 3

                                                        RISC OS might be interesting in this regard; saving is done through drag and drop.

                                                        I wrote a Windows shell extension that might be interesting; it displays all the open file manager windows from file picker dialogs.

                                                      1. 3

                                                        This has always been a source of frustration for me. The net effect is that if I compile an executable and give it to someone, it sometimes works, and sometimes blows up with a user-hostile error message. To make things worse, googling that message can lead to pages with malware.

                                                        I’m especially annoyed that the missing runtime error is a “fuck-you” level of error handling. Even if Microsoft didn’t want to bundle all the runtimes with Windows, this error should have been “this program needs a runtime, which we’ll download and install for you right now” (see how macOS installs Rosetta).

                                                        At the very least, Microsoft could have added if (dll == MSVCR??.DLL) to show a URL to the official Microsoft site rather than let users wander all over an Internet full of sites like “DLL problems? Download our crapware!”
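
                                                        To sketch what I mean (hypothetical, obviously - this is not how the real loader behaves, and the exact message and link would be Microsoft’s call), the special case would be as trivial as:

                                                        ```c
                                                        /* Hypothetical special case for the loader's "missing DLL" error:
                                                         * recognise the Visual C++ runtime DLLs by name and point the user
                                                         * at the official redistributable instead of a bare error code.
                                                         * The DLL name patterns are real; everything else is made up for
                                                         * illustration. */
                                                        #include <stdio.h>
                                                        #include <string.h>

                                                        static int is_vc_runtime(const char *dll)
                                                        {
                                                            /* e.g. MSVCR120.dll, MSVCP140.dll, VCRUNTIME140.dll */
                                                            return strncmp(dll, "MSVCR", 5) == 0 ||
                                                                   strncmp(dll, "MSVCP", 5) == 0 ||
                                                                   strncmp(dll, "VCRUNTIME", 9) == 0;
                                                        }

                                                        int main(void)
                                                        {
                                                            const char *missing = "VCRUNTIME140.dll"; /* pretend the loader reported this */

                                                            if (is_vc_runtime(missing))
                                                                printf("%s is part of the Microsoft Visual C++ runtime.\n"
                                                                       "Install the official redistributable from microsoft.com.\n",
                                                                       missing);
                                                            else
                                                                printf("The program can't start because %s is missing.\n", missing);
                                                            return 0;
                                                        }
                                                        ```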

                                                        1. 2

                                                          They learned how to do just that with the .NET Framework. You can distribute a single small exe targeting .NET Framework 4.7, for example, and it will run on updated systems immediately. On older ones you get a message: “this requires a newer framework, do you want to install it now?” This is a great approach.

                                                          For DLLs I can imagine it’s a bit harder because it needs to hand over to some downloader and it’s not necessarily true that a name match means you want that runtime… but they could surely try a little bit harder.

                                                          1. 1

                                                            They kinda unlearned that with .NET Core though; there the runtime gets bundled with the app instead.

                                                        1. 3

                                                          I saw the “pdf” tag and was simply going to skip it. PDFs are so hard to read that I rarely find them worth the bother, but the comments made me curious. Of course it is really hard to use, but the one bit that amused me is on page 7: it says PDF has feature creep but you can just ignore it, and then on the very same page, says you can’t just ignore HTML’s feature creep. OK.

                                                          But that does bring me to a nice thing: I do legitimately like being able to say “on page 7” (though I’d often still say “halfway down page 7”). In HTML documents, sometimes I’ll say things like “about 3/4 down the page, under the ‘backwards to the future’ header”, but it isn’t quite as nice. Ironically, I’d probably paper (lol) this over with some JavaScript on the web, making a numeric scroll position indicator.

                                                          I also agree with the immutable aspect, to some extent. What I like to do is indeed avoid edits, and if I do make one, clearly mark when and where an edit was made. I also agree all documents should have a date on them. It is often frustrating to me when it is hard to figure out when something was written - the date is part of the context of the article.

                                                          1. 4

                                                            There was a mini-trend of “purple hashes” on blogs ca. 2005 where each paragraph had an auto-generated HTML anchor so you could easily link to a section on a paragraph basis. Like so many other early aughts blogging trends it was way too twee to catch on, but I’ve since then become friends with the <a name> tag and kind of miss it.

                                                            When I need to reference a section on another page unambiguously I cite it verbatim - it both gives a context to my comment, as well as making it relatively easy for someone else to search the page for the text. Copying and pasting from PDFs is fraught, though.

                                                            1. 3

                                                              IIRC, browsers will treat any id attribute like <a name>; that kills two birds (CSS and anchors) with one stone. Chrome has an extension to link directly to unmarked text, too.

                                                          1. 0

                                                            What’s the point of this? Most of the unique features of zOS are only really useful or interesting if you’re running it on a mainframe, which this person isn’t doing. 90% of the blogpost is the person trying to get a copy of it in the first place, and talking about code licensing bullshit.

                                                            I don’t see why anyone would go through this trouble except out of curiosity, but as far as I can tell for ‘normal’ use it’s basically just a unix box with some quirks, which along with the earlier licensing BS makes it seem like a lot of effort for very little gain – compare with running something like 9front where it’s a mostly unique system and you can acquire the entire thing for free without much effort.

                                                            Can someone explain why this is useful / interesting to do?

                                                            1. 2

                                                              What’s the point of this?

                                                              It makes the OP happy. What other justification does he need?

                                                              1. 1

                                                                  OK, that’s cool. But this is a guide on installing it, and he doesn’t really give me a reason to do any of that. He said part of his reason for installing it is to pass the knowledge on to the next generation, but he utterly fails to give any kind of reason why this is worthwhile knowledge to pass on if you’re not literally working as a sysadmin on Wall Street.

                                                              2. 1

                                                                They say why; it’s the first sentence:

                                                                Some people retire and buy an open top sports car or big motorbike. Up here in Orkney the weather can change every day, so instead of buying a fast car with an open top, when I retired, I got z/OS running on my laptop for the similar sort of price! This means I can continue “playing” with z/OS and MQ, and helping the next generation to use z/OS. At the end of this process I had the software installed on my laptop, many unwanted DVDs, and a lot of unnecessary cardboard

                                                                1. 1

                                                                  Calling z/OS a “unix box with quirks” is underselling it extremely. It’s quite a bizarre OS branch people know little about, but that’s because IBM has no hobbyist program and you only see it if you’re basically MIS at a Fortune 500.

                                                                  I don’t think there’s too much other than licensing bullshit in the OP either (it’s thin otherwise); he’d be better off using literally anything but z/PDT for hyucks.

                                                                  1. 1

                                                                    Calling z/OS a “unix box with quirks” is underselling it extremely.

                                                                    And yet neither this blogpost, nor the wikipage for the operating system does anything to disabuse me of this notion, and there doesn’t seem to be any feature of this that is useful for someone running it on something that isn’t a mainframe.

                                                                    I don’t think there’s too much other than licensing bullshit in the OP either (it’s thin otherwise); he’d be better off using literally anything but z/PDT for hyucks.

                                                                    yup

                                                                1. 1

                                                                  I think most people who are mainframe curious either…

                                                                  1. Run the last public domain versions (or enhanced versions) of the OSes to get a feeling for them (or run newer versions, but that’s sketch)

                                                                  2. Are better off running something like i instead, which gives you the IBM mainframe aesthetics in a far easier to use and more manageable tower/4U server. (edit: Or Unisys’ hobbyist program, which is very reasonable. But it’s not IBM…)

                                                                  z/PDT is awfully expensive to run current generation mainframe stuff; it’s priced like you’re actually using it for your job. (edit: And nowadays at least for z, IBM does provide i.e cloud instances for people to fuck around and find out about z/OS on…)

                                                                  1. 1

                                                                    i

                                                                    i.e

                                                                    IBM really needs to work on naming.

                                                                  1. 2

                                                                    Back in 2000, AMD included cmov in its 64-bit x86 ISA extensions. Then, Intel had to adopt them when Itanium flopped.

                                                                    The first sentence is technically true, in the sense that AMD64 did include cmov, but the instruction was originally introduced in 1996 by Intel with the Pentium Pro, and became widespread with the Pentium II.

                                                                    1. 11

                                                                      The Alpha had a conditional move as well. RISC-V doesn’t have one. It’s a very interesting microarchitectural trade-off.

                                                                      On a low-end core, a conditional move can require an extra port on the register file (this was why it was such a large overhead on the Alpha): it requires you to read three values: the condition (cheap if you have condition codes, a full register if you don’t), the value of the source register to be conditionally moved, and the value of the destination register that may need to be written back if the condition is false.

                                                                      On a high-end core, you can fold a lot of the behaviour into register rename and you already have a lot of read ports on rename registers so conditional move isn’t much overhead.

                                                                      In both cases, conditional move has a huge effect on the total amount of branch predictor state that you need. You can get away with significantly less branch predictor state with a conditional move than without and get the same performance - a modern compiler can transform a phenomenal number of branches into conditional moves. The total amount varies between pipeline designs (it’s been years since I measured this, but I think it was about 25% less on a simple in-order pipeline, more on an out-of-order one).

                                                                      Once you have sufficient speculative execution that branch predictor performance matters, conditional move becomes incredibly important for whole-system performance. For x86 chips, it would probably start to make sense around the Pentium, given a modern compiler. Compilers in the ’90s were much less good at if conversion than they are now, so it may not have made as much difference when it was introduced in the Pentium Pro as it would on an equivalent pipeline today.

                                                                      Arm went all-in on predication from the first processor and so managed without any branch prediction for much longer than other processors. It’s much easier for a compiler to do if conversion if every instruction is conditional than if you just have a conditional move. AArch64 dialled this back because compilers are now much better at taking advantage of conditional move and microarchitects really hate predicated loads (compiler writers, in contrast, really love them, especially for dynamic languages).
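
                                                                      For anyone who hasn’t watched a compiler do if conversion, a hedged sketch (not from the article): both functions below compute the same thing, and a modern compiler at -O2 will usually lower them to a cmov on x86 or a csel on AArch64 rather than a branch, though it’s free not to.

                                                                      /* Branchy form: executed literally, the CPU has to predict the comparison. */
                                                                      int max_branch(int a, int b) {
                                                                          if (a > b)
                                                                              return a;
                                                                          return b;
                                                                      }
                                                                      
                                                                      /* Select form: compute both candidates and pick one; this shape maps
                                                                         directly onto a conditional move. */
                                                                      int max_select(int a, int b) {
                                                                          return a > b ? a : b;
                                                                      }
                                                                      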

                                                                      1. 2

                                                                        Or the fact AMD64 came out a few years later than that…

                                                                      1. 4

                                                                        Seeing this makes me wish XMPP was more of a contender.

                                                                        1. 5

                                                                          Is Docker a hard requirement? I’d love to see an article on getting Mailu running on FreeBSD or HardenedBSD. But if Docker’s a hard requirement, then software portability’s a no-go.

                                                                          1. 1

                                                                            I think Mailu is only available as a set of Docker images, based on their documentation.

                                                                            1. 13

                                                                              It saddens me when developers choose Docker, which in reality is an “open source vendor lock-in” tool. Enforcing the use of Docker places arbitrary limits on how the project can be used.

                                                                              Software monocultures are bad.

                                                                              1. 6

                                                                                Assuming:

                                                                                1. Maybe Docker takes full advantage of Linux-specific features like cgroups, so porting it (and the Dockerfiles that depend on it) would be difficult
                                                                                2. Maybe Docker can instead mostly be generalized to jails, but the tooling doesn’t exist (and good luck on OSes without any such concept, like OpenBSD!)
                                                                                3. The alternative is a pile of shell scripts that mutate a system in place; welcome to suffering

                                                                                Why should they bend over backwards to make deployment harder and less maintainable, especially when Docker has significant mindshare already? This seems like the kind of portability request that maintainers don’t like to hear; it feels like kneecapping yourself for a small few. (I say this as someone who ports software to weird platforms professionally and runs FreeBSD on a server.)

                                                                                Disclaimer: I’ve used Docker (well, podman) once, and only because the alternative was installing Db2 myself.

                                                                                1. 6

                                                                                  I generally don’t like to use Docker even on Linux because it trades off the little bit of complexity of running serverd and setting up cgroups yourself vs. the massive beast that is Docker and its 90M dockerd that needs to run as root and does all sorts of automagic stuff. dockerd is exactly the sort of thing I would put in a container, ironically.

                                                                                  runc is okay though, and what I use if I really want to run a container image.

                                                                                  1. 1

                                                                                    Docker services are also like the blue/red “functions” we’ve had here previously. Have fun trying to talk from a docker container to a mariadb instance on your host: you’ll end up making the mariadb server a container too, just so you don’t have to deal with changing the docker-owned iptables rules and questions like “what is the IP address of my container/host”. And if you’re running a good proxmox cluster you’ll already have a multitude of VMs to isolate services, but you don’t want a host with a VM with a docker container in it, just because somebody decided that’s the more fun way to deploy.

                                                                                    You can “easily” package something debian-native into docker, but the other way around is annoying as hell. Even worse: most of the time such a project may not even have any kind of docs about what you need if you’re trying to set this up by yourself. I’ve been the producer of such a project once; it’s still annoying as hell for people that do not just stuff everything into docker.

                                                                                2. 2

                                                                                  Docker images are less locked in than you think (or than this comment makes it look). The actual image format is an open specification and can be converted or used directly with other container orchestration tools.

                                                                                  1. 1

                                                                                    Except if you don’t want any fancy orchestration tools because you’re running a bunch of VMs with KVM & Ceph as an HA cluster.

                                                                                    1. 1

                                                                                      Sure, but I said less locked in, not “Not locked in at all”.

                                                                                    That said, I wonder how hard it would be to convert a Dockerfile or an OCI image into a VM image… Maybe there’s even something out there already.

                                                                                  2. 1

                                                                                    From my point of view, I’d build something like this in Docker because… well, I know how to use Docker, everyone I’ll share my project with knows how to use Docker. I’m actually not well versed in the alternatives. What would you recommend someone to use who wants to set up a project like Mailu, that isn’t Docker?

                                                                                    1. 3

                                                                                      I’m primarily a BSD user, where Docker’s not supported. I haven’t run Linux in any serious capacity in well over a decade. I get that projects like this have a bunch of moving parts. Shell scripts and configuration management tools can help simplify things.

                                                                                      If I wanted to use Mailu on the BSDs, I’d have a lot of headaches ahead of me. And, even then, I might fail if there’s dependencies on other Linux-only projects (for example: SystemD).

                                                                                      1. 2

                                                                                        I’m not a fan either but in this case it solves a problem. “A set of n packages (not sure if they are all packaged for this OS version) with m configs that all play well together.”

                                                                                        There’s also sovereign which solves the same problem for one host OS version via an ansible playbook, but even if your OS of choice has all the packages, most seem to lack this “I want a dedicated set of configs”.

                                                                                        1. 2

                                                                                          A set of n packages (not sure if they are all packaged for this OS version)

                                                                                          I’m a huge fan of helping out with the package system. For me, if the project in question doesn’t exist in the package repo, the project won’t get used at all. Everything must reside in the package repo for me. If it’s not there, then I add it.

                                                                              1. 2

                                                                                For me, this was the most interesting part of the article:

                                                                                The installation process is very fast in both cases, it took around 29 seconds for the Core and around 76 for the Full one.

                                                                                While there are major differences, that’s a very far cry from Windows.

                                                                                1. 1

                                                                                  Don’t know about the article yet, but I’ve just noticed an oddity in the linked GitHub repo: they provide instructions for building on Ubuntu 18.04, but you need to include rpm. What gives? Onwards into the article I guess!

                                                                                  1. 1

                                                                                    It looks like Anaconda, so I’m assuming it’s something RH shaped. But then building on Ubuntu would be odd.

                                                                                    (I’m assuming the fast install time is the fact there’s probably not much in it.)

                                                                                    1. 1

                                                                                      WSL runs Ubuntu by default, that’s my best guess. But Fedora / RHEL has more security features, that’s probably why they use it for the produced server image.

                                                                                      1. 1

                                                                                        Yes, the article mentioned they based the initial project on Fedora spec files, so that’s where they started I guess.

                                                                                  1. 0

                                                                                    ….K&R syntax?

                                                                                    1. 2

                                                                                      I had a feeling my limited C experience would expose itself somehow! Thanks for the feedback. The K&R syntax comes from me copying and pasting from Bash’s source code which often still uses the K&R syntax. I have updated the examples with the modern function definition syntax.

                                                                                      1. 3

                                                                                        I had a feeling my limited C experience would expose itself somehow!

                                                                                        In the spirit of education, a few nits:

                                                                                        char *sleep_doc[] = {"Patience please, wait for a bit!", (char *)NULL};
                                                                                        

                                                                                        NULL is just (void *)0, and

                                                                                        A pointer to void can be implicitly converted to and from any pointer to object type…

                                                                                        so it doesn’t need to be explicitly cast to char *.
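
                                                                                        That is, the line can simply read:

                                                                                        char *sleep_doc[] = {"Patience please, wait for a bit!", NULL};
                                                                                        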

                                                                                        return (EX_USAGE);
                                                                                        

                                                                                        EX_USAGE is just a plain old number, so you don’t need any parentheses. In general, macros should include their own parentheses (unless you have a truly sadistic coding style, or are doing something funny with blocks).
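
                                                                                        For instance, with a hypothetical macro (not one of bash’s), the parentheses live in the definition so call sites never need them:

                                                                                        #define ANSWER (40 + 2)   /* hypothetical macro: the parentheses belong here */
                                                                                        
                                                                                        int x = ANSWER * 2;       /* expands to (40 + 2) * 2 == 84, not 40 + 2 * 2 == 44 */
                                                                                        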

                                                                                        sleep(secs);
                                                                                        

                                                                                        Note that sleep can fail, so you might want to stick this in a loop (yes, I know this was probably abbreviated for the sake of example, but you handled all the other errors).
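
                                                                                        A sketch of what that loop might look like, assuming secs fits in an unsigned int:

                                                                                        unsigned int left = secs;
                                                                                        
                                                                                        /* sleep() returns the number of seconds remaining if it was interrupted
                                                                                           by a signal, so keep going until the whole interval has elapsed. */
                                                                                        while (left > 0)
                                                                                            left = sleep(left);
                                                                                        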

                                                                                        struct builtin sleep_struct = {
                                                                                            "sleep",         /* Builtin name */
                                                                                            sleep_builtin,   /* Function implementing the builtin */
                                                                                            BUILTIN_ENABLED, /* Initial flags for builtin */
                                                                                            sleep_doc,       /* Array of long documentation strings. */
                                                                                            "sleep NUMBER",  /* Usage synopsis; becomes short_doc */
                                                                                            0                /* Reserved for internal use */
                                                                                        };
                                                                                        

                                                                                        Note that with C99, you can use designator expressions to initialize your structs like

                                                                                        struct builtin sleep_struct = {
                                                                                            .name      = "sleep",         /* Builtin name */
                                                                                            .function  = sleep_builtin,   /* Function implementing the builtin */
                                                                                            .flags     = BUILTIN_ENABLED, /* Initial flags for builtin */
                                                                                            .long_doc  = sleep_doc,       /* Array of long documentation strings. */
                                                                                            .short_doc = "sleep NUMBER",  /* Usage synopsis; becomes short_doc */
                                                                                        };
                                                                                        

                                                                                        but, alas, it appears that bash does not use this style.

                                                                                        (variable_context > 0) && (global_vars == false)
                                                                                        

                                                                                        Note that && has lower precedence than both > and ==, so this would typically be written

                                                                                        variable_context > 0 && global_vars == false
                                                                                        

                                                                                        In addition, I believe that although variable_context is an int, it should always be positive or zero (see e.g. execute_function). And any comparison to zero like x == 0 may be replaced by !x, so this could also be rewritten like

                                                                                        variable_context && !global_vars
                                                                                        
                                                                                        SHELL_VAR *toc_var = NULL;
                                                                                        

                                                                                        Note that if you really are writing in K&R style, then declarations should come before assignments.
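
                                                                                        i.e., a quick sketch of the K&R-style arrangement:

                                                                                        SHELL_VAR *toc_var;    /* declarations first, at the top of the block */
                                                                                        
                                                                                        toc_var = NULL;        /* assignments afterwards */
                                                                                        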

                                                                                        char *sep = "_";
                                                                                        size_t sec_size = strlen(toc_var_name) + strlen(section) + strlen(sep) +
                                                                                                          1; // +1 for the NUL character
                                                                                        char *sec_var_name = malloc(sec_size);
                                                                                        char *sec_end = sec_var_name + sec_size - 1;
                                                                                        char *p = memccpy(sec_var_name, toc_var_name, '\0', sec_size);
                                                                                        if (!p) {
                                                                                          builtin_error("Unable to create section name");
                                                                                          return 0;
                                                                                        }
                                                                                        p = memccpy(p - 1, sep, '\0', sec_end - p + 2);
                                                                                        if (!p) {
                                                                                          builtin_error("Unable to create section name");
                                                                                          return 0;
                                                                                        }
                                                                                        p = memccpy(p - 1, section, '\0', sec_end - p + 2);
                                                                                        if (!p) {
                                                                                          builtin_error("Unable to create section name");
                                                                                          return 0;
                                                                                        }
                                                                                        

                                                                                        This is not very idiomatic. A more typical formulation would be

                                                                                        static const char fmt[] = "%s_%s";
                                                                                        size_t sec_size = snprintf(NULL, 0, fmt, toc_var_name, section) + 1; /* +1 for the NUL character */
                                                                                        char *sec_var_name = xmalloc(sec_size);
                                                                                        
                                                                                        snprintf(sec_var_name, sec_size, fmt, toc_var_name, section);
                                                                                        

                                                                                        Note also the use of xmalloc, which bash appears to use a lot, and which just dies if you run out of memory. And of course, C++ style comments are far too modern for bash ;)

                                                                                        ini.so: libinih.o ini.o
                                                                                        	$(CC) -o $@ $^ $(LDFLAGS)
                                                                                        
                                                                                        sleep.so: sleep.o
                                                                                        	$(CC) -o $@ $^ $(LDFLAGS)
                                                                                        

                                                                                        Note that you can reduce this to something like

                                                                                        %.so: %.o
                                                                                        	$(CC) -o $@ $^ $(LDFLAGS)
                                                                                        
                                                                                        ini.so: libinih.o
                                                                                        

                                                                                        and similarly, you can reduce

                                                                                        libinih.o: inih/ini.c
                                                                                            $(CC) $(CFLAGS) $(INIH_FLAGS) -o libinih.o inih/ini.c
                                                                                        

                                                                                        to

                                                                                        inih/ini.o: CFLAGS += $(INIH_FLAGS)
                                                                                        

                                                                                        (but of course you have to update your dependencies).

                                                                                        1. 4
                                                                                          (variable_context > 0) && (global_vars == false)
                                                                                          

                                                                                          Note that && has lower precedence than both > and ==, so this would typically be written

                                                                                          variable_context > 0 && global_vars == false

                                                                                          For the sake of readability I would keep the parentheses. Precedence rules are complicated.

                                                                                          Everybody understands the version with parentheses. Not everybody (especially novices) can correctly parse the version without parentheses.

                                                                                          1. 1

                                                                                            Precedence rules are complicated.

                                                                                            Not really. The precedence rules are specifically designed to make writing expressions like the above easy. The only time you really need parentheses is for bitwise operations, which have lower precedence than comparisons for some reason, or when you want to mix operators with similar precedence. That is, I would not write

                                                                                            a && b || c
                                                                                            

                                                                                            instead of

                                                                                            (a && b) || c
                                                                                            

                                                                                            even though they are equivalent. However, it reduces the burden on the reader to use fewer parentheses where possible.
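
                                                                                            To make the bitwise exception above concrete (FLAG here is a hypothetical flag bit, purely for illustration):

                                                                                            #define FLAG 0x4
                                                                                            
                                                                                            /* == binds tighter than &, so this parses as x & (FLAG == 0), i.e. x & 0,
                                                                                               which is always zero -- not the intended test. */
                                                                                            int flag_clear_wrong(int x) { return x & FLAG == 0; }
                                                                                            
                                                                                            /* What was meant: mask first, then compare. */
                                                                                            int flag_clear_right(int x) { return (x & FLAG) == 0; }
                                                                                            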

                                                                                          2. 1

                                                                                            In the spirit of education, a few nits:

                                                                                            Thanks for the careful code review…

                                                                                            A pointer to void can be implicitly converted to and from any pointer to object type… so it doesn’t need to be explicitly cast to char *.

                                                                                            fixed, I was unaware, thanks.

                                                                                            return (EX_USAGE);

                                                                                            EX_USAGE is just a plain old number, so you don’t need any parentheses. In general, macros should include their own parentheses (unless you have a truly sadistic coding style, or are doing something funny with blocks).

                                                                                            fixed thanks

                                                                                            sleep(secs);

                                                                                            Note that sleep can fail, so you might want to stick this in a loop (yes, I know this was probably abbreviated for the sake of example, but you handled all the other errors).

                                                                                            I was trying to handle all errors, so thanks for spotting this, fixed.

                                                                                            Note that with C99, you can use designator expressions to initialize your structs like

                                                                                            struct builtin sleep_struct = {
                                                                                                .name      = "sleep",         /* Builtin name */
                                                                                                .function  = sleep_builtin,   /* Function implementing the builtin */
                                                                                                .flags     = BUILTIN_ENABLED, /* Initial flags for builtin */
                                                                                                .long_doc  = sleep_doc,       /* Array of long documentation strings. */
                                                                                                .short_doc = "sleep NUMBER",  /* Usage synopsis; becomes short_doc */
                                                                                            };
                                                                                            

                                                                                            but, alas, it appears that bash does not use this style.

                                                                                            Those are nice, fixed

                                                                                            (variable_context > 0) && (global_vars == false)

                                                                                            Note that && has lower precedence than both > and ==, so this would typically be written

                                                                                            variable_context > 0 && global_vars == false

                                                                                            In addition, I believe that although variable_context is an int, it should always be positive or zero (see e.g. execute_function). And any comparison to zero like x == 0 may be replaced by !x, so this could also be rewritten like

                                                                                            variable_context && !global_vars

                                                                                            This last form seems really nice, so I switched to it.

                                                                                            SHELL_VAR *toc_var = NULL;

                                                                                            Note that if you really are writing in K&R style, then declarations should come before assignments.

                                                                                            I was actually trying to write in a modern style, but I still have quite a bit to learn about what all that entails. The K&R bits slipped in through my copy pasta ignorance.

                                                                                            char *sep = "_";
                                                                                            size_t sec_size = strlen(toc_var_name) + strlen(section) + strlen(sep) +
                                                                                                              1; // +1 for the NUL character
                                                                                            char *sec_var_name = malloc(sec_size);
                                                                                            char *sec_end = sec_var_name + sec_size - 1;
                                                                                            char *p = memccpy(sec_var_name, toc_var_name, '\0', sec_size);
                                                                                            if (!p) {
                                                                                              builtin_error("Unable to create section name");
                                                                                              return 0;
                                                                                            }
                                                                                            p = memccpy(p - 1, sep, '\0', sec_end - p + 2);
                                                                                            if (!p) {
                                                                                              builtin_error("Unable to create section name");
                                                                                              return 0;
                                                                                            }
                                                                                            p = memccpy(p - 1, section, '\0', sec_end - p + 2);
                                                                                            if (!p) {
                                                                                              builtin_error("Unable to create section name");
                                                                                              return 0;
                                                                                            }
                                                                                            

                                                                                            This is not very idiomatic. A more typical formulation would be

                                                                                            static const char fmt[] = "%s_%s";
                                                                                            size_t sec_size = snprintf(NULL, 0, fmt, toc_var_name, section) + 1; /* +1 for the NUL character */
                                                                                            char *sec_var_name = xmalloc(sec_size);
                                                                                            
                                                                                            snprintf(sec_var_name, sec_size, fmt, toc_var_name, section);
                                                                                            

                                                                                            Note also the use of xmalloc, which bash appears to use a lot, and which just dies if you run out of memory. And of course, C++ style comments are far too modern for bash ;)

                                                                                            I really struggled with the right way to do string concatenation with modern C. I tried using snprintf but clang-tidy flags it with this warning:

                                                                                            • “Call to function ‘snprintf’ is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as ‘snprintf_s’ in case of C11”

                                                                                            Which led me to this blog post:

                                                                                            The post advocates for using memccpy, which I thought was worth trying, though I found the API hard to love.

                                                                                            ini.so: libinih.o ini.o
                                                                                              $(CC) -o $@ $^ $(LDFLAGS)
                                                                                            
                                                                                            sleep.so: sleep.o
                                                                                              $(CC) -o $@ $^ $(LDFLAGS)
                                                                                            

                                                                                            Note that you can reduce this to something like

                                                                                            %.so: %.o
                                                                                              $(CC) -o $@ $^ $(LDFLAGS)
                                                                                            
                                                                                            ini.so: libinih.o
                                                                                            

                                                                                            and similarly, you can reduce

                                                                                            libinih.o: inih/ini.c
                                                                                                $(CC) $(CFLAGS) $(INIH_FLAGS) -o libinih.o inih/ini.c
                                                                                            

                                                                                            to

                                                                                            inih/ini.o: CFLAGS += $(INIH_FLAGS)

                                                                                            (but of course you have to update your dependencies).
                                                                                            

                                                                                            I still have much to learn about creating concise Makefiles; thanks for the advice, fixed.

                                                                                            Thanks again for the careful feedback.

                                                                                            1. 2

                                                                                              Have you considered using asprintf for allocating strings?
                                                                                              This removes the need to compute the length, allocate memory, and copy individual strings.
                                                                                              The bash source’s autoconf script checks for asprintf so there aren’t any compatibility issues.
                                                                                              I think it would be something like the following untested code.

                                                                                              asprintf(&sec_var_name, "%s%s%s", toc_var_name, sep, section);
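
                                                                                              In case it’s useful, a sketch with the failure case handled (asprintf is a GNU/BSD extension, hence the autoconf check; it returns -1, and on glibc leaves the pointer undefined, when it fails):

                                                                                              char *sec_var_name = NULL;
                                                                                              
                                                                                              if (asprintf(&sec_var_name, "%s%s%s", toc_var_name, sep, section) < 0) {
                                                                                                  builtin_error("Unable to create section name");
                                                                                                  return 0;
                                                                                              }
                                                                                              /* ... use sec_var_name, then free(sec_var_name) when finished ... */
                                                                                              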
                                                                                              
                                                                                              1. 1

                                                                                                I had not, though that seems like a good option. I wonder if the C standards org ever considered including them in the next revision.

                                                                                                1. 1

                                                                                                  Other than malloc(), calloc() and realloc(), I don’t recall any standard C function [1] that will allocate memory, and I suspect that’s intentional.

                                                                                                  [1] I haven’t fully read through the C11 standard, so there may be new functions that do allocate memory as part of their function.

                                                                                                  1. 1

                                                                                                    yeah, that is evidently the rub; they mentioned that difficulty in the review of snprintf_s and related functions.

                                                                                                    1. 1

                                                                                                      I think str(n)dup is being added for C23.

                                                                                                      1. 1

                                                                                                        good point, it seems they are open to some functions which allocate memory (https://en.wikipedia.org/wiki/C2x); I wonder what the rationale is on which ones are allowed.

                                                                                                2. 2

                                                                                                  Don’t let clang-tidy scare you off from using snprintf(). All snprintf_s() does is error out at runtime if the format string is NULL or any string passed in for a “%s” specifier is NULL and may restrict the size of the output buffer [1]. The code Forty_Bot provided is fine.

                                                                                                  [1] The size parameter for snprintf_s() is of type rsize_t, which may not exceed RSIZE_MAX (which may be smaller than SIZE_MAX, the maximum of size_t).

                                                                                                  1. 1

                                                                                                    Thanks for the additional info, I wish clang-tidy had more information on its warnings, something akin to shellcheck’s SCXXXX codes and wiki would be much appreciated.

                                                                                              2. 1

                                                                                                Ah! Classic GNU.