1. 16

    Helpful colleagues and time.

    But also the ability to examine & run bits in isolation, ideally on the desktop workstation.

    1. 21

      yes and no?

      The software engineering industry learned nothing.

      If you hire a couple people and give them a tiny budget and an impossible deadline, you probably should not look elsewhere when the project flames out. It’s a good lesson for political operators and an object lesson in project management and funding.

      Engineers could have foretold this result without even blinking. So the software engineering industry, qua software engineers, knows what it is doing; they didn’t need to learn anything, and this article is a zero-entropy source. Experienced executives often know the score here too, btw - they tend to have a good feel for what makes a satisfactory tradeoff in their industry of bugs/features/delivery/risk (higher than engineers would like, but it’s still there).

      there is a large literature on software management, quality, planning, etc.

      and basically if you hire 4 people (3 super junior), underpay them, and put them on an accelerated timeline, you should not expect to succeed.

      this is known.

      we the industry are not bad at this.

      we are not paid to do it right, and most of us really care about paying the rent.

      ed: we are bad at other things. but this is just boring old project management failure.

      1. 1

        vs 2000?

        • information.

        • faster computers.

        • much more F/LOSS

        • accessible programming languages

        1. 8

          Love the article. Gotta call this one out, though:

          “But who has time for that? We haven’t seen new OS kernels in what, 25 years? It’s just too complex to simply rewrite by now. “

          That’s BS. OpenBSD is the longest-running proof; DragonflyBSD is a more recent one. Minix 3 kind of halted, but got massive deployment by Intel. The Solaris 10 rewrite cost almost $300 million but had huge benefits. Microsoft made their own implementation of Linux that will be important for their long-term competitiveness. Lots of smaller projects such as Genode and Redox OS are ongoing.

          They all prove you definitely can rewrite or clean-slate some of this software. Whether you should is another matter. For legacy rewrites, modern tooling for code analysis/transformation and equivalence testing makes it easier than it used to be, too.

          1. 12

            TempleOS should probably be on that list too.

            1. 6

              Definitely. It’s both an example of clean-slate and in its own category altogether.

              The bigger slip was not mentioning the Oberon System(s). Heck, that effort even started when Wirth was shown a personal computer at Xerox PARC, wasn’t allowed to buy one, and then (the Lilith project) they made their own hardware/OS/language/everything.

            2. 3

              seL4 too, as an OS kernel.

              the way I see it is that there was an evolutionary explosion and then collapse for the personal computer market.

              1. 2

                That’s a good point on PC market. That brings us to MorphOS and Haiku of Amiga and BeOS legacies. Maybe throw ReactOS in there for Windows.

                1. 2

                  Yeah. Looking backwards in time from today produces sort of a narrowed vision of what actually occurred, since a lot of options failed.

                  I went to White Sands Missile Range’s public aircraft museum once - it’s this small place far from anywhere in New Mexico. There are a ton of designs from the 40s-60s that are wild. For whatever reason, they didn’t pan out. You’d have to be an educated person in the field to actually know they existed, because they are not directly inferable from the designs you see on an airfield today.

                  The IoT/embedded field has a much wider variety right now than the PC market. Phones, oddly, don’t. I’m not entirely sure why that is, myself. Best guess is locked bootloaders and a focus on consumers rather than developers.

                  1. 2

                    Phones, oddly, don’t

                    It could be because everything around EM emissions is so fraught with regulation that it’s simpler to build on existing, approved systems. Even if these are technically outside the phone’s operating system itself, it’s non-trivial to write a driver for, say, a UMTS chip in an OS that’s not “popular”.

            1. 4

              metacomment: Fascinating to see lobsters taken over by people who are fully homo economicus, and equate good software with business value.

              The answer is really to be looking to create legal liabilities for defective software. If the software doesn’t perform to spec, civil and even criminal lawsuits should follow. Otherwise you’re in a barely controlled race to the bottom for commodity goods.

              1. 2

                As a first-order analysis tool when examining human behavior, the homo economicus view is generally very helpful. Care must be taken to see it as descriptive, not prescriptive.

                create legal liabilities for defective software.

                Another approach to creating better quality software would be to enforce a professional standard on software developers (akin to lawyers, MDs, and engineers in some jurisdictions). It’s quite possible that enforcing legal sanctions would quickly lead to such a situation.

                In both cases, it would be more expensive and time consuming to develop software.

                1. 1

                  In both cases, it would be more expensive and time consuming to develop software.

                  Given that software is so intertwined with our lives, I think not being able to deliver ultra-cheap junk to users is perfectly fine.

                2. 1

                  create legal liabilities for defective software

                  How authoritarian.

                  1. 1

                    much like legal liabilities for engineering, yes, definitely. /sarcasm

                1. 3

                  I am reminded of CloudFormation.

                  The right solution for CloudFormation has been to use something like troposphere (https://github.com/cloudtools/troposphere), because declarative languages do not have adequate expressive power for expressing the needs of the system; they are excellent descriptors, but systems demand more expressivity which, if a template system is used, has to be rebaked in at other levels.
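
                  As a rough illustration of that point, here is a minimal sketch assuming troposphere’s Template/ec2 helpers; the resource names, AMI id, and subnet list are made up for the example:

                  # Hedged sketch: generate CloudFormation from ordinary Python instead of a
                  # declarative template. Resource names, AMI id, and subnets are hypothetical.
                  from troposphere import Tags, Template, ec2

                  t = Template()

                  # A plain loop stamps out one instance per subnet - the kind of
                  # expressivity a purely declarative template has to reinvent elsewhere.
                  for i, subnet_id in enumerate(["subnet-aaa", "subnet-bbb", "subnet-ccc"]):
                      t.add_resource(
                          ec2.Instance(
                              f"Worker{i}",
                              ImageId="ami-0123456789abcdef0",
                              InstanceType="t3.micro",
                              SubnetId=subnet_id,
                              Tags=Tags(Role="worker"),
                          )
                      )

                  print(t.to_json())  # plain CloudFormation JSON for the usual tooling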

                  With k8s, if I was starting anew, I’d be looking at using Helm’s ability to describe an application, then writing my own object graph that would render into k8s as needed, leveraging helm’s versioning/rollback.

                  1. 5

                    I suspect that most discrimination that happens against older programmers is not, in fact, age discrimination, but is, in fact, ‘wisdom discrimination’.

                    Rang true to me.

                    1. 12

                      weak men … already know you won’t find time to watch it

                      My stopwatch estimates I spent a minute and thirty-eight seconds reading this blog post - roughly one sixtieth of the length of Blow’s talk. My time and my attention are far more valuable than Yet Another Frigging Talk/Podcast. Fortunately, OP implicitly recognized this and wrote text, the appropriate medium for serious thinking and concepts.

                      That said.

                      Software is a house of cards because our economic system does not reward or prize proper reinvention, and American consumers reject with passion (remember Windows 8’s rollout?) any substantial change in how their world works, demonstrating a lived conservatism about how their tools work.

                      The entire stack is built on these notions of backwards compatibility, teetering on ad hoc processors and systems from the 70s. Then the Web is rolled in, and now we’re building on a language designed in 2 weeks, just to make the monkey dance.

                      Yet, it makes money, reliability is tolerable, and profits continue flowing.

                      To really fix the situation, you’re looking at scrapping (in the end) everything from the x86 interface on up, and butchering so many sacred cows it’d be a revolution in the religion of software dogma. Costs for the end consumer & businesses would, in the medium term, probably rise 10-50x, since no more commodity would exist for the system.

                      1. 8

                        our economic system does not reward or prize proper reinvention … Costs for the end consumer & businesses would, in the medium term, probably rise 10-50x, since no more commodity would exist for the system.

                        Not such a rousing case for “proper”. As always, when people look at legacy, they see the ugly surface and ignore the deep value below the surface.

                        Certainly the pile of hacks is nasty and I don’t like it, but to wag one’s finger about “proper” ways and claim that a working system is a failure of economics(?) makes no sense.

                        butchering so many sacred cows

                        Legacy is the opposite of a sacred cow, it actually provides value, and is detested rather than worshipped. The Proper Way is the sacred cow, a false ephemeral idol that yields vaporware and lofty claims.

                        By the way, proper engineering considers real-world constraints including time/financial budget, effort vs payoff (leverage), and, yes, existing investments.

                        Meanwhile https://urbit.org has actually done what you suggest: reimplement the entire stack. Have you tried it? Or too busy on the legacy stack? :)

                        1. 0

                          I’m familiar with the dark enlightenment’s creation, thanks.

                          The Proper Way is the sacred cow, a false ephemeral idol that yields vaporware and lofty claims.

                          oh, go away.

                          ed: That is demonstrably false; a genuine false centrism that propagates terrible ideas and prioritizes legacy in the name of value. I simply refuse to engage with that kind of ahistoricity and bad philosophy of science.

                          1. 3

                            prioritizes legacy in the name of value.

                            If value isn’t a worthwhile goal, you may excuse the passers-by on your street corner for being confused about your meaning.

                      1. 29

                        I worked at large companies with user-facing products similar to what the author referenced - not Apple, but Skype, Microsoft and Uber. I worked on, or closely observed, the teams behind OSes like Xbox One and Windows 8, similar to the macOS example. I think the example is overly dramatic.

                        In software - especially with BigTechCo - the goal is not to ship bug-free software. It’s to ship software that supports the business goal. Typically this means gaining new customers and reducing user churn. Ultimately, the end goal of publicly traded companies is to provide shareholder value. And Apple is damn good at this, generating $50B in profit on $240B in revenues per year.

                        All the examples in this blog post are ones that won’t result in user churn. The Catalina music app having a buggy section? Big deal, it will be fixed in the next update. The Amazon checkbox issue? Same thing: it will be prioritised and fixed sometime. They are all side-features with low impact. The team might have known about it already. Or - more likely - this team did not spend budget on thorough testing, as what they were building isn’t as critical as some other features.

                        The Skype app was full of smaller bugs like this, yet it dominated the market for a long time. When it failed, it was not for this. The Catalina team likely spent resources on making sure booting was under a given threshold and updates worked flawlessly. Things that - if they go wrong - could lead to losing customers or gaining fewer new ones. So things that would directly impact revenue.

                        Finally, a (very incorrect) fact:

                        Lack of resources? This is Apple, a company that could’ve hired anyone in the world. There are probably more people working on Music player than on the entire Spotify business. Didn’t help.

                        This is plain false and very naive thinking. Teams are small at Apple and the music player team for Catalina is likely 10-15 people or less, based on my experience. Apple could hire an army for an app like this, but then they would not be the $1T company they are today. They have that valuation because they are very good at consistently generating high profits: for every $10 of revenue, they generate $2 of profit. They hire the number of people needed to make a good enough product and don’t spend money just because they have it.

                        What did change is that Apple used to have a huge budget for manual testers: it was insane compared to other companies. Due to rationalising - publicly traded company and all - the testing department is likely smaller for non-critical apps. Which puts them in line with, or slightly above, the rest of their competitors.

                        I am not saying that bugs in software are great. But consumer-facing software development is more about iteration, speed and launching something good enough, than it is about perfection. It’s what makes economic sense. For other industries, like air travel or space, correctness is far more important, and it comes at the expense of speed and iteration.

                        It’s all trade-offs.

                        1. 12

                          It’s to ship software that supports the business goal.

                          This is really the fundamental miss of the author. Author doesn’t understand that (1) crap happens and (2) the level of quality required for a satisficed user is lower than he thinks.

                          Teams are small at Apple and the music player team for Catalina is likely 10-15 people or less, based on my experience.

                          Also, I can’t lay my hands on a citation, but I think it’s been studied that smaller teams produce better quality software (up to a point, ofc).

                          1. 3

                            All the examples in this blog post are ones that won’t result in user churn. The Catalina music app having a buggy section? Big deal, it will be fixed in the next update. The Amazon checkbox issue? Same thing: it will be prioritised and fixed sometime. They are all side-features with low impact. The team might have known about it already. Or - more likely - this team did not spend budget on thorough testing, as what they were building isn’t as critical as some other features.

                            The inability to open the iTunes Store might be bad for sales, so they’ll probably want to fix that one. But yes, as long as the basic features are working, these bugs are fine, on some level. This is how it is.

                            I think he is trying to highlight something on a more fundamental level: it should not be so easy to write these kinds of bugs. The developers should have to go out of their way to write them. But with the tools they have been given, it seems they have to work very hard to avoid writing bugs. It is like they have been given hammers that by their nature have a tendency towards hitting thumbs and sometimes manage to hit both your thumbs at the same time.

                            Let’s turn it around. Suppose software had a fundamentally good and auspicious nature. Suppose also that your product owner was a tricky fellow who wanted to add some bugs in your program. He comes up with a user story: as a user, sometimes I want to have an item be selected, but not highlighted, so as to further my confusion. I think the result of this would be a commit with some kind of conditional statement, table-driven code or perhaps an extra attribute on the list items that activates the bug path. The point being that you would need to add something to make it buggy. With the tools the Catalina music app team had, they very likely did not have to add anything at all to get those bugs.

                            The instances of bugs he brings up suggest to me that the tools involved were not used for their intended purpose. They were used to create simulacrum software. The Amazon checkboxes probably get their state from a distributed system where they “forgot” to handle multiple pending state changes. They could instead have used a design where this would never be an issue at all. If it had been designed properly, they would indeed have needed to add code to get it that buggy. And the buggy list items are probably not even in lists, but merely happen to sometimes visually resemble lists. And so on.

                            It is not good that this sort of thing happens regularly. One example from my own experience: the Swedish Civil Contingencies Agency (MSB) has an app that alerts you to important events. I cannot count how many times it has lost its settings and has defaulted to alerting about everything that happens everywhere. I have uninstalled that app. When the war arrives, I’ll be the last one to know.

                            1. 4

                              Teams are small at Apple and the music player team for Catalina is likely 10-15 people or less, based on my experience.

                              Yes, this accords with my experience. I would be surprised if it were that many people; the number of people who were working on the iTunes client was shockingly small, and they’d never grow the team just for Music.

                              1. 3

                                Based on my experience at Apple, I’d be surprised if the Music app had an actual team. Much more likely a few folks from another team were tasked with creating it as a part-time project that wasn’t their primary focus. Or it could’ve been two fairly junior folks tasked with writing it with occasional assistance.

                                In my experience teams and projects were sorely understaffed and underfunded (unless it was wasteful projects like the doomed self-driving car, in which case they were showered with far too much money and people). It was just shocking to work at a company that had so much excess cash and couldn’t “afford” to add people to projects that could really use them.

                              2. 2

                                All the examples in this blog post are ones that won’t result in user churn. The Catalina music app having a buggy section? Big deal, it will be fixed in the next update. The Amazon checkbox issue? Same thing: it will be prioritised and fixed sometime. They are all side-features with low impact. The team might have known about it already. Or - more likely - this team did not spend budget on thorough testing, as what they were building isn’t as critical as some other features.

                                One bug like this would not make me “churn”, but two or three would. I no longer use Chrome, nor iTunes, nor iOS, because of exactly this type of drop in quality. I no longer even bother with new Google products, because I know that they’re more likely than not to be discontinued and dropped without support. I no longer consider Windows because of all the dark patterns.

                                I am on the bleeding edge relative to less technical users, but I am also a lazy software dev, meaning I hate tinkering just to make something work. I’ll probably go with GNU for my next computer. And a year or two later, I bet so will my neighbor who just uses email and my friend who just needs to edit photos and my other friend who just writes papers.

                                1. 7

                                  I no longer use Chrome, nor iTunes, nor iOS, because of exactly this type of drop in quality . . . I hate tinkering just to make something work.

                                  I’ll probably go with GNU for my next computer.

                                  🤨

                                  1. 1

                                    Not sure if you intended for that to show up as “missing Unicode glyph”, but it works.

                                    You’ve got a point there.

                                    Until now, I have been using macOS for the hardware support, a few niche apps for stuff like playing music, and a GNU VM (Fedora LXDE) for dev which has proven to be a low-maintenance setup all around.

                                    1. 2

                                      The “missing Unicode glyph” is the Face with one eyebrow raised emoji and shows up for me using Chrome on iOS and Windows.

                                      1. 2

                                        And FF on Android

                              1. 2

                                This has to do with judgement, maturity, and learning. Someone has to judge when to go deep and when to go broad.

                                One day, Python will be the COBOL or Java of its day. Unwanted, unpopular. There will be meaningfully better tools.

                                But if you don’t know what you’re doing in your current system of choice, you’re going to have a bad time.

                                Knowledge gives you the edge; wisdom lets you know where to cut.

                                1. 1

                                  This has to do with judgement, maturity, and learning. Someone has to judge when to go deep and when to go broad.

                                  Yes, and good judgement stems, at least in part, from maturity.

                                  When I formed the destructive habit I document in the article, I was very young and fresh out of the chute. Only decades later can I see the big picture for what it is.

                                  Knowledge gives you the edge; wisdom lets you know where to cut.

                                  Makes me think of “Measure twice. Cut once.” :)

                                1. 1

                                  The tension between power users who love digging in and using their tools vs people who just want a tool to be a thing without observing its subtlety rears its head again.

                                  1. 2

                                    travis, gitlab for hobby work.

                                    jenkins for day job.

                                    if I was looking for a long term day job thing, I’d be aimed at tekton.

                                    1. 6

                                      A major reason I haven’t gotten into security, despite my continued professional interest and engagement with it, is that I see much of industry security as either snake oil or “very fancy crypto”, with effective security not something sought after or sold.

                                      When you can have major user identity breaches and there are no consequences except “tiny pay out for identity theft security”, really, why bother working in that area? >.<

                                      1. 4

                                        From my perspective as a software tools guy, Python 3 has never been useful. I don’t use Unicode at all, as a rule. I’m using it now when I do Python, because Py2 is going out of date, particularly the libraries. But it’s thoroughly an uninteresting shift that has not brought me any value.

                                        1. 16

                                          You don’t really control if your programs use unicode, your users do. Things like people’s names, URLs, filenames, addresses, etc will contain unicode.

                                          1. 1

                                            You don’t really control if your programs use unicode, your users do. Things like people’s names, URLs, filenames, addresses, etc will contain unicode.

                                            These have been irrelevant for my work so far. My interactions with unicode have purely been unrelated to work.

                                            1. 1

                                              None of my users use Unicode. And I don’t use it. It’s nice to be future ready when one finally does. But I don’t think it’s a driving reason to change a bunch of stuff.

                                              I use python3 because that’s when I started using stuff. I think it’s good to design for Unicode, but if I had started with 2, I would have stayed there.

                                              I have little snippets of java and JavaScript and whatever that still run 20 years later. I would be annoyed if they stopped working because of something I viewed as not required.

                                              1. 5

                                                I keep track of every submission to Hacker News, Lobste.rs, and /r/Programming, and based on the titles alone, the percentages of titles with codepoints higher than 127 are:

                                                • HN: 15.97%
                                                • Lobste.rs: 5.34%
                                                • Proggit: 7.20%

                                                All of these sites are explicitly English-speaking and all are based in the US, but they still have a significant amount of Unicode in their content.
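
                                                (For what it’s worth, that kind of check is nearly a one-liner; a minimal sketch, with a made-up list of titles standing in for the real data:)

                                                # Hedged sketch: count titles containing codepoints above 127.
                                                # The titles list here is hypothetical, not the actual dataset.
                                                titles = ["Why software is slow", "Ünïcode everywhere", "Plain ASCII title"]
                                                non_ascii = [t for t in titles if any(ord(ch) > 127 for ch in t)]
                                                print(f"{100 * len(non_ascii) / len(titles):.2f}%")  # -> 33.33%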

                                                1. 2

                                                  None of my users use Unicode. And I don’t use it. It’s nice to be future ready when one finally does. But I don’t think it’s a driving reason to change a bunch of stuff.

                                                  Maybe it’s not for you, but in general unicode in Python 2 was a fertile source of bugs. I regularly encountered it in both other people’s programs and my own. Even though I tried to do the right thing, it was often hard and simple mistakes slipped in.

                                                  And yeah, I agree it’s annoying and that for some people it’s just useless “churn” and that sucks :-( But for a lot of us there is real value, too.
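
                                                  The classic failure mode, as a hedged sketch (the byte string below is made up for illustration): in Python 2, bytes and text mixed silently and only blew up when non-ASCII input finally arrived; Python 3 makes the decode step explicit at the boundary.

                                                  # Hedged illustration of the bytes/text boundary Python 3 makes explicit.
                                                  # In Python 2, str + unicode would implicitly decode as ASCII and raise
                                                  # UnicodeDecodeError only once non-ASCII input showed up at runtime.
                                                  raw = "Łukasz sent ‘curly quotes’".encode("utf-8")  # bytes, e.g. read from a socket

                                                  # Python 3 refuses to mix bytes and str (raw + "!" is a TypeError),
                                                  # so the decode has to happen, visibly, at the boundary:
                                                  text = raw.decode("utf-8")
                                                  print(text.upper())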

                                              2. 2

                                                it’s thoroughly an uninteresting shift

                                                Doch, Unicode ist unglaublich nützlich! (On the contrary, Unicode is incredibly useful!)

                                              1. 2

                                                This looks like the real deal for a real tablet computer.

                                                I’m ready to buy it - once it’s shipping (I know how prevalent manufacturing issues are with new product lines, hehe).

                                                1. 2

                                                  Excellent write-up.

                                                  I think the other key benefit is in process management/governance related to the multirepo “explosion of small repos” problem. That’s a process problem generated by a technical design.

                                                  The author’s remark that it’s not really a small/midsize project problem is on point as well. It will probably be either “open source” qua Kubernetes (i.e., essentially a big corp sharing IP, but publicly), or proprietary.

                                                  ClearCase has some absolutely brilliant ideas, hidden behind years of neglect and designs for the late 80s. :-)

                                                  1. 2

                                                    At root, this gets at whether you’re a technician, an engineer, or a scientist, along with the level of play one brings to the arena. The author is arguing primarily for the technician/no-play perspective; a high focus on utilitarianism and some deprecation of general learning.

                                                    They are not wrong about how misleading C, assembly, and the simplified systems can be, though. The author is, I think, targeting early-career people, who don’t have the context to understand the deeper why of the systems they are interacting with.

                                                    tidbit: my compilers class was in 2005, and it was taught in C, using flex/yacc as the lexer/parser. Compilers classes hadn’t all phased out of C by the mid-90s. This was a terrible disservice to the students. We should have used, e.g., Perl.

                                                    1. 4

                                                      Ruby is similar to Perl in that it suffers from “too many ways to do something”. Take for example “append to array”:

                                                      a3.push(n1, n2)
                                                      a4.concat(a1, a2)
                                                      a5.insert(-1, n1, n2)
                                                      a6 = [*a6, n1, n2]
                                                      a7 = a7 + a1 + a2
                                                      a8 << n1
                                                      

                                                      Do we really need 6 ways to do this? Compare with Go, which has one method (or two, depending on how you count):

                                                      a2 = append(a2, n1, n2)
                                                      a3 = append(a3, a1...)
                                                      

                                                      Another example is “invoke function”, where Ruby has at least 5 methods:

                                                      n1 = f1(20)
                                                      n2 = f2.call(20)
                                                      n3 = 20.then(&f2)
                                                      n4 = method(:f1).call(20)
                                                      s1 = 20.send(:to_s)
                                                      

                                                      Also, Go is a compiled language and Ruby is interpreted. I can’t see a situation where one replaces the other. They are used (or should be used) for different things.

                                                      1. 2

                                                        Something about the semantics and stylistics of Go draws Python and Ruby people to it. I don’t personally understand it, but I have witnessed it.

                                                        1. 11

                                                          The draw is how frictionless the development feels. In the same way that Python and Ruby feel frictionless, Go also feels frictionless. The benefit is that Go tends to stay frictionless through the maintenance period as well, which is a property Ruby and Python have tended not to have over time.

                                                          1. 2

                                                            In comparison to Python: it excels at the things Python is bad at. Deployment is excellent. It’s easy to cross-compile. It’s easy to compile your application to one statically linked executable. The performance is excellent for things Python is not fast enough to be used for. You get types, so creating larger applications that don’t collapse under their own weight is much easier than in Python. You can use types in Python, and write more tests than code, but large projects in Python are still painful. And you get the advantage of not having an ex-co-worker who, four years ago, wrote a function that generates classes, so that now you have to dig five layers deep for every line of source code you wish to understand completely. Or so I heard, from a friend.

                                                            Go has minimalistic syntax and makes everything that has to do with other programmers much easier.

                                                        1. 39

                                                          An interesting parallel, which works for the author, but doesn’t hold universally. Go and Ruby are fundamentally different.

                                                          Unlike Rust where the compiler constantly shouts “fuck you” even though you are trying to do your best to serve their majesty and the rules they dictate, Ruby never gets in your way.

                                                          This sentence is bad in many ways. First of all: it is toxic. Maybe error messages weren’t as useful when the author tried Rust as they are nowadays, but I doubt that. Failure to understand WHY things fail should be a priority for any developer. And having things actually fail loudly as early as possible should be considered a huge benefit. And then saying “Ruby never gets in your way” is plainly wrong. I would let “never gets in my way” slide, of course.

                                                          1. 23

                                                            “Ruby never gets in your way”

                                                            Ruby gets in the way, but later on. Whether it’s the rare exception that must be tracked down or the large refactor that demands a whole-system understanding, Ruby is not any less in the way than Rust or Java; it gets in the way of getting things done, but in a different manner and at different times.

                                                            1. 8

                                                              Ruby gets in the way, but later on.

                                                              Very much this. I’m sure it’s possible in other languages but I’ve never seen people deliberately create ungreppable methods in any other idiom, such as

                                                              %i(foo bar baz).each do |sym|
                                                                define_method("prefix_#{sym}_suffix") do
                                                                  # method body
                                                                end
                                                              end
                                                              
                                                              1. 7

                                                                C++ preprocessor/template dark magic can get horrible quickly. It’s not just that people actively subvert auto-complete or grep. It’s also that these people never document these types of things, and they often do it when there’s an easier, much less bad way to accomplish what they’re trying to do.

                                                                1. 3

                                                                  I generally make strong recommendations to the teams I’m on to keep things greppable, so this is more about best practices than the language itself being significantly flawed. (I understand the argument that foot guns shouldn’t be there to begin with.) define_method can be useful if there is a large number of things you want to iterate over and make methods for. Having said that, I basically never use define_method, myself.

                                                                  1. 4

                                                                    Common Lisp - but the debugger system will let you look the method up, and likely the method arises from a macro call-site which you can find from the method invocation etc.

                                                                    1. 2

                                                                      I would say the difference is that Common Lisp developers typically try to avoid doing crazy macro stuff if they can avoid it. I suspect most Ruby devs are the same, but there seems to be a vocal minority who love using metaprogramming as much as possible.

                                                                      1. 4

                                                                        There’s a similar minority in the CL community. Metaprogramming pushes happy buttons for a bunch of people.

                                                                        That doesn’t make it any more supportable, mind you.

                                                                        1. 2

                                                                          Back in the day it seemed mostly relegated to high end dev shops. I heard multiple stories about these folks creating metaprogramming monstrosities. Sure it worked, was BDD-tested, etc, but unless you were already familiar with it, the code was pretty hard to touch.

                                                                          1. 2

                                                                            I’m not sure I buy the implicit argument here that all code must be maximally accessible.

                                                                            Metaprogramming is just build-time code execution that is able to impact code at runtime. It is a skill you can learn like any other.

                                                                            1. 3

                                                                              Sure, but it requires additional thought to read, and makes things much less discoverable by tools (such as grep/ack/ag/etc, as mentioned earlier).

                                                                              Accessibility is a virtue when the code is going to be read by people other than the original author.

                                                                              1. 2

                                                                                I’m not a big fan of it in general, but metaprogramming done well makes code easier to read and understand.

                                                                                For example, autowrap is used to create Common Lisp bindings to C and C++ libraries. All of the work is done with a “c-include” macro, which does exactly what it sounds like, and is much easier to read and understand than bindings written by hand. There’s a real life example in the GDAL bindings I’m making. A single macro call generates a few thousand lines of CFFI glue code and definitions.

                                                                                Poor discoverability might depend on the implementation. Bindings from autowrap are tab completable in the REPL and everything.

                                                                      2. 2

                                                                          Dynamic programming? It’s possible in Python too. Useful for generating unit test methods.
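
                                                                          For instance, a minimal sketch of that kind of generated-test metaprogramming (the add function and the cases are hypothetical):

                                                                          # Hedged sketch: generate unittest methods in a loop, similar in spirit to
                                                                          # Ruby's define_method; the target function and cases are made up.
                                                                          import unittest

                                                                          def add(a, b):
                                                                              return a + b

                                                                          class TestAdd(unittest.TestCase):
                                                                              pass

                                                                          # One generated (and ungreppable) test method per case.
                                                                          for i, (a, b, expected) in enumerate([(1, 2, 3), (0, 0, 0), (-1, 1, 0)]):
                                                                              def _test(self, a=a, b=b, expected=expected):
                                                                                  self.assertEqual(add(a, b), expected)
                                                                              setattr(TestAdd, f"test_add_case_{i}", _test)

                                                                          if __name__ == "__main__":
                                                                              unittest.main()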

                                                                        1. 2

                                                                          Dynamic programming is something quite different. You’re looking for the term “metaprogramming”.

                                                                          1. 1

                                                                            thanks for catching

                                                                        2. 2

                                                                          Oh it’s perfectly doable with PHP’s __call :) https://www.php.net/manual/en/language.oop5.overloading.php#object.call

                                                                          1. 2

                                                                            I’ve switched opinions on this so many times I’ve lost count.

                                                                            However, I remain convinced that:

                                                                            • metaprogramming is an extremely sharp tool, which can end up hurting you later
                                                                            • sharp tools aren’t inherently bad

                                                                            Programming will always have ways to shoot yourself in the foot. The best we can do is make those ways explicit, contained, and able to be reasoned about in a sane way.

                                                                            1. 2

                                                                              I think we can do better. We can eliminate footguns where it makes sense. I have some examples, though not all will agree they are footguns. Python eliminated the switch statement. Go made it so the default is not to fall through. Also, Go doesn’t have ?: syntax. Further, Go only allows booleans in conditionals, so you can’t do something like:

                                                                              n1 := 10
                                                                              if n1 {
                                                                                 println("something")
                                                                              }
                                                                              

                                                                              At first this was annoying, but it makes sense. By only allowing this:

                                                                              n1 := 10
                                                                              if n1 != 0 {
                                                                                 println("something")
                                                                              }
                                                                              

                                                                              You don’t have to think about or answer the question “what fails the condition?” Ruby only fails with nil and false. JavaScript fails with anything falsy. You just avoid that problem altogether, at the cost of a little terseness.

                                                                        3. 9

                                                                          I also find it weird because as a n00b Rust developer, I find the compiler absolutely lovely. It feels like my pal, cheering me on and giving helpful suggestions. (I come from a Java and Javascript world, where compiler messages can be pretty… terse.)

                                                                          Yeah, Rust can be difficult to write, but the compiler’s error messages sure make it pleasant.

                                                                          1. 8

                                                                            “This A -> B looks incorrect, did you mean A -> C?” (you change stuff) “This A -> C looks incorrect, did you mean A -> B?”

                                                                            Not debating the point you’re trying to make, but the author sounds like many people who are only starting with Rust. I can 100% feel the sentiment made in the article, and I’m a huge Rust fan and would love to use it more - but right now I’m at a stage where giving up in frustration sometimes happens. And this has nothing to do with when it was. You call it toxic; I can wholeheartedly agree with “been there, seen that”.

                                                                            1. 3

                                                                              This “try {A, B, C} forever” type of error happens very often with Rust noobs (to clarify: that’s a learning curve, not a dig at someone being a novice) who try to do something that is impossible to do, or impossible to prove to the borrow checker. Unfortunately, the compiler is incapable of understanding that it’s not a problem with a particular line of code, but with the whole approach. For example, use of references as if they were pointers digs a deep hole the compiler won’t get you out of (it’ll try to annotate lifetimes instead of telling you to stop using references and use owned types), e.g. don’t write a linked list, use VecDeque.

                                                                              The “toxic” bit was about rather harsh framing of the issue. Although, I don’t blame anyone for not liking Rust’s strictness. It has its uses, just like Go/Ruby more lax approaches.

                                                                              1. 2

                                                                                Congratulations on drafting an even more smug answer to the thread.

                                                                                I think you’re missing the point and you also don’t need to explain the exact issue at hand, I was just citing an example of a clearly not impossible thing (the one where I encountered this the last time was simply about getting some form of text into a function, and of course it was a String vs slice problem but it wasn’t obvious at the time).

                                                                                And yes, I do think that I prefer a screen full of old Clojure’s unhelpful stack trace than rustc telling me something completely wrong while trying to be helpful. At least then I’m not led on the wrong path because I usually trust my compiler.

                                                                                1. 4

                                                                                  I don’t see how @kornel was being smug here. He’s saying that if a beginner tries something that the borrow checker considers impossible, it will cycle between several incorrect errors instead of raising the actual issue. Is it the “noobs” part?

                                                                              2. 3

                                                                                but right now I’m at a stage where giving up in frustration sometimes happens

                                                                                It’s interesting that so many people are running into this. I learned the rules of the borrow checker, knew moves from C++ already, and that was pretty much it. Sure, I sometimes run into problems with the borrow checker (less than when I started writing Rust), but multiple immutable xor single mutable is easy to appease. Lifetime issues can be more difficult, but typically reasoning about the lifetimes solves them pretty quickly.

                                                                                I wonder if it has to do with different problem domains that people are trying to tackle (I mostly work on machine learning and natural language processing, have not done much web backend programming with Rust), or the languages that they come from – I used C++ for many years prior, so perhaps I had to reason about ownership anyway (compared to e.g. someone who comes from GC’ed languages)?

                                                                                I am not saying that Rust’s learning curve is not steep or that I am some genius that understands Rust better (most definitely not). But more that it would be interesting to investigate which problem domains or programming language backgrounds make it easier/harder to pick up Rust.

                                                                                1. 5

                                                                                  I think the experience of Rust coming from C++ is vastly different than coming from Ruby or Python. Those latter languages shield the programmer from a number of things that need to be thought about with a systems-level language. Confronting those things for the first time is fraught.

                                                                                  1. 2

                                                                                    Users of GC’ed languages sometimes have to reason about ownership and lifetimes too. However, they are not punished anywhere as badly as C++ users for failing to do it correctly. They just trap the error, log it, do their best to recover from it, and move on.

                                                                                    It seems disheartening to explicitly write code whose sole purpose is to recover from errors that you will inevitably make, though.

                                                                                    1. 2

                                                                                      Indeed, it’s the unforgiving strictness of Rust ownership that gets people by surprise, even if they know it at a conceptual level.

                                                                                      The second problem is that with GC design patterns can reference any data from anywhere, e.g. it’s natural to keep mutual parent-child relationships between objects. The borrow checker wants data to be structured as a tree (or DAG), and it comes as a shock that more complex relationships need to be managed explicitly (with refcounting, arenas, etc.)

                                                                                    2. 2

                                                                                      No idea. But I only started doing C++ after Rust and I haven’t had any of these problems with move semantics. Though I wouldn’t claim I’ve never created a bug because of them :P Rust I was learning in my spare time; for C++ I had a whole team to review my code and help me out.

                                                                                      Also, I’m not saying I ran into this constantly - just that it happened several times. And I think my code tends to attract these kinds of problems when it’s not “5% preparing data into a proper data model, then 95% working with it” but lives exactly on that surface… like when I wrote my IRC bot - it’s 90% string handling and a little bit of networking.

                                                                                  2. 4

                                                                                    I’ve seen somebody saying something along the lines of “C is harder than Go because Go’s compiler will scream at me if I have an unused import and C’s doesn’t”. I can’t say I understand this mentality.

                                                                                    1. -2

                                                                                      Unlike Rust where the compiler constantly shouts “fuck you” even though you are trying to do your best to serve their majesty and the rules they dictate, Ruby never gets in your way.

                                                                                      I wonder if comments like these come from the kind of people who were throwing a hissy fit as a kid when their teacher told them to do their homework.

                                                                                      1. 16

                                                                                        While the original article is needlessly hostile, so is this response. For good or for bad, programming is a gratification-driven activity. It is not hard to see why more people find it gratifying to write a program that runs, rather than a program that type checks.

                                                                                        Besides, on a purely intuitive level, type errors are not necessarily the easiest way to understand why a program is flawed. Errors, just like anything else, are best understood with examples. Watching your program fail on a concrete input provides that example.

                                                                                        (Admittedly, fishing for counterexamples in a huge state space is not an approach that scales very well, but what use is scaling if your target audience does not want to try the alternative you suggest even in the tiniest cases?)

                                                                                        To illustrate the power of modeling concurrency with types, one has to give concrete examples of idiomatic programs written in non-typeful concurrent languages that contain subtle bugs that would have been caught by Rust’s type checker.

                                                                                    1. 5

                                                                                      After a life period where most of my home doings were oriented around parenting and early childhood, I’m starting to pick the hacking back up. Right now I’m hacking on pmetrics: https://gitlab.com/pnathan/pmetrics , an event-oriented structured “logging” system designed for small-to-midsize shops. When done, it should solve certain persistent problems that regularly occur with observability and are difficult to address without hauling in a big vendor with its entanglements. If anyone wants to play early adopter, I’m keen to hear feedback.