Threads for bhansconnect

  1. 6

    The TLDR is that the FixedBufferAllocator that Zig provides is fairly badly named. It’s a bump allocator that essentially doesn’t support deallocation (it has a super minor carve-out for the very real free(alloc(...)) scenario that most general-purpose mallocs also special-case).
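
    To make that behavior concrete, here’s a minimal bump-allocator sketch (Python as conceptual pseudocode - a model of the behavior described above, not Zig’s actual FixedBufferAllocator source): free is a no-op unless it’s the most recent allocation being freed, so interleaved alloc/free patterns permanently consume buffer space.

    ```python
    class BumpAllocator:
        """Conceptual model of bump allocation over a fixed buffer."""

        def __init__(self, size):
            self.size = size
            self.end = 0  # everything below `end` counts as allocated

        def alloc(self, n):
            if self.end + n > self.size:
                raise MemoryError("fixed buffer exhausted")
            offset = self.end
            self.end += n
            return (offset, n)

        def free(self, allocation):
            offset, n = allocation
            # The free(alloc(...)) carve-out: only the most recent
            # allocation can actually be reclaimed.
            if offset + n == self.end:
                self.end = offset
            # Otherwise this is a no-op and the space is never reused.

    fba = BumpAllocator(1024)
    a = fba.alloc(512)
    b = fba.alloc(256)
    fba.free(a)  # no-op: `a` is not on top, so its 512 bytes are lost
    fba.free(b)  # reclaimed: `b` was the most recent allocation
    ```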

    While memory errors are an inherent problem of unsafe languages like Zig, in this particular case the “leak” is a library issue. Happily, this seems like something that can be fixed without breaking source compatibility, and as Zig is not intended to be used as a system language (yet?) there aren’t any ABI concerns with growing the FixedBufferAllocator struct to hold the metadata needed to properly support deallocation. Of course, the problem here is that existing users may be assuming bump allocation for perf or other reasons, so fixing FBA should probably also include introducing a BumpAllocator or some such.

    1. 3

      Zig is not intended to be used as a system language (yet?)

      What do you mean by this?

      1. 2

        Zig hasn’t reached a stable release yet. Andrew has stated that semver 1.0.0 will mark such a time. That doesn’t stop people from writing great software in Zig (e.g. River, Bun, etc.), but it does mean that major breaking changes could still occur.

        So I assume that because it’s not officially stable yet, it’s likely not intended (in the strictest sense) to be used as a systems language yet (see the aforementioned Bun, River, etc. showing people still using Zig to great effect).

        1. 2

          Sorry, I mean system language in the “as the OS-provided API” sense. Step one of that is having a stable ABI, so that binaries built targeting the system don’t need to be rebuilt when the OS updates. The problem is that ABI stability isn’t something that is trivial to ensure - especially with many modern language features - so it requires substantial amounts of engineering time that generally isn’t as fun as implementing new features.

          1. 2

            Zig has 0 problem exporting or consuming a C ABI.

            Furthermore, some operating systems such as Linux have a stable syscall ABI and do not even require usage of a stable language ABI.

            1. 1

              The operating system is not just the kernel ABI. It’s all of the system libraries. I get that there’s an expectation of rebuilding binaries for OS updates on Linux and Linux-derived systems, but for others (Windows, Android - a Linux-derived system - and the various Darwin systems) the assumption is that recompilation is not needed.

              Zig doesn’t make an ABI stability guarantee I can see, and the comments in https://github.com/ziglang/zig/issues/3786 indicate even something as basic as the optional type has no guaranteed stable ABI right now. One of the comments goes so far as to say that they don’t think ABI stability is possible.

              To be a language that can be used to provide OS APIs (not just the kernel ABI), the language features must have a stable ABI. Otherwise, as noted in the above proposal, the solution is for all libraries to just provide a C interface, and then a Zig binary calling a Zig library requires an unsafe transition into and out of C, losing the few Zig safety guarantees along the way.

      1. 31

        Wow, that poor straw man, he looks like he’s in a lot of pain right now.

        1. 31

          I basically agree with you, but I’ll point out that this is a “weak man” argument, in that he’s criticizing an actually existing set of guidelines. They don’t do justice to the general viewpoint he’s arguing against, but they’re not a made-up position–someone really was offering code that bad!

          It’s a remarkable thing that a lot of “clean code” examples are just terrible (I don’t recognize these, but Bob Martin’s are quite awful). I think a significant problem is working with toy examples–Square and Circle and all that crap (there’s a decent chance that neither the readers nor the authors have ever implemented production code with a circle class).

          Conversely, I’d like to see Muratori’s take on an average LOB application. In practice, you’ll get more than enough mileage out of avoiding n+1 queries in your DB that you might never need to worry about the CPU overhead of your clean code.

          Edit: clarifying an acronym. LOB = Line of Business–an internal tool, a ticket tracker, an accounting app, etc. something that supports a company doing work, but is typically not at huge scale.
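
          For anyone who hasn’t fought it: the n+1 problem is issuing one query for a list of rows and then one more query per row. A quick sketch (hypothetical schema, sqlite3 for brevity - against a real network-attached database, every loop iteration below would be a full round trip):

          ```python
          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.executescript("""
              CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
              CREATE TABLE posts (id INTEGER PRIMARY KEY,
                                  author_id INTEGER REFERENCES authors(id),
                                  title TEXT);
          """)

          # n+1: one query for the authors, then one more query per author.
          authors = conn.execute("SELECT id, name FROM authors").fetchall()
          for author_id, name in authors:
              titles = conn.execute(
                  "SELECT title FROM posts WHERE author_id = ?", (author_id,)
              ).fetchall()

          # The same data in a single round trip via a join.
          rows = conn.execute("""
              SELECT authors.name, posts.title
              FROM authors JOIN posts ON posts.author_id = authors.id
          """).fetchall()
          ```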

          1. 16

            Conversely, I’d like to see Muratori’s take on an average LOB application. In practice, you’ll get more than enough mileage out of avoiding n+1 queries in your DB that you might never need to worry about the CPU overhead of your clean code.

            I have not enough upvotes to give.

            It’s really not that performance isn’t important - it really really is! It’s just that, what goes into optimizing a SQL query is totally different than what goes into optimizing math calculations, and games have totally different execution / performance profiles than business apps. That context really matters.

            1. 7

              But you don’t understand! These guys like Muratori and Acton are rockstar guru ninja wizard god-tier 100000000x programmers who will walk into the room and micro-optimize the memory layout of every object in your .NET web app.

              And if you dare to point out that different domains of programming have different performance techniques and tradeoffs, well, they’ll just laugh and tell you that you’re the reason software is slow.

              Just don’t mention that the games industry actually has a really terrible track record of shipping correct code that stays within the performance bounds of average end-user hardware. When you can just require the user to buy the latest top-of-the-line video card three times a year, performance is easy!

              (this is only half joking – the “you’re the reason” comment by Acton gets approvingly cited in lots of these threads, for example)

              1. 7

                When you can just require the user to buy the latest top-of-the-line video card three times a year, performance is easy!

                You cite Acton who, working at Insomniac, primarily focused on console games–you know, where you have a fixed hardware budget for a long time and can’t at all expect upgrades every few months. So, bad example mate.

                And if you dare to point out that different domains of programming have different performance techniques and tradeoffs, well, they’ll just laugh and tell you that you’re the reason software is slow.

                Acton’s been pretty upfront about the importance of knowing one’s domain, tools, and techniques. The “typical C++ bullshit” he’s usually on about is caused by developers assuming that the compiler will do all their thinking for them and cover up for a lack of basic reasoning about the work at hand.

                1. 3

                  primarily focused on console games

                  And yet is held up again and again as an example of how the games industry is absolutely laser-focused on performance. Which is a load of BS. The games industry is just as happy as any other field of programming to consume all available cycles/memory, and to keep demanding more. And the never-ending console upgrade treadmill is slower than the never-ending PC video card upgrade treadmill, but it exists nonetheless and is driven by the needs of game developers for ever-more-powerful hardware to consume.

                  Acton’s been pretty upfront about the importance of knowing one’s domain, tools, and techniques.

                  The “you’re the reason why” dunk from Acton was, as I understand it, a reply to someone who dared suggest to him that other domains of programming might not work the same way his does.

                  1. 9

                    The “you’re the reason why” dunk from Acton was, as I understand it, a reply to someone who dared suggest to him that other domains of programming might not work the same way his does.

                    He was objecting to someone asserting that Acton was working in a very narrow, specialised field. That in most cases performance is not important. This is a very widespread belief, and a very false one. In reality performance is not a niche concern. When I type a character, I’d better not notice any delay. When I click on something I’d like a reaction within 100ms. When I drag something I want my smooth 60 FPS or more. When I boot a program it had better take less than a second unless it has a very good reason to make me wait.

                    Acton was unkind. I feel for the poor guy. But the attitude of the audience member, when checked against reality, is utterly ridiculous, and deserves to be publicly ridiculed. People need to laugh at the idea that performance does not matter most of the time. And people who actually subscribed to this ridiculous belief should be allowed to pretend they didn’t really believe it.

                    Sufficient performance rarely requires actual optimisations, but it always matters.

                    1. 5

                      He was objecting to someone asserting that Acton was working in a very narrow, specialised field. That in most cases performance is not important.

                      Well. This:

                      As I was listening to your talk, I realise that you are working in a different kind of environment than I work in. You are working in a very time constrained, hardware constrained environment.

                      [elided rant] Our company’s primary constraint are engineering resources. We don’t worry so much about the time, because it’s all user interface stuff. We worry about how long it takes a programmer to develop a piece of code in a given length of time, so we don’t care about that stuff. If we can write a map in one sentence and get the value out, we don’t care really how long it takes, so long —

                      Then Mike Acton interrupted with

                      Okay, great, you don’t care how long it takes. Great. But people who don’t care how long it takes is also the reason why I have to wait 30 seconds for Word to boot for instance.

                      Is how you chose to describe it last time we went round and round about this. And, yeah, my takeaway from this was to form a negative opinion of Acton.

                      So I’ll just paste my own conclusion:

                      Meanwhile: I bet Mike Acton doesn’t work exclusively in hand-rolled assembly. I bet he probably uses languages that are higher-level than that. I bet he probably uses tools that automate things in ways that aren’t the best possible way he could have come up with manually. And in so doing he’s trading off some performance for some programmer convenience. Can I then retort to Mike Acton that people like him are the reason some app he didn’t even work on is slow? Because when we reflect on this, even the context of the quote becomes fundamentally dishonest – we all know that he accepts some kind of performance-versus-developer-convenience tradeoffs somewhere. Maybe not the same ones accepted by the person he was trying to dunk on, but I guarantee there are some. But haggling over which tradeoffs are acceptable doesn’t let him claim the moral high ground, so he has to dunk on the idea of the tradeoff itself.

                      So again: “People like you are the reason that it takes 30 seconds to open Word” is not a useful statement. It’s intended solely as a conversation-ending bludgeon to make the speaker look good and the interlocutor look wrong. There’s no nuance in it. There’s no acknowledgment of complexity or tradeoffs or real-world use cases. Ironically, in that sense it almost certainly violates some of his own “expectations” for “professionals”. And as I’ve explained, it’s inherently dishonest!

                      1. 5

                        And in so doing he’s trading off some performance for some programmer convenience. Can I then retort to Mike Acton that people like him are the reason some app he didn’t even work on is slow?

                        According to his talks, he 100% uses higher-level languages where it makes sense–during the programming. The actual end-user code is kept as fast as he can manage. It’s not a sin to use, say, emacs instead of ed provided the inefficiency doesn’t impact the end-user.

                        the interlocutor look wrong.

                        The interlocutor was wrong, though. They said:

                        If we can write a map in one sentence and get the value out, we don’t care really how long it takes,

                        That’s wrong, right? At the very least, it’s incredibly selfish–“We don’t care how much user time and compute cycles we burn at runtime if it makes our job at development time easier”. That’s not a tenable position. If he’d qualified it as “in some percent of cases, it’s okay to have thus-and-such less performance at runtime due to development speed concerns”, that’d be one thing…but he made a blanket statement and got justifiably shut down.

                        (Also: you never did answer, in the other thread, if you have direct first-hand experience of this stuff beyond Python/Django. If you don’t, it makes your repeated ranting somewhat less credible–at least in my humble opinion. You might lack experience with environments where there really is a difference between costs incurred during development time vs compile time vs runtime.)

                        1. 2

                          It’s not a sin to use, say, emacs instead of ed provided the inefficiency doesn’t impact the end-user.

                          And yet when someone came forward saying they had a case where inefficiency didn’t seem to be impacting the end-user, he didn’t sagely nod and agree that this can be the case. Instead he mean-spiritedly dunked on the person.

                          Also: you never did answer, in the other thread, if you have direct first-hand experience of this stuff beyond Python/Django.

                          Of what stuff? Have I written C? Yes. I don’t particularly like or enjoy it. Same with a variety of other languages; the non-Python language I’ve liked the most is C#.

                          Have I written things that weren’t web apps? Yes, though web apps are my main day-to-day focus at work.

                          If you don’t, it makes your repeated ranting somewhat less credible–at least in my humble opinion.

                          Your repeated attempts at insulting people as a way to avoid engaging with their arguments make your comments far less credible, and I neither feel nor need any humility in telling you that.

                          Anyway, I’ve also pointed out, at length, multiple times, why the games industry is very far from being a paragon of responsible correctness-and-performance-focused development, which would make your entire attempt to derail the argument moot.

                          So. Do you have an actual argument worthy of the name? Or are you just going to keep up with the gatekeeping and half-insulting insinuations?

                          1. 2

                            And yet when someone came forward saying they had a case where inefficiency didn’t seem to be impacting the end-user, he didn’t sagely nod and agree that this can be the case.

                            Check your transcript, you’re putting words into the dude’s mouth. The dude said, specifically, that they didn’t care at all about UI performance. Not that the user didn’t care–that the developers didn’t care. He was right to be slapped down for that selfishness.

                            Your repeated attempts at insulting people as a way to avoid engaging with their arguments make your comments far less credible, and I neither feel nor need any humility in telling you that.

                            As I said in the other thread, it has some bearing here–you’ve answered the question posed, so I can discuss accordingly. If you get your feathers ruffled at somebody wondering about the perspective of somebody who only lists python and webshit on their profile in a discussion on optimizing compiled languages and game development, well, sorry I guess?

                            I’ve also pointed out, at length, multiple times, why the games industry is very far from being a paragon of responsible correctness-and-performance-focused development

                            I think you’ve added on the “correctness” bit there, and correctness is something that has a large enough wiggle room (do we mean mathematically sound? do we mean provable? do we mean no visible defects? do we mean no major defects?) that I don’t think your criticism is super helpful.

                            The performance bit, as you’ve gone into elsewhere, I understand as “Some game developers misuse tools like Unity to make slow and shitty games, so clearly no game developer cares about performance”. I don’t think that’s a fair argument, and I especially think it’s incorrect with regards specifically here to Acton who has built an entire career on caring deeply about it.

                            Also, you didn’t really answer my observation about development time inefficiency versus runtime inefficiency and how it applied in Acton’s case.

                            Anyways, this subthread has gotten long and I don’t think we’re much farther in anything. Happy to continue discussion over DMs.

                            1. 3

                              If you get your feathers ruffled at somebody wondering about the perspective of somebody who only lists python and webshit on their profile

                              You’re the one who can’t even refer to web development without needing to use an insulting term for it.

                              a discussion on optimizing compiled languages and game development

                              Well, no. The claim generally being made here is that people like Acton uniquely “care about performance”, that they are representative of game developers in general, and thus that game developers “care about performance” while people working in other fields of programming do not. And thus it is justified for people like Acton to dunk on people who explain that the tradeoffs are different in their field (and yes, that’s what the person was trying to do – the only way to get to the extreme “doesn’t care” readings is to completely abandon the principle of charity in pursuit of building up a straw man).

                              Which is a laughable claim to make, as I’ve pointed out multiple times. Game dev has a long history of lackluster results, and of relying on the hardware upgrade treadmill to cover for performance issues – just ship it and tell ’em to upgrade to the latest console or the latest video card!

                              And it is true that the performance strategies are different in different fields, but Acton is the one who did a puffed-up “everyone who doesn’t do things my way should be fired” talk – see the prior thread I linked for details – which tried to impose his field’s performance strategies as universals.

                              Like I said last time around, if he walked into a meeting of one of my teams and started asking questions about memory layout of data structures, he wouldn’t be laughed out of the room (because that’s not how my teams do things), but he would be quietly pulled aside and asked to get up to speed on the domain at hand. And I’d expect exactly the same if I walked into a meeting of a game dev team and arrogantly demanded they list out all the SQL queries they perform and check whether they’re using indexes properly, or else be fired for incompetence.

                        2. 4

                          [quote] Is how you chose to describe it last time we went round and round about this.

                          Ah, thanks for the link. My only choice there was the elision; it was otherwise an exact transcript.

                          “People like you are the reason that it takes 30 seconds to open Word” is […] intended solely as a conversation-ending bludgeon to make the speaker look good and the interlocutor look wrong.

                          I agree partially. It is a conversation-ending bludgeon, but it was not aimed directly at the interlocutor. Acton’s exact words were “But people who don’t care how long it takes is also the reason why I have to wait 30 seconds for Word to boot for instance.” He was ridiculing the mindset, not the interlocutor. Which by the way was less unkind than I remembered.

                          I know it rubs you the wrong way. I know it rubs many people the wrong way. I agree that by default we should be nice to each other. I do not however extend such niceties to beliefs and mindsets. And to be honest the mindset that by default performance does not matter only deserves conversation-ending bludgeons: performance matters and that’s the end of it.

                          1. 3

                            And to be honest the mindset that by default performance does not matter

                            Except it’s a stretch to say that the “mindset” is “performance does not matter”. What the person was very clearly trying to say was that they work in a field where the tradeoffs around performance are different. Obviously if their software took, say, a week just to launch its GUI they’d go and fix that because it would have reached the point of being unacceptable for their case.

                            But you, and likely Acton, took this as an opportunity to instead knock down a straw man and pat yourselves on the back and collect kudos and upvotes for it. This is fundamentally dishonest and bad behavior, and I think you would not at all enjoy living in a world where everyone else behaved this way toward you, which is a sure sign you should stop behaving this way toward everyone else.

                            1. 3

                              You’re entitled to your interpretation.

                              Other Lobsters are entitled to the original context, so here it is: Mike Acton was giving a keynote at CppCon. After the keynote it was time for questions, and one audience member came with this intervention. Here’s the full transcript:

                              As I was listening to your talk, I realise that you are working in a different kind of environment than I work in. You are working in a very time constrained, hardware constrained environment.

                              As we all know the hardware, when you specify the hardware as a platform, that’s true because that’s the API the hardware exposes, to a developer that is working inside the chip, on the embedded code, that’s the level which is the platform. Ideally I suspect that the level the platform should be the level in which you can abstract your problem, to the maximum efficiency. So I have a library for example, that allows me to write code for both Linux and Windows. I have no problem with that.

                              Our company’s primary constraint is engineering resources. We don’t worry so much about the time, because it’s all user interface stuff. We worry about how long it takes a programmer to develop a piece of code in a given length of time, so we don’t care about all that stuff. If we can write a map in one sentence and get the value out, we don’t care really how long it takes, as long as it’s… you know big O —

                              Acton then interrupts:

                              Okay, great, you don’t care how long it takes. Great. But people who don’t care how long it takes is also the reason why I have to wait 30 seconds for Word to boot for instance.

                              But, you know, whatever, that’s your constraint, I mean, I don’t feel that’s orthogonal to what we do. We worry about our resources. We have to get shit out, we have to build this stuff on time, we worry about that. What we focus on though is the most valuable thing that we can do, we focus on understanding the actual constraints of the problem, working to actually spend the time to make it valuable, ultimately for our players, or for whoever your user is. That’s where we want to bring the value.

                              And performance matters a lot to us, but I would say, any context, for me personally if I were to be dropped into the “business software development model” I don’t think my mindset would change very much, because performance matters to users as well, in whatever they’re doing.

                              That’s my view.

                              1. 4

                                Thanks for including the full transcript. I didn’t realize the questioner was a “Not a question, just a comment” guy and don’t think we should judge Mike Acton based on how he responded. Comment guys suck.

                                1. 1

                                  This is a guy who thought “Everyone Watching This Is Fired” was a great title for a talk. And not just that, a talk that not only insisted everyone do things his way, but insisted that his way is the exclusive definition of “professional”.

                                  So I’ll judge him all day long.

                                2. 2

                                  Thanks for quoting extensively.

                                  I worked with GUIs in a past life, and debugged performance problems. I’ve never seen a 30-second wait that was primarily due to user interface code, or even a five-second wait.

                                  One can always discuss where in the call stack the root of a problem is. The first many-second delay I saw (“people learn not to click that button when there are many patients in the database”) was due to a database table being sorted as part of processing a button click. IMO sorting a table in UI code is wrong, but the performance problem happened because the sorting function was somewhere between O(n²) and O(n³): a fantastic, marvellous, awesome sorting algorithm. I place >90% of the blame on the sorting algorithm and <10% on the UI code that called the function that called the sorting marvel.

                                  In my experience, delays that are primarily due to UI code are small. If the root reason is slow UI code, then the symptoms are things like content is updated slowly enough to see jerking. Annoying, sure, but I agree that quick development cycles can justify that kind of problem.

                                  The many-second delays that Acton ascribes to slow UI code are more typically due to UI code calling some sparsely documented function that turns out to have problems related to largish data sets. Another example I remember is a function that did O(something) RPCs and didn’t mention that in its documentation. Fine on the developers’ workstations, not so fine for production users with a large data set several ms away.

                                  I’m sure Mike Acton conflated the two by mistake. They’re not the same though.

                                  1. 1

                                    I realise that neither the audience member nor Mike Acton mentioned GUI specifically. Acton merely observes that because some people don’t care enough about performance, some programs take an unacceptably long time to boot. I’m not sure Acton conflated anything there.

                                    I might have.

                                    As for the typical causes of slowdowns, I think I largely agree with you. It’s probably not the GUI code specifically: Even Electron can offer decent speed for simple applications on modern computers. It’s just so… huge.

                                    1. 2

                                      I realise that neither the audience member nor Mike Acton mentioned GUI specifically.

                                      Oops, the commenter did briefly mention “it’s all user interface stuff”. I can’t remember my own transcript…

                                  2. 1

                                    Acton is still taking the least charitable possible interpretation and dunking on that. And he’s still doing it dishonestly – as I have said, multiple times now, it’s an absolute certainty that he also trades off performance for developer convenience sometimes. And if I were to be as uncharitable to him as he is to everyone else I could just launch the same dunk at him, dismiss him as someone who doesn’t care about performance, say he should be fired, etc. etc.

                                    1. 2

                                      As I said, you are entitled to your interpretation.

                          2. 2

                            And the never-ending console upgrade treadmill is slower than the never-ending PC video card upgrade treadmill, but it exists nonetheless and is driven by the needs of game developers for ever-more-powerful hardware to consume.

                            If you compare the graphics of newer games to older ones, generally the newer ones have much more detail. So this improved hardware gets put to good use. I’m sure there are plenty of games that waste resources, too, but generally speaking I’d say things really are improving over time.

                            1. 1

                              Are the games getting more fun?

                              1. 3

                                I haven’t touched a new game in 10, 15 years or so, but I hear games like The Last of Us win critical acclaim, probably due to their immersiveness which is definitely (also) due to the high quality graphics. But that wasn’t really the point - the point was that the hardware specs are put to good use.

                              2. 1

                                I dunno, I feel like games from ten years ago had graphics that were just fine already. And really even 15 years ago or more, probably. We’re mostly getting small evolutionary steps — slightly more detail, slightly better hair/water – at the cost of having to drop at least hundreds and sometimes thousands of dollars on new hardware every couple of years.

                                If I were the Mike Acton type, I could say a lot of really harsh judgmental things about this, and about the programmers who enable it.

                          3. 5

                            These guys like Muratori and Acton are rockstar guru ninja wizard god-tier 100000000x programmers who will walk into the room and micro-optimize the memory layout of every object in your .NET web app.

                            Muratori is on the record saying, repeatedly, that he’s not a great optimiser. In his Refterm lecture he mentions that he rarely optimises his code (about once a year).

                          4. 4

                            It’s just that, what goes into optimizing a SQL query is totally different than what goes into optimizing math calculations, and games have totally different execution / performance profiles than business apps.

                            The specifics differ (e.g., avoiding L2 cache misses vs. avoiding the N+1 problem), but the attitude of the programmers writing those programs also matters. If their view is one where clean code matters, beautiful abstractions are important, solving the general problem instead of the concrete instance is the goal, memory allocations are free, etc., they will often miss a lot of opportunities for better performance.

                            I will also add that code that performs well is not antithetical to code that is easy to read and modify. Fully optimized code might be, but we can reach very reasonable performance with simple code.

                            1. 4

                              I think it’s subsequently been deleted, but I remember a tweet that showed a graph, clarity on the y axis, performance on the x axis.

                              1. Near the origin: initial code; poorly written, and slow.
                              2. Up and to the right: first passes of optimization often make the code nicer, better structured, remove duplication, pointless objects.
                              3. Back down and to the right, near the x-axis: chasing harder to find optimizations, adding complexity for the sake of performance. But much faster than the starting phase. “PM should start to worry here”
                              4. Sharply down and a little more to the right: “lovecraftian madness, terrifying code distorted for the sake of eking out the last drop of performance improvement.”
                            2. 1

                              I would say that in most cases it still fundamentally boils down to how data is laid out, accessed, and processed. Whether it’s an SQL query or later processing of data loaded from an SQL query, a focus on data layout and accesses is the core of a lot of performance.
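
                              As a toy illustration of the layout point (Python with numpy, and a hypothetical particle example): one object per item scatters every field across the heap, while a struct-of-arrays layout keeps each field contiguous so the update becomes a single pass over memory.

                              ```python
                              import numpy as np

                              N = 1_000_000

                              # Array-of-structs style: one Python object per particle;
                              # every field access chases pointers through scattered
                              # heap memory.
                              class Particle:
                                  def __init__(self, x, vx):
                                      self.x, self.vx = x, vx

                              particles = [Particle(float(i), 1.0) for i in range(N)]
                              for p in particles:
                                  p.x += p.vx * 0.016

                              # Struct-of-arrays style: each field is one contiguous
                              # buffer, and the same update is one vectorized sweep.
                              xs = np.arange(N, dtype=np.float64)
                              vxs = np.ones(N)
                              xs += vxs * 0.016
                              ```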

                          5. 11

                            I’m normally a fan of Casey’s work, but yeah this one’s not great. The clean code vs fast code battle doesn’t really help the article, and since he “only” demonstrates a 15x speedup it doesn’t actually address the problem we face today.

                            If all the software I used was 15x slower than optimal, almost everything I did on my computer would complete instantly, but in reality I would guess the majority of the software I use ranges from 1000x to tens of millions of times too slow. Here are some examples I got annoyed by in the last few days:

                            • Warm starting a terminal emulator that has high performance as a selling point takes 2 seconds (not including shell startup time! imagine if initialising an empty text box took 2s everywhere else!!)
                            • For reference Terminal.app takes 3s cold/1s warm
                            • Typing in the same terminal emulator on Linux has enough latency to reduce my typing accuracy
                            • Cold starting lldb on a Metal hello world app takes 29 seconds to launch the application
                            • Warm starting lldb on the Metal hello world app takes 6 seconds
                            • Cold switching between categories in the macOS system preferences app takes 2-5s
                            • Stepping over code that takes less than 1 nanosecond to execute in the VSCode debugger has perceptible latency

                            (These measurements were taken by counting seconds in my head because even with 50% error bars they’re still ridiculous, on a laptop whose CPU/RAM/SSD benchmark within 10x of the best desktop hardware available today)
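
                            (If anyone wants to reproduce numbers like these with something better than head-counting, a rough harness is only a few lines. `your-terminal -e true` below is a placeholder; most terminal emulators exit along with the command passed to -e, so the wall time approximates startup cost:)

                            ```python
                            import subprocess, time

                            cmd = ["your-terminal", "-e", "true"]  # placeholder command

                            for label in ("cold", "warm"):
                                start = time.perf_counter()
                                subprocess.run(cmd, check=False)
                                print(f"{label} start: {time.perf_counter() - start:.2f}s")
                            ```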

                            1. 1

                              That sounds ridiculously slow, and to me mostly macOS-related. I’m on a decent laptop, using Ubuntu, and do not experience such crazy delays.
                              Which “fast” terminal emulator are you using? I’m using alacritty’s latest version, and can’t notice any delay in either startup or typing latency (and I’m usually quite sensitive to that). Even gnome-terminal starts reasonably fast, and it’s not known for being a speed beast.
                              For the lldb test, I get <1s both cold and warm, with no noticeable difference between the two.

                              1. 2

                                A nonzero amount of it is on macOS yes, e.g. the first time you run lldb/git/make/etc after a reboot it has to figure out which of the one version of lldb/git/make/etc you have installed it should use, which takes many seconds. But it is at least capable of warm booting non-trivial GUI apps in low-mid hundreds of ms so we can’t put all the blame there.

                                Which “fast” terminal emulator are you using?

                                It’s alacritty. I’ve tried it on all 3 major OS with mixed results. On Windows startup perf is good, typing latency is good but everything is good at 280Hz, and it’s so buggy as to be near unusable (which makes it the second best Windows terminal emulator out of all the ones I’ve tried). On macOS startup perf is bad but typing latency is fine. On Linux startup perf is fine but typing latency makes it unusable.

                          1. 1

                            Elm is presumably not on the list since it compiles to JS and would therefore be more-or-less indistinguishable from the plain JS baseline?

                            1. 3

                              All of these options compile to JS.

                              1. 1

                                My bad. I thought everything on that list but Reason compiled to wasm. The Elm omission seems rather glaring then, doesn’t it?

                                1. 2

                                  There’s no ClojureScript, Scala.js, F#, derw, js_of_ocaml, LunarML, Gleam, or many other FP options either.

                              2. 2

                                It might do some tricks to get better optimization and be a bit faster, but that can be hit or miss depending on the specific code. I’m not sure why it isn’t included. Presumably the author didn’t know about it?

                              1. 4

                                I remember running into this a long while ago. I wrote 2 versions in Roc compiled to wasm. It was significantly faster than any of the current implementations: the Roc version took approximately 0.1 ms to run, about 10x faster than the JavaScript version. That said, performance.now(), which I used for timing, only measures to about 0.1 ms on my browser, so sometimes the Roc function measures as taking zero time. Though on average, it was just under 0.1 ms.

                                This is the actual core of the Roc code (though it probably needs to be updated to compile): https://github.com/bhansconnect/functional-mos6502-web-performance/blob/master/implementations/roc-effectful/emulator.roc

                                Might have to revive this to put exact stats on this thread.

                                1. 8

                                  Modern CPUs and GPUs have hardware support for sin, cos and tan calculations. I wonder if trig functions that operate on turns rather than radians can really be faster if they aren’t using this hardware support. I guess it depends on the specific processor, and maybe also on which register you are operating on (since intel has different instruction sets for different size vector registers).

                                  If you are programming a GPU, you are generally using a high level API like Vulkan. Then you have the choice of using the SIN instruction that’s built into the SPIR-V instruction set, vs writing a turn-based sin library function using SPIR-V opcodes. I wouldn’t expect the latter to be faster. Maybe you could get some speed using a lookup table, but then you are creating memory traffic to access the table, which could slow down a shader that is already memory bound.
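
                                  One concrete argument for turns, independent of the hardware question (a sketch of the idea, not any particular library’s implementation): range reduction in turns is exact in floating point, while reducing radians modulo 2π has to round, because 2π itself isn’t exactly representable.

                                  ```python
                                  import math

                                  def sin_turns(t):
                                      # One turn is exactly 1.0, so this range reduction
                                      # is exact; the only rounding left is the final
                                      # multiply and the sin call itself.
                                      t -= math.floor(t)  # t is now in [0, 1)
                                      return math.sin(2.0 * math.pi * t)

                                  # A quarter turn, a billion turns in: the turn-based
                                  # reduction loses nothing and we get 1.0 on the nose...
                                  print(sin_turns(1e9 + 0.25))

                                  # ...while converting to radians first rounds
                                  # 2*pi*(1e9 + 0.25), and the result is visibly off.
                                  print(math.sin(2.0 * math.pi * (1e9 + 0.25)))
                                  ```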

                                  1. 4

                                    A lot of code avoids using hardware sin and cos because they are notoriously slow and inaccurate. As such, it ends up using software-emulated sin, cos, etc. So turns definitely shouldn’t be worse.

                                    Maybe this is changing, but historically on CPUs, using the sin instruction is not a great idea.

                                    1. 1

                                      I wonder if trig functions that operate on turns rather than radians can really be faster if they aren’t using this hardware support.

                                      CUDA has sincospif and sincosf; only the latter has its precision affected by --use_fast_math, so maybe all “builtin” functions still do the conversion to turns in code before accessing hardware?

                                      https://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATH__SINGLE.html#group__CUDA__MATH__SINGLE_1g9456ff9df91a3874180d89a94b36fd46

                                    1. 5

                                      Interesting read, I definitely agree with the general sentiment.

                                      TLDR: In Roc, which has approximately 5 to 10 big contributors, most of the issues mentioned in this article don’t arise too often or have OK solutions. Definitely not as flexible as being solo, but surprisingly decent. With correctly set expectations and some minor cost, I think going from solo to a small team is not a big deal. With more growth, though, these issues definitely get worse.

                                      As one of the large contributors to Roc, I have a few comments on the potential issues you mentioned and how they played out for me while contributing to Roc.

                                      Working alone means you don’t have to share the context of your changes with others… But if there’s a change I wish to make, I can do so without consulting others.

                                      100% agree. All of my hobby projects that are open source are this way. Sure, someone can depend on it if they want, but I can and will change it whenever I am working on it. It is up to others to keep up if they want to use the project. The project is primarily for me. It is open to others just in case it happens to be useful for them.

                                      That being said, a lot of main Roc contributors essentially own a section of Roc (I mostly deal with the x86/arm dev backend and the surgical linker). As such, they are the go-to person for that section of code. Due to how the compiler is structured, a contributor can do a lot without consulting anyone (except for code reviews, because we require them). In many parts of the code, there is almost no chance of a PR collision. Still not as good as a solo project on these ergonomics, but surprisingly not bad.

                                      You’re able to keep to your own schedule, and adjust it as necessary.

                                      This is hit or miss. I think that Roc has done a good job at setting expectations. All of the main developers know that others are working in their spare time. It may not be possible for them to get to a feature that you want; you might have to just wait for them or figure it out yourself. People also have work they find interesting. If the feature you want is drab work, it might be a while before someone wants to deal with it. I think overall the expectations are low here and the core contributors are all friendly, so it doesn’t really lead to issues in practice, but as Roc gets closer to 0.1 with some real users, this is likely to get a lot more complex.

                                      Shared goals have to be established through long conversations and RFCs.

                                      We have live text chat with discussions, which works pretty well. We also do a lot of video chats, which work around these communication issues quite well for us. On top of that, working on Roc is ultimately accepting that Richard is currently the BDFN (benevolent dictator for now), and you are buying into his vision. You mostly have to convince him. That may sound bad to a number of people, but it does deal with this issue quite well. Still more restricted, but it mostly works fine so far. Again, growth and more users will likely eventually strain this.

                                      a creator with a strong opinion and direction leads to a purer, more cohesive creation

                                      Totally agree, which is why it is a BDFN. Does have some overhead compared to a solo project, but not bad.

                                      I can take this collective knowledge on a specific feature and implement it myself, and that’s likely the best way for a new language.

                                      I think this depends a lot on the language. Many languages have so many pieces that a single person cannot implement them all in a reasonable amount of time. Roc would probably never get finished without the community of contributors. It has a lot of different things to handle (some using novel PhD algorithms). Just so much work for one person to cover.

                                      1. 4

                                        a lot of main Roc contributors essentially own a section of Roc

                                        This was how a lot of Elm community projects were run - under my banner, for some time, fell json-to-elm, elm-webpack-loader, elm-test, and the elm-community org on Github. It definitely makes sense when a project is scaling up - like you say, there’s a lot of different topics that might require someone to do a deep dive that wouldn’t be possible if there was one person doing it alone. But I’ve also seen the benefits with Derw of doing much of that myself - for example, having written the testing framework and the CLI interface to Derw means that I know how to make those pieces fit together nicely, without the need for convincing others.

                                        On top of that, working on Roc is ultimately accepting that Richard is currently the BDFN (benevolent dictator for now), and you are buying into his vision

                                        This is how I think languages, and a lot of projects, have to work. There’s a reason why we don’t just build everything based on groupthink, and in my experience groups tend to establish weaker visions than if they had someone who could guide the conversations and debates. In the case of Roc, I know from experience that Richard is excellent at both listening to people and enabling them to do great things.

                                        1. 4

                                          Roc has a very simple story: like Elm, but compiles to native executables. It’s easier to organize a group of people if the mission is crystal clear. Again, Linux was: “like Unix on a PC, except GPL”. Those projects had lots of collaborators very early on.

                                          If you are doing an exploratory research project, creating a new category of software without a lot of hard constraints, then it makes sense to do cathedral style development with a single person or a very few close collaborators, until the design is solid and the requirements are crystal clear. Then it’s time to open up to bazaar style collaboration.

                                          Smalltalk was an extremely ambitious & original project. It started as a series of exploratory prototypes designed and implemented by Alan Kay from 1967-1971 (from Flex, through some intermediates, to Smalltalk-71). Then it became a cathedral-style project at Xerox PARC with a small, close group of collaborators from 1972-1980, undergoing massive design changes. It was released in 1980, at which point the design became more or less frozen. Now Smalltalk is a bazaar.

                                          I understand that Rust was a personal hobby project by Graydon Hoare from 2006-2009, then in late 2009 Mozilla got interested and assigned a team of developers to the project, switching to cathedral mode until there was an MVP solid enough to release. Now Rust has a huge community of contributors.

                                          1. 1

                                            If you are doing an exploratory research project, creating a new category of software without a lot of hard constraints, then it makes sense to do cathedral style development with a single person or a very few close collaborators, until the design is solid and the requirements are crystal clear. Then it’s time to open up to bazaar style collaboration.

                                            Thanks for this framing. It more clearly names and tags something my intuition has suspected for a while, but I’ve been leery that it was just finding excuses for how I already enjoy working.

                                        1. 1

                                          I have switched between a few backup tools over the years; I’ve probably used restic the most. I need to set that up again - I haven’t since reinstalling my machine. No critical data, but definitely some data worth backing up.

                                          1. 1

                                            So many questions.

                                            Can this only be used in the browser? Like, can I run the fuzzer on a React app via CLI? Is this React-only? Can I add properties to check invariants of the UI?

                                            This looks super cool. I’m just trying to evaluate how I’d use it on a project.

                                            1. 1

                                              It is someone else’s project, but to my understanding: it is mostly a proof of concept. It is React-only. It can be run locally, but it will still launch a browser window to execute in. No idea on invariants.

                                            1. 2

                                              I don’t see the problem. If the change I make turns out to be big, I split the “narrative” into steps and put each one into its own commit, so the branch ends up with two or three of them. The reviewer can follow the story by starting at the first one and reviewing them in order.

                                              Sometimes I do use “stacked branches” though, when I need the code of the first for the next ticket/feature/task. This is only for me, so that I can continue with my work. By the time the second PR gets to the reviewer, the first should have been merged already, and the reviewer would never know it started out stacked on the first, because I would rebase before submitting it.

                                              1. 1

                                                The issue is the code review interface and unit. With a tool like GitHub PRs, it is extremely inconvenient to follow the flow of commits; the interface is really just made to show you the final diff. Generally speaking, the final diff is too large and not focused enough, so you are stuck reviewing something with the problems mentioned in this article unless you do a lot more manual work to look at each individual commit. That said, even if you look at each individual commit, you can’t leave review comments on them; you have to go back and leave comments in the full PR, where context is lost.

                                                This is definitely why I prefer stacked diffs/commits as the review unit instead.

                                              1. 5

                                                I produce stacked PRs constantly and I think they’re valuable. I love that we’re seeing more and more tooling to support them.

                                                But I think articles like this are pretty one-sided and don’t acknowledge the risks and costs of stacked PRs. They aren’t 100% positive with zero downsides. So here are some reasons you shouldn’t go all-in on stacked PRs.

                                                1. They often replace the “500 lines = looks good” problem with the “10 lines = looks good” problem. I can split a large change up into a sequence of small pieces, each of which is a correct, self-contained piece of code that quickly passes review. They can even be reviewed in parallel by different reviewers to really maximize my velocity. And when I’m done, I will have solved the problem in an awful way that makes no sense when you look at it as a whole. No reviewer could evaluate the whole change because I doled it out one tidbit at a time.
                                                2. They take more work to produce. The article makes the point that reviewer effort goes up more than linearly with increases in PR size, which may be true. But author effort goes up more than linearly with decreases in PR size. Breaking your 500-line change into fifty 10-line changes will probably get you lightning-fast reviews, but is it worth an extra two days of your time to save two hours of your reviewers’ time? (You might not even notice, because splitting up the change feels like two days of useful, productive work, whereas twiddling your thumbs for a couple extra hours feels frustrating and useless.)
                                                3. They can cause a feedback loop where the cultural norm shifts toward ever-smaller changes without any regard to the impact on high-level review quality or author effort.
                                                4. They can mask underlying problems with performance evaluation and prioritization. This one is a little fuzzy but I’ve seen it in real life. If a team treats code review as a first-class responsibility on a cultural level, and the company rewards timely, thoughtful code reviews with promotions and raises, the “my 500-line change waited for review for over a week and then got rubber-stamped” problem basically never comes up. And as an added benefit, code quality and knowledge sharing goes up because people take code review very seriously. But on most teams, stacked PRs are a technical hack to cope with the organizational problem that code review isn’t truly valued and is a distraction from the work that is truly valued.
                                                1. 1

                                                  Exactly this.

                                                  Anyone who hates stacked PRs hates them because they aren’t using tools like git-machete. After I discovered it, it literally opened the workflow right up. It’s all I use now.

                                                  1. 1

                                                    I think 10 lines of code is really the issue here. That is generally not large enough to tell a cohesive part of a narrative. I think the goal is generally around 100 to 200 lines, but that is heavily, heavily context- and complexity-dependent.

                                                  1. 1

                                                    I love stacked PRs like everyone else. They show you step-by-step how a problem can be solved. But as time goes by, I increasingly find there is extra work involved in stacked PRs, and wonder if there are better ways. Let me give you an example of how I work:

                                                    1. I start by doing it end-to-end to verify that a sub-problem, or happy path, or toy example, whatever you call it, works. This will be my “WIP” or “RFC” PR;
                                                    2. After people have looked it over, I start to refine that “WIP” PR: handling more edge cases, running fuzzers, making sure the dependencies I introduced are sensible, and writing more unit tests to cover my ass;
                                                    3. Break down what I have in 2 into several “stacked PRs”, maybe data models first, then executors / services, and then hook it up to the rest. These PRs will reference back to the previous “WIP” PR to give people an overview.

                                                    Reviewers obviously are extremely happy about this. However, moving from 2 to 3 is a lot of work (probably a day in itself to write good commit messages, split files, etc., without considering the back-and-forth incorporating review feedback). On the other hand, my mental model somehow cannot go directly from 1 to 3 (skipping 2); it just doesn’t work for me - I cannot be confident everything is in the right place without seeing it in the right place first. Git is probably one of the culprits for why there is more work here than needed, though.

                                                    1. 2

                                                      Note: this depends a lot on what you consider a PR. When I say PR here, I am thinking about something like a GitHub PR with many commits in it.

                                                      I definitely prefer stacked diffs/stacked commits. They are a lot easier to manage with the right tools and each diff maps directly to a code review. With stacked PRs, you have multiple commits that make up 1 PR. This leads to more things to mess with/go wrong. It also makes reordering and splitting messier in my experience.

                                                      That all being said, stacked diffs simply are not supported that well on sites like GitHub. To get that, each PR would be a single commit that you always amend and then force push. That would be much worse than stacked PRs.

                                                      1. 1

                                                        It’s more work to produce stacked PRs, no question about it.

                                                        One thing I’ve found helps is to start off with a stack of empty changes from the get-go. For a given kind of coding task, I can usually take a pretty good guess at how I’ll ultimately want the stack to be organized. Then as I work on the problem, I am constantly switching branches so that I introduce a given piece of code in the right step in the sequence. I don’t always guess the stack structure exactly right, but it’s much easier to split a particular stack entry in two as soon as I see it’s needed than it is to wait until the end and split up the entire change all at once.

                                                        This gets much more feasible with a tool like Graphite that automates the “reparent all the child changes onto the latest version of the ancestor” process; otherwise you’re constantly having to run git rebase and it gets error-prone and hard to keep track of.

                                                        Even with good tools, you still have the added overhead of having to think about where in the stack to put a given piece of code. But doing it this way ends up being less total effort than splitting a change up after the fact, in my experience.

                                                      1. 2

                                                         Sounds interesting, but I definitely don’t grok it. I get the problems it points out, but I definitely don’t see how it concretely fixes them. I probably just need to mess with it to understand better.

                                                        Also, being able to swap any commit order easily sounds like an anti-feature to me. I think history should not be edited generally speaking. Making that easy sounds concerning.

                                                        1. 4

                                                          Also, being able to swap any commit order easily sounds like an anti-feature to me.

                                                          This is not what Pijul does. If you want a strict linear ordering in Pijul, you can have it.

                                                          1. 1

                                                            Again, not claiming to grok Pijul at all, but isn’t that specifically the feature emphasized here: https://pijul.org/manual/why_pijul.html#change-commutation

                                                            I get that you don’t have to use a feature even if it exists, but having a feature means someone might use it even if that is a bad idea.

                                                            I probably just don’t understand the feature.

                                                            1. 2

                                                              The commutation feature means that rebase and merge are the same operation, or rather, that it doesn’t matter which one you do. Patches are ordered locally in each Pijul repository, but they are only partially ordered globally, by their dependencies.

                                                               What Pijul gives you is a data structure that is aware of conflicts, and where the order of incomparable patches doesn’t matter: you’ll get the exact same snapshot with different orders.

                                                               You can still bisect locally, and you can still get tags/snapshots/version identifiers.

                                                              What I meant in my previous reply is that if your project requires a strict ordering (some projects are like that), you can model it in Pijul: you won’t be able to push a new patch without first pushing all the previous ones.

                                                               But not all projects are like that; some projects use feature branches and want the ability to merge (and unmerge) them. Or in some cases, your coauthors are working on several things at the same time and haven’t yet found the time to clean their branches, but you want to cherry-pick one of their “bugfix” patches now, without (1) waiting for the bugfix to land on main or (2) dealing with an artificial conflict between your cherry-pick and the landed bugfix in the future.

                                                              1. 1

                                                                That makes a lot of sense. Thanks for the details!

                                                        1. 2

                                                           This is a really good read on how easy it is to fall short, especially when expanding in terms of breadth and complexity.

                                                          1. 3

                                                            I really like this line:

                                                            Look at that, you can see that image if you want by clicking.

                                                            In my mobile browser, the link is just text. I can’t click it. So, no, links don’t just work when in plain text. Sure I can copy it, but that is less convenient.

                                                            1. 7

                                                               Will come back to fully read this; it was longer than I expected. So far, it seems to be making arbitrary claims and distinctions. It feels like a pretty ungrounded opinion piece. Hoping it gets better when I get a chance to read it fully.

                                                              1. 5

                                                                I read it to the end. Unfortunately it doesn’t get better.

                                                              1. 2

                                                                I think it’s interesting to have a pretty simple, strict, pure language in the ML family. I found that talk enjoyable. It’s nice to see row polymorphism both for records and for sum types, and it might even work if their error messages are good enough.

                                                                 However, they seem to be making the catastrophic mistake of trying to write their own editor (with structural editing?). Focusing on a good LSP experience would be much better. This suggests to me that their priorities are wrong if they aim to ship something useful that people might try (and maybe adopt). I hope they realize their mistake and make the editor a side project or something like that.

                                                                1. 1

                                                                  Why is that a catastrophic error? Isn’t it, in the worst case, some sunk cost for something that gets tossed?

                                                                  1. 2

                                                                     Sure, it’s a grave mistake for them only if they care about adoption! Opportunity cost and all that :-). The first thing many people will ask is “is there an LSP server?”, and if the answer is “no, just use our editor”, people will just shrug and lose interest.

                                                                    1. 1

                                                                       I would assume that if the editor fails that badly, an LSP server would be made. Also, I bet that long term, even if they don’t make an LSP server, someone will.

                                                                      Anyway, adoption isn’t everything. If the editor works for them and is nice, why does it matter what others think?

                                                                1. 3

                                                                  Interesting, looking forward to having a bit more documentation to read about this as it progresses.

                                                                  As an aside, is NoRedInk still invested in the Elm ecosystem?

                                                                  1. 7

                                                                     Heavily. Elm is for frontend. Roc technically can be used for frontend, but it will probably never give as nice an experience as Elm. Elm is hyper-focused on frontend web, which makes it amazing for that use case. Roc is targeted more toward backend, system apps, CLIs, etc.

                                                                  1. 1

                                                                    Ooh it’s public!

                                                                    Edit: looks like it’s in Rust now aww

                                                                    1. 8

                                                                      looks like it’s in Rust now aww

                                                                      Look at the last entry in the FAQ. The compiler has always been written in Rust, the runtime is written in Zig. To see some of the Zig source code, look in crates/compiler/builtins/bitcode.

                                                                      1. 2

                                                                         Would be interesting to read the story behind the Rust rewrite.

                                                                        1. 8

                                                                          As far as I know, the compiler has always been written in Rust! You can write platforms in whatever language you like, though.

                                                                          1. 1

                                                                             Oh, interesting; when they first announced the language I assumed it started life as a straight-up fork of the Haskell-based Elm compiler.

                                                                            1. 5

                                                                               Always Rust, though parts of it were translated very directly from the Elm compiler, so in some sections the code may be quite similar.

                                                                      1. 4

                                                                         I feel the core issue with the article is that it is asking whether we can “replace” C. When you look across programming history, languages don’t tend to die, and they don’t tend to be replaced. Slowly, they get used less as other languages are picked for x or y reason, but that is very different from replacing the language as a whole. Replacing C as a whole will probably never happen. Picking up a project here and a project there is more plausible, but C will likely live on for an extremely long time.

                                                                         There are counterexamples, like Objective-C to Swift, but that is in a very specific scope with a lot of company control.

                                                                        1. 33

                                                                           The problem is that C have practically no checks, so any safety checks put into the competing language will have a runtime cost, which often is unacceptable. This leads to a strategy of only having checks in “safe” mode. Where the “fast” mode is just as “unsafe” as C.

                                                                           So, apparently the author hasn’t used Rust, or at least hasn’t noticed the various benchmarks showing it to be capable of getting close to C performance, or in some cases outpacing it. Also, because of Rust’s safety, it’s much easier to write working parallelised code than in C, so you can get a lot of improvements that way.

                                                                           I’ve written a lot of C over the years (4 years of an embedded-software PhD), and I never want to go back now that I’ve seen what can be done with Rust.
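
                                                                           For the curious, a minimal sketch of the kind of parallel code I mean, using only std::thread::scope from the standard library (the split-in-half strategy and the numbers are invented for illustration):

                                                                               use std::thread;

                                                                               // Scoped threads may borrow `data` directly; the compiler proves the
                                                                               // borrows cannot outlive the scope, so a data race will not compile.
                                                                               fn parallel_sum(data: &[u64]) -> u64 {
                                                                                   thread::scope(|s| {
                                                                                       let (left, right) = data.split_at(data.len() / 2);
                                                                                       // `move` copies the shared reference `left` into the thread.
                                                                                       let handle = s.spawn(move || left.iter().sum::<u64>());
                                                                                       let right_sum: u64 = right.iter().sum();
                                                                                       handle.join().unwrap() + right_sum
                                                                                   })
                                                                               }

                                                                               fn main() {
                                                                                   let data: Vec<u64> = (1..=1_000).collect();
                                                                                   assert_eq!(parallel_sum(&data), 500_500);
                                                                               }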

                                                                          1. 12

                                                                            The author notes that he does not consider Rust a C alternative, but rather a C++ alternative:

                                                                            Like several others I am writing an alternative to the C language (if you read this blog before then this shouldn’t be news!). My language (C3) is fairly recent, there are others: Zig, Odin, Jai and older languages like eC. Looking at C++ alternatives there are languages like D, Rust, Nim, Crystal, Beef, Carbon and others.

                                                                            Now, you could argue that it’s possible to create a C-like language with a borrow checker. I wonder what that would look like.

                                                                            1. 20

                                                                               C++ is a C alternative. The author dismisses Rust without any explanation or justification (I suspect it’s for aesthetic reasons, like “the language is big”). For a lot of targets (non-embedded), Rust is in fact a valid C alternative, and so is C++.

                                                                              1. 8

                                                                                 For a lot of targets, especially embedded, Rust is an amazing C alternative. Granted, currently it’s primarily ARM Cortex-M that has first-class support, but I find your remark funny, because from my perspective embedded is probably the best application of Rust and its features. Representing HW peripherals as type-safe state machines that won’t compile if you misuse them? Checked. A concurrency framework providing sane interrupt handling with priorities that is data-race- and deadlock-free without any runtime overhead, and that won’t compile if you violate its invariants? Checked. Embedded C is a joke in comparison.
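
                                                                                 A toy sketch of what I mean by type-safe state machines (the Pin type and its modes here are invented for illustration, not taken from any real HAL):

                                                                                     use std::marker::PhantomData;

                                                                                     // Hypothetical pin modes; real HALs use the same typestate trick.
                                                                                     struct Input;
                                                                                     struct Output;

                                                                                     struct Pin<Mode> {
                                                                                         _mode: PhantomData<Mode>,
                                                                                     }

                                                                                     impl Pin<Input> {
                                                                                         // Consumes the input pin and returns an output pin.
                                                                                         fn into_output(self) -> Pin<Output> {
                                                                                             Pin { _mode: PhantomData }
                                                                                         }
                                                                                     }

                                                                                     impl Pin<Output> {
                                                                                         fn set_high(&mut self) {
                                                                                             // ...write the hardware register here...
                                                                                         }
                                                                                     }

                                                                                     fn main() {
                                                                                         let pin: Pin<Input> = Pin { _mode: PhantomData };
                                                                                         // pin.set_high(); // compile error: no such method on Pin<Input>
                                                                                         let mut pin = pin.into_output();
                                                                                         pin.set_high(); // fine after the explicit mode change
                                                                                     }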

                                                                              2. 13

                                                                                 Adding a borrow checker to C requires adding generics to C, at which point it would be more C++-like than C-like. The borrow checker operates on types and functions parameterized by lifetimes, so generics are not optional, even if you do not add type generics.
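
                                                                                 To make that concrete, a tiny sketch of what “parameterized by lifetimes” means; the names are toy ones, but there is no way to state these signatures without generics:

                                                                                     // A function generic over two lifetimes: the signature itself says
                                                                                     // the result borrows from `x` and is independent of `y`.
                                                                                     fn first<'a, 'b>(x: &'a str, _y: &'b str) -> &'a str {
                                                                                         x
                                                                                     }

                                                                                     // A type parameterized by a lifetime: a Wrapper cannot outlive the
                                                                                     // string it borrows, and the checker enforces that at every use site.
                                                                                     struct Wrapper<'a> {
                                                                                         inner: &'a str,
                                                                                     }

                                                                                     fn main() {
                                                                                         let s = String::from("hello");
                                                                                         let w = Wrapper { inner: &s };
                                                                                         println!("{} {}", first(w.inner, "world"), w.inner);
                                                                                     }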

                                                                                1. 6

                                                                                  Also, not adding type generics is going to make the safety gained from the borrow checker a lot less useful, because now instead of writing the unsafe code for a Vec/HashMap/Lock/… once (in a library) and using it a gazillion times, you write it once per type.
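
                                                                                   A small illustration of the leverage: every instantiation below reuses the one carefully audited unsafe implementation inside the standard library’s Vec, instead of a hand-rolled per-type copy:

                                                                                       fn main() {
                                                                                           // Three different element types, one generic Vec implementation.
                                                                                           let ints: Vec<i32> = vec![1, 2, 3];
                                                                                           let names: Vec<String> = vec!["a".into(), "b".into()];
                                                                                           let nested: Vec<Vec<u8>> = vec![vec![1], vec![2, 3]];
                                                                                           println!("{} {} {}", ints.len(), names.len(), nested.len());
                                                                                       }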

                                                                                  1. 3

                                                                                    Isn’t this more or less what Cyclone is?

                                                                                    1. 4

                                                                                       Yes, it is. That is why Cyclone added polymorphic functions, polymorphic data structures, and pointer subtyping to C; see the Cyclone user manual. Not because they are cool features, but because they are required for memory management.

                                                                                    2. 1

                                                                                       Even though types and functions are parameterized by lifetimes, lifetimes do not affect codegen. So it should be possible to create a “C with a borrow checker”.

                                                                                      1. 1

                                                                                        I don’t understand how codegen matters here. Clang and rustc share codegen… If C-like codegen (whatever that is) gives C-with-borrow-checker, C-with-borrow-checker is rustc.

                                                                                        1. 1

                                                                                           Oops, I guess I should’ve said “lifetimes do not affect monomorphisation”

                                                                                          1. 1

                                                                                             This is still a mysterious position. You seem to think the C++-ness of templates comes from the monomorphisation code-generation strategy, but most would say it comes from the frontend processing. Monomorphisation is a backend implementation detail and does not affect user-facing complexity, and as for implementation complexity, it is among the simpler things to implement.

                                                                                            1. 1

                                                                                               The whole point was that generics are not needed for a language to have a borrow checker.

                                                                                               If you call a generic function twice with different types, two copies of the function are generated. If you call a generic function twice with different lifetimes, only one copy is generated.

                                                                                               The borrow checker is annotation + static analysis; the generated code is the same. The same is not true for generics or templates.
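
                                                                                               A sketch of that claim (toy functions; you can check the symbol counts yourself on a release build):

                                                                                                   // Generic over a type: identity::<i32> and identity::<f64> are
                                                                                                   // monomorphised into two separate machine-code functions.
                                                                                                   fn identity<T>(x: T) -> T {
                                                                                                       x
                                                                                                   }

                                                                                                   // Generic over a lifetime only: every call shares one compiled
                                                                                                   // body, because lifetimes are erased before code generation.
                                                                                                   fn pick<'a>(x: &'a i32, _y: &'a i32) -> &'a i32 {
                                                                                                       x
                                                                                                   }

                                                                                                   fn main() {
                                                                                                       let (a, b) = (identity(1_i32), identity(1.0_f64));
                                                                                                       let (x, y) = (10, 20);
                                                                                                       println!("{} {} {}", a, b, pick(&x, &y));
                                                                                                   }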

                                                                                              1. 1

                                                                                                 If you think this, you should write a paper. Yes, the borrow checker is a static analysis. As it currently exists, it is a static analysis formulated to work on generics. As far as I know, no one knows how to do the same without generics.

                                                                                    3. 5

                                                                                      There was some recent discussion on Garnet which is an attempt to make a smaller version of Rust.

                                                                                      1. 5

                                                                                        This is the correct position on Rust.

                                                                                        1. 32

                                                                                          I disagree. To me Rust is a great C replacement, and Rust is incompatible with C++ both technically and philosophically. I’ve used C for two decades. I’ve never liked C++, but really enjoy Rust. I’ve written a C to Rust transpiler and converted projects from C to Rust.

                                                                                          C programs can be converted to Rust. C and Rust idioms are different, but language features match and you can refactor a C program into a decent Rust program. OTOH C++ programs can’t be adapted to Rust easily, and for large programs it’s daunting. It’s mostly because Rust isn’t really an OOP language (it only has some OOP-like syntax sugar).

                                                                                           I think people superficially see that both Rust and C++ are “big” and have angle brackets, and conclude they must be the same. But Rust is very different from C++. Rust doesn’t have inheritance, doesn’t have constructors, doesn’t have move constructors, and doesn’t use exceptions for error handling. Rust’s iterators are a completely different beast than C++ iterators. Rust’s macros are closer to C++ templates than Rust’s generics are. Lots of Rust’s language design decisions are at odds with C++’s.
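
                                                                                           To illustrate just the error-handling point, a minimal sketch of the Result style (a toy example, not from any real codebase):

                                                                                               use std::num::ParseIntError;

                                                                                               // Errors are ordinary values: no exceptions, no hidden control flow.
                                                                                               fn double(input: &str) -> Result<i32, ParseIntError> {
                                                                                                   let n: i32 = input.parse()?; // `?` propagates the error upward
                                                                                                   Ok(n * 2)
                                                                                               }

                                                                                               fn main() {
                                                                                                   match double("21") {
                                                                                                       Ok(v) => println!("ok: {}", v),
                                                                                                       Err(e) => println!("parse failed: {}", e),
                                                                                                   }
                                                                                               }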

                                                                                          Rust is more like an ML language with a C subset, than C++.

                                                                                          1. 9

                                                                                            To me Rust is a great C replacement

                                                                                            When you say “replacement”, what do you mean, exactly? For example, could C++ or Ada be great C replacements?

                                                                                            I think some of the disagreements about Rust - and the whole trope about Rust being a “big” language - come from different people wanting different things from their “C replacement”. A lot of people - or maybe just a particularly vocal group of people on Internet forums - seem to like C not just because it can be used to write small, fast, native code, but because they enjoy the aesthetic experience of programming in C. For that sort of person, I think Rust is much more like C++ than C.

                                                                                            Rust is very different from C++. Rust doesn’t have inheritance, doesn’t have constructors, doesn’t have move constructors, doesn’t use exceptions for error handling.

                                                                                            Modern C++ (for some value of “modern”) doesn’t typically have inheritance or exceptions either. I’ve had the misfortune to program in C++ for a couple of decades now. When I started, it was all OO design - the kind of thing that people make fun of in Enterprise Java - but these days it’s mostly just functions in namespaces. When I first tried Rust, I thought it was just like C++ only I’d find out when I screwed up at compile-time rather than with some weird bug at run-time. I had no trouble with the borrow checker, as it just enforced the same rules that my team already followed in C++.

                                                                                            I’ve never liked C++ because it’s too complicated. Nobody can remember the whole language and no two teams use the same subset of the language (and that applies to the same team at two different times too, as people leave and join). People who program alone, or only ever work in academia in small teams, might love the technical power it offers, but people who’ve actually worked with it in large teams, or long-lived teams, in industry, tend to have a dimmer view of it. I can see why people who have been scarred by C++ might be put off by Rust’s aesthetic similarity to it.

                                                                                            1. 2

                                                                                              I’ve never liked C++ because it’s too complicated. Nobody can remember the whole language and no two teams use the same subset of the language (and that applies to the same team at two different times too, as people leave and join).

                                                                                               I think that’s a correct observation, but I think it’s because the C++ standard library and the language itself have over 3 decades of heavy, wide industry use across much of the depth and breadth of software development, generating demands and constraints from every corner of the industry.

                                                                                               I do not think we have ‘solved’ the problem of finding a theoretically plausible definition of the minimal but sufficient set of language features + standard library features that will be enough for 30+ years of use across everything.

                                                                                               So all we have right now is C++ as a ‘reference stick’. If a newbie language compares well to that yardstick, we hail it. But is that the right yardstick?

                                                                                              1. 1

                                                                                                I definitely don’t use C for an “aesthetic experience” (I do not enjoy aesthetics of the preprocessor, spiral types, tediousness of malloc or header files). I would consider C++ also a C replacement in the same technical sense as Rust (native code with minimal runtime, C ABI, ±same performance), but to me Rust addresses C’s problems better than C++ does.

                                                                                                 Even though C++ is approximately a C superset, and Rust is sort of a C superset too, Rust and C++ moved away from C in different directions (ML with more explicitness and hindsight vs. multi-paradigm, mainly-OOP with sugar). Graphical representation:

                                                                                                Rust <------ C ----> C++
                                                                                                

                                                                                                which is why I consider Rust closer to C than C++.

                                                                                                1. 2

                                                                                                  Sorry for the obvious bait, but if you don’t like C, why do you use it? :-). If you’re looking for a non OOP language that can replace C, well, there’s a subset of C++ for that, and it’s mostly better: replace malloc with smart pointers, enjoy the RAII, enjoy auto, foreach loops, having data structures available to you, etc.

                                                                                                  1. 5

                                                                                                    In my C days I’ve been jealous of monomorphic std::sort and destructors. C++ has its benefits, but they never felt big enough for me to outweigh all the baggage that C++ brings. C++ still has many of C’s annoyances like headers, preprocessor, wonky build systems, dangerous threading, and pervasive UB. RAII and smart pointers fix some unsafety, but temporaries and implicit magic add new avenues for UAF. So it’s a mixed bag, not a clear improvement.

                                                                                                     I write libraries, and everyone takes C without asking. But with C++, people have opinions. Some don’t like it when C++ is used like “C with classes”. There are conundrums like handling constructor failures, given that half of C++ users ban exceptions and the other half dislike DIY init patterns. I don’t want to keep track of what number the Rule of $x is at now, or what’s the proper way to init a smart pointer in C++$year, and whether that’s still too new or deprecated already.

                                                                                          2. 1

                                                                                            Now, you could argue that it’s possible to create a C-like language with a borrow checker. I wonder what that would look like.

                                                                                            Zig fits in that space, no?

                                                                                            1. 8

                                                                                              Zig is definitely more C-like, but it doesn’t have borrow checking. I think Vale is closer. There’s MS Checked-C too.

                                                                                              But I’m afraid that “C-like” and safe are at odds with each other. I don’t mean it as a cheap shot against C, but if you want the compiler to guarantee safety at compilation time, you need to make the language easy to robustly analyze. This in turn requires a more advanced static type system that can express more things, like ownership, slices, and generic collections. This quickly makes the language look “big” and not C-like.

                                                                                              1. 7

                                                                                                I don’t think Zig has borrow checking?

                                                                                                1. 7

                                                                                                  It doesn’t. As I understand, its current plan for temporal memory safety is quarantine. Quarantine is a good idea, but if it was enough, C would be memory safe too.

                                                                                                  Android shipped malloc with quarantine, and here is what they say about it:

                                                                                                  (Quarantine) is fairly costly in terms of performance and memory footprint, is mostly controlled by runtime options and is disabled by default.

                                                                                            2. 2

                                                                                              Ada has made the same claims since 1983, and hasn’t taken over the world. Maybe Rust will do better.

                                                                                              1. 1

                                                                                                 I think the big point here is that the author is talking in hypotheticals that don’t always pan out in practice. Theoretically, a perfectly written C program will execute faster than a Rust program because it does not have safety checks.

                                                                                                 That being said, in many cases those limited safety checks actually turn out to be a minuscule cost. Also, as you mentioned, Rust may unlock better threading and other performance gains.
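
                                                                                                 As a hedged illustration (functions invented for the example): written with iterators, the bounds checks disappear entirely, because the compiler can prove no index goes out of range:

                                                                                                     fn sum_indexed(data: &[u64]) -> u64 {
                                                                                                         let mut total = 0;
                                                                                                         for i in 0..data.len() {
                                                                                                             total += data[i]; // bounds check, usually optimised away here
                                                                                                         }
                                                                                                         total
                                                                                                     }

                                                                                                     fn sum_iter(data: &[u64]) -> u64 {
                                                                                                         data.iter().sum() // no bounds checks emitted at all
                                                                                                     }

                                                                                                     fn main() {
                                                                                                         let v: Vec<u64> = (1..=100).collect();
                                                                                                         assert_eq!(sum_indexed(&v), sum_iter(&v));
                                                                                                     }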

                                                                                                 Lastly, I think it is important to note that software architecture, algorithms, and cache friendliness will often matter much, much more than Rust vs. C vs. other low-level languages.