Threads for maxholl

  1. 6

    A somewhat bad alternative is that you put all the strings into a single allocation, with the strings immediately after the struct which points to them.
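    A minimal C++ sketch of that single-allocation pattern (NameInfo and make_name_info are hypothetical names, not any real API): the struct and both strings live in one malloc block, so a single free() releases everything.

```cpp
#include <cstdlib>
#include <cstring>
#include <new>

// Hypothetical result type: the two strings are packed into the same
// allocation as the struct that points to them.
struct NameInfo {
    char *host;     // points into the bytes after the struct
    char *service;  // points into the bytes after the struct
};

// One malloc holds the struct followed by both strings, so the caller
// can release everything with a single free(info).
NameInfo *make_name_info(const char *host, const char *service) {
    size_t hlen = std::strlen(host) + 1;
    size_t slen = std::strlen(service) + 1;
    char *raw = static_cast<char *>(std::malloc(sizeof(NameInfo) + hlen + slen));
    if (!raw) return nullptr;
    NameInfo *info = new (raw) NameInfo;
    info->host = raw + sizeof(NameInfo);
    info->service = info->host + hlen;
    std::memcpy(info->host, host, hlen);
    std::memcpy(info->service, service, slen);
    return info;
}
```

    The trade-off is that the strings can’t be grown or replaced individually, which is part of why it’s only a “somewhat bad” alternative.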

    But yeah in general the clean thing to do is for every allocation to come with its own matching free()-like function. This lets you support patterns more complicated than just a tree of pointers, like refcounted COW subfields.

    Edit to add: this is one place where RAII is pretty clearly a language improvement.

    1. 5

      RAII isn’t a requirement here; the only requirement is destructors. Whether they’re called based on lexical scope or some other notion of reachability, being able to return an object that knows how to clean itself up is very useful.

      Weirdly, this is even more important for kernel code than for userspace: in the kernel you often have multiple memory allocators, and objects own different kinds of non-memory resources. Yet kernel programmers are the ones that tend to be most resistant to using languages with native support for this kind of thing.

      1. 1

        RAII isn’t a requirement here, the only requirement is destructors

        Sure. It’s just a really nice way to do destructors. IMO it’s very nearly the one and only nice thing that C++98 had.

      2. 1

        I don’t see how RAII is an improvement. The original call becomes the constructor, but then you still have to provide a destructor as a separate function. Yes, they’re nicely bundled together as methods on the same type, but if you decide to not provide a destructor because the return type doesn’t need one, you still can’t add one if you want to change the type in the future.

        1. 2

          I suspect that @0x2ba22e11 is conflating a language mechanism with one of the abstractions that you can build on top of it. RAII is an idiom. The underlying language mechanism is types that have destructors that are called implicitly when the object goes out of scope. RAII depends on this for automatic storage, but it’s also useful in this context for structure fields. If the object were defined with fields of this kind of type, then it would end up with a synthesised destructor that managed all of the cleanup. A C++ version of getaddrinfo might look something like this:

          struct addrinfo
          {
              int              ai_flags;
              int              ai_family;
              int              ai_socktype;
              int              ai_protocol;
              std::vector<sockaddr> ai_addr;
              std::string ai_canonname;
          };
          
          std::list<addrinfo> getaddrinfo(const std::string_view node,
                                          const std::string_view service,
                                          const std::optional<addrinfo> hints);
          

          The returned list is automatically destroyed when it goes out of scope (if it isn’t moved elsewhere). That’s the RAII bit. The list’s destructor explicitly calls the destructor on the elements. The destructor on addrinfo is synthesised: it calls the destructor on std::vector, which calls the (trivial) destructor on each sockaddr, and it calls the destructor on std::string for the canonical name.

          This makes it easy to create APIs that return deep structures. There’s no explicit cleanup in the caller, which means that you at least maintain source compatibility if you add deeper copies.

          If you think that you might want to add extra fields later, then you can declare an out-of-line destructor, delete the copy and move constructors, and make the constructor private. This prevents any code other than yours from being able to allocate instances of this structure (so you can prevent them from existing on the stack or in globals) and means that anything that destroys them must call into your code (so if there are fields that it doesn’t know about, this remains fine). That’s a bit more work, and it would be nice to have some built-in support.

          In particular, it’s easy in C++ to make a class that can be allocated only on the heap and only by unique pointers by making the constructor private but making std::make_unique a friend but you can’t do that with std::shared_ptr. Instead, you need to do a horrific dance where you have a public factory method that has a private subclass of the class that has a public constructor (which can call the private constructor because it’s declared inside a method of the class) and pass that as the type for std::make_shared.
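          For the record, a minimal sketch of that dance (Widget and create are hypothetical names): the factory declares a local subclass with a public constructor, and because the local class is declared inside a member function, it may call the otherwise-private constructor, which std::make_shared cannot.

```cpp
#include <memory>

class Widget {
public:
    // Public factory: the only way to obtain a Widget, always
    // heap-allocated and owned by a shared_ptr.
    static std::shared_ptr<Widget> create(int v) {
        // Local subclass with a public constructor; declared inside a
        // member function, so it can reach the private constructor.
        struct Derived : Widget {
            explicit Derived(int n) : Widget(n) {}
        };
        return std::make_shared<Derived>(v);
    }

    int value() const { return value_; }

private:
    explicit Widget(int v) : value_(v) {}
    int value_;
};
```

          Usage is then just `auto w = Widget::create(42);` — no other code can construct a Widget on the stack, in a global, or via plain new.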

          1. 1

            Just because you don’t write it doesn’t mean it doesn’t exist. The destructor is still part of the API. As you mentioned, there is a large difference between

            ~addrinfo();           // declared here, defined out of line
            ~addrinfo() = default;
            ~addrinfo() = delete;
            

            The first is always a function call (most likely an indirect call, because libc is usually linked dynamically), even if the destructor itself is a no-op. The second requires exposing more internal details, which makes compatible library upgrades more difficult and requires recompilation on library updates. The third disallows adding any destructor in the future.

            If your language is built to support proper deinitialization, RAII adds nothing in the first and last cases. In the second case, RAII allows for automatic generation of the destructor, but as I stated above, I consider this a very bad choice for library interfaces.

            1.  

              Just because you don’t write it, doesn’t mean it does not exist. The destructor is still part of the API. As you mentioned, there is a large difference between

              Hence my comment about an explicit out-of-line destructor. This will ensure that it is not inlined into the caller and so its behaviour can be changed later without breaking the ABI.

          2. 1

            ? If you add a destructor and then the downstream consumer code is recompiled, it’ll automatically add calls to your destructor.

            IMO constructors are boring (ordinary functions work fine), the exciting/useful part of RAII is the destructors.

            ABI compatibility is just a (good) reason to always always always include a constructor.

            1.  

              Adding a public constructor allows the class to be constructed on the stack and in globals. This means that its layout (or, at least, size and the offsets of all public fields) is now part of your ABI. See my post above for how to avoid this: you need a private constructor and factory methods.

              The worst thing that C++ inherited from C is lack of awareness of the underlying memory type. I wish it were possible to write a C++ constructor with an explicit type of the memory provided so that you can have different constructors for stack, heap, global, or even guaranteed-to-be-zero memory.

        1. 2

          You can reasonably package only object libraries. If the API (header) changed, all dependent objects have to be rebuilt. For Rust, this also means invalidation on compiler updates. With that out of the way, a dependency update should only rebuild the dependency once and then relink all dependent libraries/applications. And today, linking a binary is freaking cheap.

          1. 3

            I’d like to chip in: “A Programming Language” by Kenneth E. Iverson, and “How Do Committees Invent?” by Melvin E. Conway.

            1. 2

              The Conway one is already there, below The Mythical Man-Month. I’ll check out the other one, thanks!

              1. 1

                You’re right, my bad. Missed it.

            1. 1

              My biggest gripe with git is diff and merge. They operate on lines, but code has a syntax. Many times I have had unnecessary merge conflicts, or a merge downright silently breaking code. Jane Street has shown that syntax boundaries correlate with word boundaries, and tree-sitter can parse many languages directly, but the line-based approach is deeply rooted in git.

              1. 4

                You can replace git’s default merge tool with any you like, so this isn’t really a git problem.

              1. 2

                Alright, this is cool, don’t get me wrong, but: why e-ink? I too want a frontlit panel for programming, but I feel like this use case would be better served by an e-paper LCD panel, like the Pebble had, but several years newer. My old laptop had a cheaper screen which was decently readable in direct sunlight with the backlight off. A grayscale transflective panel would IMO be significantly better, as they don’t suffer from e-ink response times.

                1. 2

                  You’re referring to the Sharp Memory LCD panels. I don’t recall them ever selling a large panel, let alone one with something close to HiDPI. You’d probably need a fairly substantial volume expectation for them to consider making such a thing.

                  Compare that to the 13.3″ e-ink grayscale panel, which IIRC has been around in some form for nearly a decade.

                1. 6

                  There are multiple points here I disagree with:

                  1. Go’s and Zig’s defer are rather different beasts. Go runs deferred statements at the end of the function, Zig at the end of the scope. Want to lock a mutex inside a loop? You can’t use Go’s defer for that.
                  2. Destructors can’t take arguments or return values. While most destructions only release acquired resources, passing an argument to a deferred call can be very useful in many cases.
                  3. Hidden code: all defer code is visible in the scope. Look for all lines starting with defer in the current scope and you have all the calls. Looking for destructors means looking at how drop is implemented for all the types in the scope.
                  1. 11

                    Go’s and Zig’s defer are rather different beasts. Go runs deferred statements at the end of the function, Zig at the end of the scope. Want to lock a mutex inside a loop? You can’t use Go’s defer for that.

                    This distinction doesn’t really matter in a language with first-class lambdas. If you want to unlock a mutex at the end of a loop iteration with Go, create and call a lambda in the loop that uses defer internally.

                    destructors can’t take arguments or return values

                    But constructors can. If you implement a Defer class to use RAII, it takes a lambda in the constructor and calls it in the destructor.

                    hidden code all defer code is visible in the scope

                    I’m not sure I buy that argument, given that the code in defer is almost always calling another function. The code inside the constructor for the object whose cleanup you are defering is also not visible in the calling function.
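                    A minimal sketch of such a Defer class (a hypothetical helper, not a standard type), assuming C++17 for class template argument deduction: the callable captured in the constructor runs when the object leaves scope.

```cpp
#include <utility>

// Takes a callable in the constructor and invokes it in the destructor,
// giving scope-based cleanup similar to Zig's defer.
template <typename F>
class Defer {
public:
    explicit Defer(F f) : f_(std::move(f)) {}
    ~Defer() { f_(); }
    Defer(const Defer &) = delete;
    Defer &operator=(const Defer &) = delete;

private:
    F f_;
};

// Each loop iteration gets its own cleanup, as with a per-scope defer.
int count_cleanups() {
    int cleanups = 0;
    for (int i = 0; i < 3; ++i) {
        Defer d([&] { ++cleanups; });  // runs at the end of each iteration
        // ... work that must happen before the cleanup goes here ...
    }
    return cleanups;
}
```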

                    1. 4

                      hidden code all defer code is visible in the scope

                      I’m not sure I buy that argument, given that the code in defer is almost always calling another function. The code inside the constructor for the object whose cleanup you are defering is also not visible in the calling function.

                      The point is that as a reader of Zig, you can look at the function and see all the code which can be executed. You can see the call and set a breakpoint on that line. As a reader of C++, it’s a bit more convoluted to set a breakpoint on a destructor.

                      1. 2

                        you can look at the function and see all the code which can be executed.

                        As someone who works daily with several-hundred-line functions, that sounds like a con far more than a pro.

                      2. 1

                        But constructors can.

                        This can work sometimes, but other times packing pointers in a struct just so you can drop it later is wasteful. This happens a lot with for example the Vulkan API where a lot of the vkDestroy* functions take multiple arguments. I’m a big fan of RAII but it’s not strictly better.

                        1. 1

                          At least in C++, most of this all goes away after inlining. First the constructor and destructor are both inlined in the enclosing scope. This turns the capture of the arguments in the constructor into local assignments in a structure in the current stack frame. Then scalar replacement of aggregates runs and splits the structure into individual allocas in the first phase and then into SSA values in the second. At this point, the ‘captured’ values are just propagated directly into the code from the destructor.

                        2. 1

                          If you want to unlock a mutex at the end of a loop iteration with Go, create and call a lambda in the loop that uses defer internally.

                          Note that Go uses function scope for defer. So this would actually acquire locks one after another and then release them all at the end of the function. This is very likely not what you want and can even risk deadlocks.

                          1. 1

                            Is a lambda not a function in Go? I wouldn’t expect defer in a lambda to release the lock at the end of the enclosing scope, because what happens if the lambda outlives the function?

                            1. 1

                              Sorry, I misread what you said. I was thinking defer func() { ... }() not func() { defer ... }().

                              1. 2

                                Sorry, I should have put some code - it’s much clearer what I meant from your post.

                        3. 5

                          The first point is minor, and doesn’t really change the overall picture of leaking by default.

                          Destruction with arguments is sometimes useful indeed, but there are workarounds. Sometimes you can take arguments when constructing the object. In the worst case you can require an explicit function call to drop with arguments (just like defer does), but still use the default drop to either catch bugs (log or panic when the right drop has been forgotten) or provide a sensible default, e.g. delete a temporary file if temp_file.keep() hasn’t been called.
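                          A sketch of that temp-file example (TempFile and keep are hypothetical names): the destructor provides the sensible default, and keep() is the explicit call that overrides it.

```cpp
#include <cstdio>
#include <string>

// Deletes its file on destruction unless keep() was called first: the
// default drop is the fallback, the explicit call carries extra intent.
class TempFile {
public:
    explicit TempFile(std::string path) : path_(std::move(path)) {
        if (std::FILE *f = std::fopen(path_.c_str(), "w")) {
            std::fclose(f);  // create an empty file at the path
        }
    }
    void keep() { keep_ = true; }  // opt out of the default cleanup
    ~TempFile() {
        if (!keep_) {
            std::remove(path_.c_str());
        }
    }
    const std::string &path() const { return path_; }

private:
    std::string path_;
    bool keep_ = false;
};
```

                          A forgotten keep() then shows up as a missing file rather than a silent leak, which is the inverted default the parent comment describes.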

                          Automatic drop code is indeed implicit and can’t be grepped for, but you have to consider the trade-off: a forgotten defer is also invisible and can’t be grepped for either. This is the change in default: by default there may be drop code you may not be aware of, instead of by default there may be a leak you may not be aware of.

                          1. 3

                            destructors can’t take arguments or return values. While most destructions only release acquired resources, passing an argument to a deferred call can be very useful in many cases.

                            Yes, more than useful:

                            • Zero-cost abstraction in terms of state: A deferred call doesn’t artificially require objects to contain all state needed by their destructors. State is generally bad, especially references, and especially long lived objects that secretly know about each other.
                            • Dependencies are better when they are explicit: If one function needs to run before another, letting it show (in terms of what arguments they require) is a good thing: It makes wrong code look wrong (yes, destruction order is a common problem in C++) and prevents it from compiling if you have lifetimes like Rust.
                            • Expressiveness: In the harsh reality we live in, destructors can fail.

                            I think the right solution is explicit destructors: instead of the compiler inserting invisible destructor calls, compilation fails if you don’t write them yourself. This would be a natural extension to an explicit language like C – it would only add safety. Not only that: it fits well with defer too – syntactic sugar doesn’t matter, because it just solves the «wrong default» problem. But more than anything, I think it would shine in a language with lifetimes, like Rust, where long-lived references are precisely what you don’t want to mess with.

                            1. 2

                              You could run an anonymous function within the loop in Go, just to get a per-iteration defer. Returning a value from a defer is also possible:

                              func run() (retval int) {
                                  // Register the function-scope defer up front; a defer
                                  // placed after an endless loop would never be reached.
                                  // (main itself cannot have a return value, hence run.)
                                  defer func() {
                                      retval = 42
                                  }()
                                  for i := 0; i < 3; i++ {
                                      func() {
                                          // do stuff per iteration
                                          defer func() {
                                              // per-iteration cleanup
                                          }()
                                      }()
                                  }
                                  return
                              }
                              
                            1. 1

                              You can also achieve this with zig cc by specifying the glibc version in the target: zig cc -target x86_64-linux-gnu.2.28. This also works for C++ and all supported target platforms.

                              1. 5

                                Extrapolating from the benchmarks game seems like a stretch too far for me. Firstly, the quality of implementations varies greatly between languages; I see no plausible reason for such a large difference between JS and TS. Secondly, are the majority of cloud applications doing computations in the application language? I would expect most applications to either be working a lot with databases or wrangling network IO. Both would make performance far less language-dependent, as the heavy computation is done in the database or the OS, which are written in C/C++, anyway.

                                1. 33

                                  TextMate and Transmit being “better than anything apt-get could give me” sounds rather amusing, considering that both vim and emacs have exploded in popularity in the last decade by taking in lots of those TextMate users :) And TextMate 2 is GPL, haha.

                                  Some programmers use proprietary software, sure. Outside of game dev though they more often than not absolutely despise it.

                                  The number one practical problem with using closed proprietary software is that you’re literally going back to basically where Stallman started – with that printer driver that he couldn’t just fix because it was closed.

                                  doing open source because that’s what we want to do, what we’re reinforced in doing, all the while invoking generalized, thoroughly hypothetical “users” to lend importance and nobility to our hobbies, compulsions, and fancies.

                                  I generally don’t think that much about users when working on hobby projects and don’t care about nobility. My projects are open source just because why the hell would I keep them to myself? They would just go to waste there and probably be lost forever. The public is the best backup/preservation system there is. If I helped someone by doing that, well, that’s just a nice bonus to me, not the goal.

                                  1. 22

                                    My reference to better-than-apt referred to that earlier time, when TextMate was hot new stuff. The day I bought my license, there wasn’t any comparable substitute in open source. And certainly nothing like the filesystem-based integration with Transmit.

                                    Folks cloned the snippet system for Vim and Emacs pretty quickly, at least partway. But that wasn’t really even half the TextMate proposition. It was just the most visible bit, from the screencasts. It took a long time before anyone really went after the bundle system, the configuration flow, the UI, and the command namespace. When they did, they actually maintained TextMate bundle compatibility—direct clone. Eventually, Atom. More or less a GitHub founder’s pet project, so I’m told.

                                    I’m back on Debian now. UltiSnips for Vim, in the terminal. But the kind of work I do has changed. And it’s been 17 years. If Allan Odgaard had kept up pace with new ideas, rather than diverting into a big, HURD-style rewrite, I wonder where editors would be today.

                                    I think it’s fair to set games somewhat apart as its own world. It’s not really the same industry. Ditto film. But my experience doesn’t track yours beyond that. People despise bad software. Despite the hype, open ain’t always better. Mako Hill had a good bit on that.

                                    As for what people despise, I wouldn’t take that as any indication. Twitter’s full of folks venting about open source, too. Especially when they’re made to use it. That’s not to draw any equivalence. It’s just to say samples of grousing don’t tell us much.

                                    The Stallman printer story is canon. But I don’t think that makes it real. Most people, especially these days, don’t want to dive into an unfamiliar system for a feature add. They want to call somebody who knows the system, who’s committed to deliver. Or, failing that, drive-by some GitHub repo with a #feature issue, and miraculously see it closed by a PR on the double.

                                    For good, closed software done right, the end user experience is actually better. You ask, someone capable responds. No one calls you a noob, tells you RTFM, or throws the work to open a pull request back on you. The work gets done by the person best positioned to do it. The experience of the software honors the value of your time.

                                    I have more than a thousand GitHub repos, and have publicly referred to the platform as a shellmound. I’m not sure my random doodle repos count as open source, in any meaningful sense of the term. GitHub doesn’t even insist on a license for free storage, as long as you’re comfortable hanging your laundry in public.

                                    1. 17

                                      The Stallman printer story is canon. But I don’t think that makes it real. Most people, especially these days, don’t want to dive into an unfamiliar system for a feature add.

                                      Counterpoint: For a developer or company with little money, which has been the case for the better part of my career (notably excluding my few years at Microsoft), if they want a feature, fix, or integration for an open-source dependency, they can make it happen given enough time, effort, and skill, but with a closed-source dependency, they’re stuck, unless perhaps they’re good at reverse engineering. That’s a big reason why I prefer open source for libraries, unless there just isn’t a good open-source solution for the problem at hand (e.g. speech synthesis or optical character recognition). Maybe I’m better than most at diving into an unfamiliar codebase and bending it to my will. I find it entirely plausible that Stallman was, or would have been, good at that too.

                                      1. 8

                                        Arguably, the space of possible software, and the ways it was written and interacted with, were smaller, so Stallman probably would have been good at it. He was a systems programmer, in a systems programming environment, who wanted to hack on a systems program/driver/firmware.

                                        That doesn’t necessarily mean that most or even all developers can or should be able to, say, step out of systems programming and instantly know how to fix a React bug that is plaguing them. Software is more diverse and more specific, programming systems are more layered and oriented towards the problem, and out of that come subfields that are non-transferable even if some of the basic concepts carry over.

                                        1. 10

                                          I think the problem Stallman faced was that it was illegal to fix the driver. You technically don’t have to have knowledge to fix a React problem, it’s enough you can find someone who can and is allowed to (for payment, if need be).

                                          FLOSS development doesn’t have to be a net drain on money. The FSF’s standard answer is “support and training”, and that’s fine as far as it goes, but it’s really hard to realize in an age where a project without good docs seldom gets traction and many developers choose burnout handling issues rather than asking for money.

                                      2. 8

                                        As for what people despise, I wouldn’t take that as any indication. … The Stallman printer story is canon. But I don’t think that makes it real. Most people, especially these days, don’t want to dive into an unfamiliar system for a feature add.

                                        I think that, for folks who agree with these points, this makes sense and contextualizes the rest of the post. But not everybody will agree with this. Personally, I have written a Free Software driver in anger, regretting my hardware purchase and petitioning the vendor for documentation. It was a choice made not just from ability, but from desperation.

                                        For good, closed software done right, the end user experience is actually better.

                                        And for bad closed software done wrong? It can destroy hardware and hide malware, just to pick on one particularly rude vendor. Note that I would not be able to publish this sort of code in the Free Software ecosystem and convince people to use it, because folks would complain that the software doesn’t provide any functionality to the users. And this is a foundational weakness of proprietary software vendors: they are incentivized to make harmful software experiences due to privileged legal status. (Or, to be blunt: if I published this sort of malware, I would be arrested and jailed for crimes.)

                                        1. 4

                                          Haven’t done any real systems hacking in a long while. I did when I was young, and had more time. I definitely had to do some driver and firmware work, to get Linux working with various things. I’m not sure if I was angry going into those projects, but I remember being pretty frustrated coming out of them!

                                          A lot of that came about from buying cheap hardware. I remember one Acer laptop in particular, the one I chose for college: great specs, great price, terrible build quality, total bodge-job, component- and firmware-wise. I eventually bought the MacBook, and came back to Linux on ThinkPads. I pay a premium for ThinkPads, but I get what I pay for, Linux hardware support very much included. It makes way more sense than spending hours brushing up and hacking patches.

                                          As for bad proprietary software: oh yeah, it’s out there. But the idea that software vendors have some inherently privileged legal position doesn’t fly. They have copyright, and can require customers to buy licenses. But the terms of those licenses can and do vary. I have advised on several software license deals, for substantial money, on terms that basically boiled down to Apache 2.0 plus a payment clause, limited to just the paying customer.

                                          If all you’re doing is buying licenses on take-it-or-leave-it terms, or negotiating with highly aggressive companies much bigger than yours, yeah, you’re likely to see terms that strongly favor the vendor. That’s how leverage works. But proprietary software sales get done on sane, fair terms all the time. Perpetual licenses. Modification rights. Meaningful warranties. Accountable maintenance commitments. Sometimes the less impressive product wins out, because the terms offered for it are better.

                                        2. 7

                                          Most people, especially these days, don’t want to dive into an unfamiliar system for a feature add

                                          I think that is observably untrue, given the massive number of extensions available for all of the popular editors, for platforms such as GitHub, or even MS Office. The key point to remember is that being able to modify the behaviour of a program does not necessarily depend on its source being available. Somewhat counter-intuitively, it’s often easier in proprietary programs because they’re forced to maintain (and document) stable interfaces for third-party extensions, whereas open source projects can often just tell people to go and hack on the source directly. Specifically on editors, Vim, Emacs, and VS Code all have extension ecosystems that are significantly larger than the core product, which exist because people who were missing a feature decided to dive into an unfamiliar ecosystem and add it. VS Code and Emacs both did well by making that system less unfamiliar to their early userbase by building it on top of a language (TypeScript, Lisp) that this audience used.

                                        3. 22

                                          considering that both vim and emacs have exploded in popularity in the last decade by taking in lots of those TextMate users :) And TextMate 2 is GPL, haha.

                                          Some programmers use proprietary software, sure. Outside of game dev though they more often than not absolutely despise it.

                                          Get out of your bubble. Outside that bubble a lot of developers use JetBrains IDEs or Visual Studio. Visual Studio Code has gained a lot of traction in recent years, but initially mostly because Code was much better for web development than the competition and it is free. Not because it is open source [1].

                                          In the most recent Stack Overflow developer survey, Visual Studio Code is used by 71.07% of developers, Visual Studio by 32.92%, and IntelliJ by 29.69%. The most popular fully (?) open source editor is actually Notepad++ with 29.09%. And vim takes the next place at 24.82%, but I wouldn’t be surprised if that is because people use vim for quick edits on remote Linux machines. Emacs dangles somewhere at the bottom of the list with only 5.25%, surpassed by many proprietary applications like Xcode, PyCharm, or Sublime Text.

                                          The number one practical problem with using closed proprietary software is that you’re literally going back to basically where Stallman started – with that printer driver that he couldn’t just fix because it was closed.

                                          I agree that this is a major downside of closed source software. But most people want to fix the things they are working on, not their tools.

                                          [1] https://underjord.io/the-best-parts-of-visual-studio-code-are-proprietary.html

                                          1. 10

                                            proprietary applications like […] PyCharm

                                            PyCharm/IntelliJ IDEA are Apache2 and most devs use the free software version available at https://github.com/JetBrains/intellij-community

                                            1. 2

                                              PyCharm/IntelliJ IDEA are Apache2 and most devs use the free software version

                                              I have no reason to doubt this, but are you aware of any usage statistics? I’m curious whether “most devs” is more like “just over half” or “over 99%.”

                                              Anecdotally, every company I’ve worked at where the devs used JetBrains IDEs has paid for the commercial version, but totally willing to believe those companies aren’t typical.

                                              1. 1

                                                From my anecdotal experience, no company has paid for the commercial JetBrains version, even when most of the devs use it. It might very well be a cultural thing.

                                                1. 3

                                                  At the two companies where I used it, we paid for the whole toolbox subscription for every developer who wanted it. Most of us used IDEA Ultimate and one or more of CLion, AppCode or PyCharm Professional. Many of us also used DataGrip.

                                                  I still maintain a subscription for my consulting work now, too.

                                                  1. 1

                                                    Can’t speak for PyCharm in particular, but for every other flavor of IntelliJ-based IDE I only know of companies who have paid, be it PHPStorm or, more likely, Ultimate (if working with many languages).

                                              2. 5

                                                I don’t really disagree with you, but your arguments seem kind of weak.

                                                Get out of your bubble.

                                                In the most recent Stack Overflow developer survey […]

                                                What now? It’s just another bubble.

                                                [1] https://underjord.io/the-best-parts-of-visual-studio-code-are-proprietary.html

                                                I haven’t ever used any of those “best parts” and never seen anyone using them.

                                                1. 2

                                                  I haven’t ever used any of those “best parts” and never seen anyone using them.

                                                  I’ve used the remote stuff. That and PlatformIO are the only things that ever cause me to use VS Code over emacs or one of the JetBrains tools.

                                                  The extension marketplace is proprietary too, and I’d call it one of the best parts of VS Code.

                                              3. 10

                                                Some programmers use proprietary software, sure. Outside of game dev though they more often than not absolutely despise it.

                                                I mean, there’s Vivado, and then there’s Visual Studio, just like there’s, I dunno, a steaming, rotten pile of horse shit and then there’s pancakes.

                                                There are many bubbles in the tech world and game dev is only one of them. I worked in an embedded shop where the state of Linux was that we had one colleague who tried Fedora and he thought it was like a beta or something because he couldn’t put things on the desktop and, in his own words, he expected some things not to work as well as Windows but that was years away from being useful. The thought of writing code in a text editor after finishing college, where they put you through the ritual of compiling stuff you wrote in vim by writing gcc incantations, seemed about as foreign to these guys as the idea of sailing to America on a steam ship.

                                                There are plenty of bubbles where programmers use both kinds of tools, too. Way back when I was doing number crunching, everyone at the lab was using emacs, vim, nedit or notepad, but also Matlab, which everyone hated for various reasons but nowhere near as much as they hated, say, gdb. We worked a lot with another research group at another university where it was the other way around: someone had figured Octave was good enough and they had just one Matlab installation for things Octave really couldn’t do or to port their code, and used the money they saved on Matlab licenses to buy a bunch of Windows and Visual Studio licenses.

                                                I don’t have much in the way of numbers here but you shouldn’t take a tweet about Vivado as the standard for what people think about closed source tools. Vivado is successful because it barely works and there are no alternatives that work (for any non-hobbyist definition of “works”) – it’s successful largely through vendor lock-in.

                                                1. 4

                                                  Speaking of bubbles, I have literally not even heard of Vivado before today.

                                                  1. 3

                                                    Well, what can I say, not every case of acute horse diarrhea deserves to be famous :-D.

                                                    But seriously, this is one of the things I love about our work. Vivado is huge. I suspect it’s already old enough that virtually everyone who finishes an EE or CompEng degree has seen it at least once (it’s about ten years old, I think, and it replaces another suite called ISE which was also all sorts of horrible in its own way). It’s very likely that it’s been involved in one way or another in dozens of stories that popped up here on lobste.rs, like stories about RISC-V or cryptography or FPGA implementations of classical systems. And it’s perfectly possible for someone to be an excellent programmer and earn a living writing code and never hear about it. No matter how good you are at anything computer related, there’s always something out there big enough that it’s got tens of thousands of people behind it that you had no idea about.

                                                    If you keep an open mind, computer engineering will surprise you in all sorts of fresh ways, all the time – not all of them good but oh well. Dogmatism is fun and it feels like you’re right all the time but it’s really fscking boring. I’ve done it and it sucks.

                                                  2. 1

                                                    By reputation, another reason Vivado is successful is because all the other FPGA toolchains are reportedly even worse. Don’t get me wrong, Vivado is a tyre fire that segfaults, but the others are apparently even less reliable.

                                                    e.g. I’ve heard that at some shops, people writing FPGA code targeting competing FPGAs will actually write and debug their entire project on Xilinx FPGAs with Vivado and then port the code to the other toolchain for the originally intended target FPGA.

                                                    1. 6

                                                      I’ve done too little real work on Altera’s stuff to have had first-hand experience but from the little I’ve done I can certainly say Quartus sucked at least as much as Vivado back when I last touched it (2015-ish?). Maybe it’s changed in the meantime but somehow I doubt it :-D. I heard Lattice’s stuff is tolerable but I never tried it. I did try Microsemi’s Libero thing though and it makes Vivado feel like Visual frickin’ Studio. Porting designs between toolchains is not quite like porting program code and it sounds like a really bad idea on paper but, indeed, given how bad some of these tools are, I can imagine it’s just the only way to do things productively sometimes.

                                                      But it’s really a lot more complicated than how good the toolchain is. A big – probably the biggest – reason why Vivado and Quartus are so successful is simply that Xilinx and Altera… well, Intel, are really successful. The next version of Vivado could suck ten times as bad as the current one and it would barely put a dent in their market share, just because it’s what you use to do things with Xilinx’ devices. Developer souls are a lot more fungible than silicon.

                                                      Also, being so big, they both have roots in the academic world and they run deep, and it’s simply what lots and lots of people learn in school. They’re not very good but as long as the bitstream flows out, who you gonna call?

                                                      A competitor with better development tools could, in theory, steal a chunk of this market, but that chunk would be exactly as big as how many programmable logic devices they can sell, and there are constraints – both technical (as in, performance) and non-technical – that dictate that long before anyone even considers the quality of development tools. The usual retort is that developer tools are important for productivity. And they are, but the best designs, from the most productive teams, won’t be worth crap if there are no physical devices for them, or if these devices cannot be obtained on time and in the required quantity and at the right price and so on.

                                                      I also secretly suspect that it’s just a case of the audience of these tools being a little more tolerant to weirdness and poor quality. I mean, they taught me how a transistor works in my second year of uni. I then spent the other half of my undergrad years (and I could’ve spent a whole two years of masters’ on that, too) learning about all the ways in which it doesn’t quite work exactly like that. Software segfaulting under your nose is just a tiny drop of weird shit in a very big bucket of weird shit which people way smarter than you reverse-engineered out of nature. Your entire job revolves around manipulating all sorts of weird but inevitable things. So you just learn to chalk up the fact that Vivado crashes if you have more than 64 mux instances whose names start with the letter X under “inevitable” [1], somewhere between “all P-N junctions have leakage current” and “gate capacitance varies with gate oxide thickness”. What’s one more?

                                                      [1] Note that this is something I just made up on the spot but it does sound like something Vivado would do…

                                                      1. 2

                                                        A competitor with better development tools could, in theory, steal a chunk of this market, but that chunk would be exactly as big as how many programmable logic devices they can sell

                                                        You don’t exactly need to sell programmable logic devices to steal a chunk of FPGA development tool market. Symbiflow has already done that with Lattice’s FPGAs, and are slowly starting to bite into Xilinx FPGAs as well. Quicklogic just released their new FPGA product without their own software, just by giving Symbiflow their bitstream generator and FPGA definitions. There are signs that the new Renesas FPGAs are using Yosys (part of Symbiflow) too.

                                                        The reason why these closed software tools are so entrenched is that they are tied in with hardware. And as of now, open source hardware is a much more niche thing than open source software. With time, that will probably change, but even then, the software landscape in places dealing with hardware will progress faster. Just remember how, a while ago, working on microcontrollers almost always meant dealing with vendor-specific IDEs. That is basically gone now. With time, that will happen with most FPGAs as well.

                                                        1. 1

                                                          I haven’t tried Symbiflow since 2020 and it’s a project I’m really cheering for, so I’m not going to say anything bad or unenthusiastic about it. But it’s worth keeping in mind that the iCE40 line (the only one that it supports well enough to be usable for real life-ish projects) has a very straightforward, wrinkle-free architecture that lends itself easily to reverse-engineering. Even though it’s based on the same underlying technology (FPGA), the market Symbiflow can realistically target is practically different from the one where people are using Vivado and complaining about it. Its relative success in this field is important and useful, not to mention liberating to a lot of people (including yours truly), but I wouldn’t be too quick to generalize it.

                                                          This approach will probably find some success at the low-power, low-cost end of the spectrum, where not only are customers rather unwilling to pay the licensing costs, but some companies, especially fabless vendors like Quicklogic, would be reasonably happy to be rid of software development cost. But this part of the FPGA market works in entirely different ways than the part of the FPGA market that’s bringing the big bucks for Xilinx (and Altera) and where the people who associate the word Vivado with an overwhelming feeling of dread work. For one, it’s not a field where architecture and fabrication technology are important competitive advantages, so there’s not that much value in keeping it closed and keeping the software tied to it.

                                                  3. 11

                                                    Some programmers use proprietary software, sure. Outside of game dev though they more often than not absolutely despise it.

                                                    I don’t know, I would far rather use Visual Studio than, e.g., whatever editor + gdb again.

                                                    1. 6

                                                      You’re right that systems programmers and hardware devs prefer small and lean tools, which are often released as open source, but walk into any corporate dev house and you’ll see a lot of Visual Studio and IntelliJ with Xcode and other IDEs sprinkled in. The sole exception to this are web devs, whose first usable tool was VScode.

                                                      If you have the skills and time to hack on your tools, open source is better, but for most people it’s just a better proposition to pay someone else for their tools and use the saved time to do their job and make that money.

                                                      1. 4

                                                        Actually most devs use proprietary software, and kemitchell even mentions this in the post. He switched to Mac, but Windows is still the most popular platform even amongst devs[1]. I suspect the Stack Overflow survey results are skewed and that Linux likely has even less market share than they found.

                                                        https://insights.stackoverflow.com/survey/2021#section-most-popular-technologies-operating-system

                                                        1. 2

                                                          I hope I made the point that devs do use proprietary software. It’s not true that devs just won’t use closed code. But I don’t have data to support the claim that most devs do. I suppose you could get there by arguing Windows and OS X are closed, and the number of folks on Linux is small. I’ve enjoyed looking at the Stack Overflow survey, but I have no idea how representative that is.

                                                          For what it’s worth, when it comes to laptops and desktops, I’m back on Linux again. I did switch to Mac when I was younger. And I did my programming career that way, before going to law school.

                                                          1. 1

                                                            I have a trifecta! I have a Linux laptop (which I used when I first joined the company to do development), a Mac laptop (which I also do development on) and a Windows laptop (because our Corporate Overlords are Windows only) that I don’t use, and can’t return (I’ve asked).

                                                          2. 2

                                                            But how much of that is because companies require Windows? When I was hired at my current company, half the employees used Macs (mostly developers) and the other half Windows. We then got bought out by an “enterprise” company, and they use Windows exclusively. They even sent me a Windows laptop to use, even though I use a Mac. The Windows laptop sits unpowered and unused, only to be turned on when I’m reminded that I have to update the damn thing.

                                                        1. 1

                                                          C really lacks defer. Not much more to be said here. The split resource allocation and release required by goto doesn’t work well for me, I find it too easy to get lost in.

                                                          1. 1

                                                            I’d also love a defer in C. It works well when you want to clean up something at the end of the current scope.

                                                            However, in my example, the _init and _shutdown functions are called at different times. If I defer in the init function, all the deferred cleanups will be run at the end of the init function - but I want them to run only in case of error! This could be managed by e.g. looking at rc. But then the cleanup code for _shutdown still has to be written, leaving the duplication mentioned in the problem statement.

                                                            Defer is much more general in Go because you can start a concurrent function with go. You can extend the lifetime/runtime of that function as desired and use defer to clean up when it has reached the end.

                                                            1. 1

                                                              Doesn’t Go’s defer run on end-of-function instead of end-of-scope? I’m thinking more of Zig, which also includes errdefer https://ziglang.org/documentation/0.9.0/#errdefer

                                                              1. 1

                                                                You are right, Go’s defer runs at the end of the function.

                                                                I just had a look at the Zig link. errdefer is a cool feature. This solves the first problem I state above - not running the cleanup code when returning success on init. It does not solve the problem of having to write a separate shutdown function that consists in essence of all the errdefered calls in reverse order.

                                                                The code presented in the article covers both needs. Regarding it being “easy to get lost in”, there’s - unfortunately - a bit of discipline required to get it right. This is the bad news. The good news is that it’s not the kind of absolute discipline you need for, say, memory management. To check whether you got it right, you read the function from both ends at the same time (e.g. in two editor panes) and verify that the order of gotos matches the order of the labels and release actions.
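
                                                                For comparison, here is a sketch of the same init/shutdown shape in Python using contextlib.ExitStack (the subsystem_init/subsystem_shutdown names are made up for illustration): cleanups registered during init run automatically if an error escapes, and pop_all() hands them to the caller on success, so the reversed list of cleanups becomes the shutdown function and doesn’t have to be written twice.

                                                                ```python
                                                                from contextlib import ExitStack

                                                                def subsystem_init(log):
                                                                    with ExitStack() as stack:
                                                                        log.append("open A")
                                                                        stack.callback(log.append, "close A")  # runs on error, or later at shutdown
                                                                        log.append("open B")
                                                                        stack.callback(log.append, "close B")
                                                                        # Success: move the registered cleanups out of the with-block
                                                                        # so they do NOT run here; the caller now owns them.
                                                                        return stack.pop_all()

                                                                def subsystem_shutdown(stack):
                                                                    stack.close()  # runs the cleanups in reverse registration order
                                                                ```

                                                                Zig’s errdefer plus an explicit deinit covers similar ground; this version just makes the observation “shutdown is the errdefers in reverse” executable.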

                                                          1. 0

                                                            Claiming that no framework is developed with performance is ignorance at best and denial for the sake of argument at worst. Svelte, Elm and others on the frontend and Hyper and Actix on the backend have made their names on combining development and runtime efficiency.

                                                            1. 1

                                                              It calls itself a semantic search tool, yet as far as I can see, it only does syntax analysis. Am I missing something, or is the tool straight up misleading?

                                                              1. 1

                                                                Yeah, I agree that this at minimum requires clarification. On the other hand, you can go surprisingly far using similar approaches: https://web.stanford.edu/~mlfbrown/paper.pdf.

                                                              1. 6

                                                                I have been on the lookout for an indentation based language to replace Python for some time now as an introductory language to teach students. Python has too many warts (bad scoping, bad implementation of default parameters, not well-thought-out distinction between statements and expressions, comprehensions are a language within the language that makes student’s life difficult, and so on.). Is Nim the best at this point in this space? Am I missing warts in Nim that makes the grass greener on the other side? Anyone who has experience with both Nim and Python, can you tell me what the trade-offs are?
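
                                                                The default-parameter wart is worth making concrete, since it bites students silently: defaults are evaluated once, at def time, so a mutable default is shared across calls. A quick sketch:

                                                                ```python
                                                                def append_bad(item, acc=[]):
                                                                    # acc refers to ONE list, created when the function was defined
                                                                    acc.append(item)
                                                                    return acc

                                                                def append_ok(item, acc=None):
                                                                    # idiomatic workaround: a None sentinel, allocate per call
                                                                    if acc is None:
                                                                        acc = []
                                                                    acc.append(item)
                                                                    return acc

                                                                print(append_bad(1))  # [1]
                                                                print(append_bad(2))  # [1, 2] -- same list as before
                                                                print(append_ok(2))   # [2]
                                                                ```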

                                                                1. 9

                                                                  I am uncomfortable with statements like (from this article) “if you know Python, you’re 90% of the way to knowing Nim.” The two languages are not IMO as similar as that. It’s sort of like saying “if you know Java, you’re 90% of the way to knowing C++.” Yes, there is a surface level syntactic similarity, but it’s not nearly as deep as with Crystal and Ruby. Nim is strongly+statically typed, doesn’t have list comprehensions, doesn’t capitalize True, passes by value not reference, has very different OOP, etc.

                                                                  That said, there’s definitely evidence that Nim has a smooth learning curve for Pythonistas! This isn’t the first article like this I’ve read. Just don’t assume that whatever works in Python will work in Nim — you don’t want to be like one of those American tourists who’s sure the locals will understand him if he just talks louder and slower :)

                                                                  So yes, Nim is excellent. It’s quite easy to learn, for a high performance compiles-to-machine-code language; definitely easier than C, C++ or Rust. (Comparable to Go, but for various reasons I prefer Nim.) When programming in it I frequently forget I’m not using a scripting language!

                                                                  1. 2

                                                                    Thank you for your perspective. Much appreciated.

                                                                    1. 1

                                                                      passes by value not reference

                                                                      The terminology here is very muddied by C, so forgive me if this sounds obvious, but do you mean that if you pass a data structure from one function to another in Nim, it will create a copy of that data structure instead of just passing the original? That seems like a really odd default for a modern language to have.

                                                                      1. 4

                                                                        At the language level, it’s passing the value not a reference. Under the hood it’s passing a pointer, so this isn’t expensive, but Nim treats function arguments as immutable, so it’s still by-value semantically: if I pass an array or object to a function, it can’t modify it.

                                                                        Obviously you don’t always want that. There is a sort-of-kludgey openarray type that exists as a parameter type for passing arrays by reference. For objects, you can declare a type as ref which makes it a reference to an object; passing such a type is passing the object by reference. This is very common since ref is also how you get dynamic allocation (with GC or more recently ref-counting.) It’s just like the distinction in C between Foo and *Foo, only it’s a safe managed pointer.

                                                                        This works well in practice (modulo some annoyance with openarray which I probably noticed more than most because I was implementing some low-level functionality in a library) … but this is going to be all new, important info to a Python programmer. I’ve seen this cause frustration when someone approaches Nim as though it were AOT-compiled Python, and then starts either complaining or asking very confused questions on the Nim forum.

                                                                        I recommend reading the tutorial/intro on the Nim site. It’s well written and by the end you’ll know most of the language. (Even the last part is optional unless you’re curious about fancy stuff like macros.)

                                                                        (Disclaimer: fate has kept me away from Nim for about 6 months, so I may have made some dumb mistakes in my explanation.)

                                                                        1. 4

                                                                          Gotcha; I see. I wonder if it’d be clearer if they just emphasized the immutability. Framing it in terms of “by value” opens up a big can of worms around inefficient copying. But if it’s just the other function that’s prevented from modifying it, then the guarantee of immutability isn’t quite there. I guess none of the widely-understood terminology from other languages covers this particular situation, so some new terminology would be helpful.

                                                                    2. 5

                                                                      Python has too many warts (bad scoping, bad implementation of default parameters

                                                                      I don’t want to sound like a Python fanboy, but those reasons are very weak. Why do you need to explore the corner cases of scoping? Just stick to a couple of basic styles. Relying on many scoping rules is a bad idea anyway. Why do you need default parameters at all? Many languages have no support for default parameters and do fine. Just don’t use them if you think their implementation is bad.

                                                                      Less is more. I sometimes flirt with the idea of building a minimal indentation based language with just a handful of primitives. Just as a proof of concept of the practicality of something very simple and minimal.

                                                                      1. 7

                                                                        At least for python and me, it’s less a matter of exploring the corner cases in the scoping rules and more a matter of tripping over them involuntarily.

                                                                        I only know three languages that don’t do lexical scoping at this point:

                                                                        1. Emacs lisp, which does dynamic scoping by default for backwards compatibility but offers lexical scoping as an option and strongly recommends lexical scoping for new code.

                                                                        2. Bash, which does dynamic scoping but kind of doesn’t claim to be a real programming language. (This is wrong but you know what I mean.)

                                                                        3. Python, which does neither dynamic nor lexical scoping, very much does claim to be a real programming language, and has advocates defending its weird scoping rules.

                                                                        I mean, access to variables in the enclosing scope has copy on write semantics. Wtf, python?

                                                                        (Three guesses who started learning python recently after writing a lexically scoped language for many years. Thank you for indulging me.)

                                                                        1. 4

                                                                          It is weirder than copy on write. Not tested because I’m on my iPad, but given this:

                                                                          x = 1
                                                                          def f(cond):
                                                                             if cond:
                                                                                x
                                                                             x = 2
                                                                          

                                                                          f(False) does nothing, but f(True) will throw an UnboundLocalError.

                                                                          1. 4

                                                                            I think you need nonlocal x but I don’t quite get why this is weird/nonlexical.

                                                                            It has lexical scoping but requires you to mark variables you intend to modify locally with ‘nonlocal’ or ‘global’ as a speed bump on the way to accidental aliasing. I don’t think I’d call Python “not lexically scoped”

                                                                            1. 3

                                                                              Have you tried declaring a variable inside an if?

                                                                              if True:
                                                                                  X = 1
                                                                              print(X)
                                                                              
                                                                              1. 1

                                                                                Yeah, if doesn’t introduce scope. Nonlexical scope doesn’t IMO mean “there exist lexical constructs that don’t introduce scope”, it is more “there exist scopes that don’t match any lexical constructs”

                                                                                1. 2

                                                                                  I just learned the idea of variable hoisting thanks to this conversation. So the bizarre behavior with carlmjohnson’s example can be understood as the later assignment declaring a new local variable that comes into scope at the start of the function. Because python does block scope instead of expression scope.

                                                                                  I guess I’ve been misusing “lexical scope” to mean expression-level lexical scope.

                                                                                  I still find the idea of block scope deeply unintuitive but at least I can predict its behavior now. So, thanks!

                                                                                  1. 1

                                                                                    Yeah I’m not a huge fan either tbh, but I guess I’ve never thought of it as weird cause JavaScript has similar behavior.

                                                                              2. 2

                                                                                I agree. This is more of a quirk due to python not having explicit variable declaration syntax.

                                                                                1. 2

                                                                                  It’s multiple weird things. It’s weird that Python has† no explicit local variable declarations, and it’s weird that scoping is per function instead of per block, and it’s weird that assignments are hoisted to the top of a function.

† Had? Not sure how type declarations make this more complicated than when I learned it in Python 2.5. The thing with Python is it only gets more complicated. :-)

                                                                                  Different weird thing: nonlocal won’t work here, because nonlocal only applies to functions within functions, and top level variables have to be referred to as global.

                                                                            2. 3

JavaScript didn’t have it either until the introduction of the block-scoped declaration keywords let and const. It only had global and function (not block) scope. It’s much trickier.

But I am puzzled by why/how people stumble upon scoping problems. It doesn’t ever happen to me. Why do people feel the urge to access a symbol in a block outside the one where it was created? If you just don’t do it, you will never have a problem, in any language.

                                                                              1. 1

                                                                                For me it’s all about closures. I’m used to using first class functions and closures where I suspect an object and instance variables would be more pythonic.

                                                                                But if you’re used to expression level lexical scope, then it feels very natural to write functions with free variables and expect them to close over the thing with the same name (gestures upward) over there.

I’m curious, do you use any languages with expression-level scope? You’re not the first Python person I’ve met who thinks Python’s scope rules make sense, and it confuses me as much as my confusion seems to confuse you.

                                                                                1. 2

I don’t need to remember complicated scoping rules because I don’t ever use a symbol in a block higher up in the tree than the one it is defined in. Nor do I understand the need to re-assign variables, let alone re-use their names. (Talking about Python now.) Which languages qualify as having expression-level scope? Is that the same as block scope? So… Java, modern JavaScript, C#, etc.?

                                                                                  I am confused. What problems does python pose when using closures? How is it different than other languages in that respect?

                                                                                  1. 1

I use closures in Python code all the time; I just tend not to mutate the free variable. As long as you don’t mutate it, you don’t need to declare the free variable as global or nonlocal. If I were mutating the state, I might switch over to an object.
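A sketch of that style (the greeter names are invented): merely reading a free variable needs no declaration at all; only rebinding it would force `nonlocal`:

```python
def make_greeter(greeting):
    # `greeting` is a free variable of `greet`; reading it requires
    # no `global` or `nonlocal` declaration.
    def greet(name):
        return f"{greeting}, {name}!"
    return greet

hello = make_greeter("Hello")
print(hello("world"))   # Hello, world!
```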

                                                                            3. 5

Nim is pretty strongly typed; that is certainly different from Python. I’m currently translating something with Python and TypeScript implementations, and I’m mostly reading the TypeScript because the typing makes it easier to understand. With Nim you might spend time working on typing that you wouldn’t for Python (or not; Nim is not object oriented), but it’s worth it for later readability.

                                                                              1. 4

                                                                                Nim is less OO than Python, but more so than Go or Rust. To me the acid test is “can you inherit both methods and data”, and Nim passes.

                                                                                Interestingly you can choose to write in OO or functional style, and get the same results, since foo(bar, 3, 4) is equivalent to foo.bar(3, 4).

                                                                                IIRC, Nim even has multimethods, but I think they’re deprecated.

                                                                                1. 2

                                                                                  what? don’t you mean foo(bar, 3, 4) and bar.foo(3, 4)? AFAIK the last token before a parenthesis is always invoked as a function.

                                                                                  1. 1

                                                                                    Oops, you’re right!

                                                                                2. 3

                                                                                  Latest release of Scala 3 is trying to be more appealing to Python developers with this: https://medium.com/scala-3/scala-3-new-but-optional-syntax-855b48a4ca76

So I guess you could make it an option.

                                                                                  1. 2

Thanks! This certainly looks interesting. Would it make a good introductory language, though? By which I mean that I want to explain a small subset of the language to the pupil, and that restricted language should be sufficient to achieve fairly reasonable tasks. The student should then be able to pick up the advanced concepts by self-exploration (and those implementations should be wart-free; for example, I do not want to explain again why one shouldn’t use a list as a default parameter value in Python).
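For reference, the wart being alluded to, in a minimal sketch (function names invented): a mutable default is evaluated once, at definition time, and shared across calls, so the usual fix is a None sentinel:

```python
def append_bad(item, items=[]):
    # The default list is created once, when `def` executes,
    # and is then shared by every call that omits `items`.
    items.append(item)
    return items

def append_good(item, items=None):
    if items is None:       # fresh list per call
        items = []
    items.append(item)
    return items

print(append_bad(1))    # [1]
print(append_bad(2))    # [1, 2]  -- surprising
print(append_good(1))   # [1]
print(append_good(2))   # [2]
```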

                                                                                    1. 2

There is no such thing as a programming language that is “wart-free”, and while initially you may want to present a language as not having difficulties or weirdness, in the long run you do need to introduce these to the student; otherwise they will not be prepared for “warts” in other languages.

                                                                                  2. 1

Depending on what you’re trying to teach, Elm does fit your description of an introductory language that uses indentation. I know there’s a school that uses Elm for teaching kids how to make games, so it definitely has a precedent for being used in education too. Though, if you’re looking to teach things like file IO, HTTP servers, or other back-end-specific things, then it’s probably a poor choice.

                                                                                  1. 2

                                                                                    ‘the DragonFlyBSD scheduler will use hyperthread pairs for correlated clients and servers’ out of curiosity, do any other operating systems do this as well?

                                                                                    1. 1

I wonder if this is a good idea. Hyperthread pairs share compute resources and only provide a benefit when a pipeline is empty due to stalling memory operations. Maybe using nearby distinct cores instead would be better: the same CCD for AMD, nearby cores on the same ring bus for ring-bus Intels, the same column/row for mesh Intels.

                                                                                      1. 1

                                                                                        If async: shared L1 makes shmem ops much faster.

                                                                                        If sync: whenever server is working, that means client is waiting for it, and vice versa.

                                                                                    1. 1

                                                                                      I know Rust hasn’t gotten around to ABI stability yet, but when it does, inline functions exposed from a shared library are problematic. Since the function gets compiled into the dependent code, changing it in the library and swapping in the newer library (without rebuilding the dependent code) still leaves obsolete instances of the inline in the dependent code, which can easily cause awful and hard-to-diagnose bugs. (Obsolete struct member offsets, struct sizes, vtable indices…)

                                                                                      For comparison, Swift, which did recently gain ABI stability in 5.1, has some special annotations and rules covering module-public inline functions.

                                                                                      1. 4

The main problem with library boundaries is not inlined methods but the heavy use of polymorphism (without dyn) in most Rust code, because polymorphism is easily accessible and static dispatch is the default. C++ has this issue too (there are even “header-only libraries”), despite virtual methods having dynamic dispatch only. Swift probably inherited Objective-C’s tradition of heavy use of dynamic dispatch.

                                                                                        Some libraries intentionally limit use of static-dispatch polymorphism, for example Bevy game framework stated it as one of its distinguishing features (however the main concern there is compilation speed, not library updates).

                                                                                        1. 8

                                                                                          Swift probably inherited Objective C’s tradition of heavy use of dynamic dispatch.

                                                                                          Not really: Swift uses compiler heroics to blur the boundary between static and dynamic approaches to polymorphism. Things are passed around without boxing but still allow for separate compilation and ABI stability. Highly recommend

                                                                                          1. 3

                                                                                            Across ABI boundary it’s still dynamic dispatch. It’s “sized” only because their equivalent of trait objects has methods for querying size and copying.

                                                                                            1. 2

                                                                                              Hm, I think it’s more of a “pick your guarantees” situation:

                                                                                              • for public ABI, there are attributes to control the tradeoff between dynamism and resilience to ABI changes
                                                                                              • for internal ABIs (when you compile things separately, but in the same compilation session) the compiler is allowed, but not required, to transparently specialize calls across compilation unit boundaries.
                                                                                        2. 2

An interesting case study here is Zig’s self-hosted compiler. By merging the compiler and linker, it already allows for partial recompilation inside one compilation unit, including inlined calls.

                                                                                        1. 18

                                                                                          Does anyone else see this as a sign that the languages we use are not expressive enough? The fact that you need an AI to help automate boilerplate points to a failure in the adoption of powerful enough macro systems to eliminate the boilerplate.

                                                                                          1. 1

                                                                                            Why should that system be based upon macros and not an AI?

                                                                                            1. 13

                                                                                              Because you want deterministic and predictable output. An AI is ever evolving and therefore might give different outputs for given input over time. Also, I realise that this is becoming an increasingly unpopular opinion, but not sending everything you’re doing to a third party to snoop on you seems like a good idea to me.

                                                                                              1. 3

                                                                                                Because you want deterministic and predictable output. An AI is ever evolving and therefore might give different outputs for given input over time.

Deep learning models don’t change their weights unless you purposefully update them. I can foresee an implementation where weights are kept static or updated on a given cadence. That said, I understand that for a language macro system you would probably want something more explainable than a deep learning model.

                                                                                                Also, I realise that this is becoming an increasingly unpopular opinion, but not sending everything you’re doing to a third party to snoop on you seems like a good idea to me.

                                                                                                There is nothing unpopular about that opinion on this site and most tech sites on the internet. I’m pretty sure a full third of posts here are about third party surveillance.

                                                                                                1. 2

Deep learning models don’t change their weights unless you purposefully update them.

                                                                                                  If you’re sending data to their servers for copilot to process (my impression is that you are, but i’m not in the alpha and haven’t seen anything concrete on it), then you have no control over whether the weights change.

                                                                                                  1. 2

Deep learning models don’t change their weights unless you purposefully update them.

                                                                                                    Given the high rate of commits on GitHub across all repos, it’s likely that they’ll be updating the model a lot (probably at least once a day). Otherwise, all that new code isn’t going to be taken into account by copilot and it’s effectively operating on an old snapshot of GitHub.

                                                                                                    There is nothing unpopular about that opinion on this site and most tech sites on the internet. I’m pretty sure a full third of posts here are about third party surveillance.

                                                                                                    As far as I can tell, the majority of people (even tech people) are still using software that snoops on them. Just look at the popularity of, for example, VSCode, Apple and Google products.

                                                                                                2. 2

I wouldn’t have an issue with using a perfect boilerplate-generating AI (well, beyond the lack of brevity); I was more commenting on the fact that this had to be developed at all and how it reflects on the state of coding.

                                                                                                  1. 1

                                                                                                    Indeed it’s certainly good food for thought.

                                                                                                  2. 1

Because programmers are still going to have to program, but instead of being able to deterministically produce the results they want, they’ll have to do some fuzzy NLP incantation to get what they want.

                                                                                                  3. 1

                                                                                                    I don’t agree on the macro systems point, but I do see it the same. As a recent student of BQN, I don’t see any use for a tool like this in APL-like languages. What, and from what, would you generate, when every character carries significant meaning?

                                                                                                    1. 1

                                                                                                      I think it’s true. The whole point of programming is abstracting away as many details as you can, so that every word you write is meaningful. That would mean that it’s something that the compiler wouldn’t be able to guess on its own, without itself understanding the problem and domain you’re trying to solve.

                                                                                                      At the same time, I can’t deny that a large part of “programming” doesn’t work that way. Many frameworks require long repetitive boilerplate. Often types have to be specified again and again. Decorators are still considered a novel feature.

                                                                                                      It’s sad, but at least, I think it means good programmers will have job security for a long time.
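For what it’s worth, decorators are one of the Python features that does cut down repetition; a minimal sketch (the `logged` decorator is invented for illustration), factoring a cross-cutting concern out of every function that needs it:

```python
import functools

def logged(fn):
    """Announce each call of fn, so the logging code isn't repeated everywhere."""
    @functools.wraps(fn)   # preserve fn's name and docstring on the wrapper
    def wrapper(*args, **kwargs):
        print(f"calling {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    return a + b

print(add(2, 3))   # prints "calling add", then 5
```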

                                                                                                      1. 1

I firmly disagree. Programming, at least as it evolved from computer science, is about describing what you want using primitive operations that the computer can execute. For as long as you’re writing from this direction, code-generating tools will be useful.

                                                                                                        On the other hand, programming as evolved from mathematics and programming language theory fits much closer to your definition, defining what you want to do without stating how it should be done. It is the job of the compiler to generate the boilerplate after all.

                                                                                                        1. 1

                                                                                                          We both agree that we should use the computer to generate code. But I want that generation to be automatic, and never involve me (unless I’m the toolmaker), rather than something that I have to do by hand.

                                                                                                          I don’t think of it as “writing math”. We are writing in a language in order to communicate. We do the same thing when we speak English to each other. The difference is that it’s a very different sort of language, and unfortunately it’s much more primitive, by the nature of the cognition of the listener. But if we can improve its cognition to understand a richer language, it will do software nothing but good.