1. 1

    my mac blocks launching the app, something about “apple thinks this is malicious”, anyone else seeing that?

    edit: to open totally unsigned apps on mac, you need to ctrl+click i guess?

    1. 1

Yeah. The release notes have a paragraph on this, but it’s located at the bottom, which makes it a bit surprising on first encounter.

    1. 6

One highlight in this release for me was the inclusion of the JRE with the toolbox. In the past there were some compatibility issues with certain versions, and installing a Java runtime was one additional step. With Java 11 included, it should “just work” on the release’s supported platforms.

      1. 5

Looking at the benchmark code, the amount of work being done is small enough that clock resolution can become an issue. And while they aren’t measuring the time to generate maps, they’re leaving a lot of garbage on the process heap between runs, which is also likely to be a problem. It makes more sense to generate the map and keys once to avoid measuring all of these other effects.

Rewriting this benchmark locally with timer:tc/3 shows something closer to 7.9% overhead (with a null-benchmark baseline showing that the looping takes roughly 30% of the total time). It’s a wide enough difference from the posted article that I think it’s worth writing up a benchee example to get more accurate numbers.

        1. 8

          Using https://gist.github.com/strmpnk/8292f030e134ee7d985a08a7d87d8c66

          Name              ips        average  deviation         median         99th %
          Map.get         11.19       89.37 ms     8.63%       94.00 ms      110.00 ms
          Access           9.88      101.24 ms     8.40%       94.00 ms      125.00 ms
          
          Comparison:
          Map.get         11.19
          Access           9.88 - 1.13x slower +11.87 ms
          

So the overhead seems much more substantial than this post claims, but as with all benchmarks, it’s hard to represent real application code with trivial loops like this. Still, Access is far from free if you’re sure that your code path is monomorphic.

          1. 4

It would also be worth testing the code with larger maps, as maps with fewer than 33 keys are stored as a sorted list instead of a real “hash map”.

            1. 1

Thanks @strmpnk! I’m going to revisit this in a later blog post. I didn’t think about the garbage I was putting on the heap when generating all the different maps. I’m going to take your benchmarking code as a starting point and also compare the performance of Map.get/2 to pattern matching on the map directly.

          1. 6

            I am considering switching to other languages for my next project or migrating slowly. Will have to wait and see I suppose.

The main strengths of Go (from my point of view) are good libraries and no annoying sync/async code barrier.

The main weaknesses are a runtime that makes using C libraries hard, and some feeling of kludginess because users can’t define things like try or whatever themselves. ‘Go 2’ doesn’t really change anything.

            1. 4

              I consider myself relatively neutral when it comes to Go as a language. What really keeps me from investing much time or attention in it is how its primary implementation isolates itself so completely as if to compel almost every library to be rewritten in Go. In the short term this means it will boost the growth of a vibrant ecosystem but I fear a longer term world where the only reasonable way to interoperate between new languages and systems which don’t fit into Go’s model is to open a socket.

I don’t think we need to be alarmist about bloated Electron apps, but in general we’re talking about many orders of magnitude of cost increase for language interoperation. This is not the direction we should be going, and I fear Go has set a bad precedent with its answer to this problem. Languages will evolve and systems will too, but if we have to climb higher and higher walls every time we want to try something new, we’ll eventually be stuck in some local optimum.

I’d like to see more investment in languages, systems, and the runtimes sitting between them that can respond to new ideas in the future without requiring entirely new revisions of a language with specific features responding to specific problems. Perhaps some version of Go 2 will get there, but at the moment it seems almost stuck on optimizing for today’s problems rather than looking at where things are going. Somewhere in there is a better balance and I hope they find it.

              1. 4

Yeah - I really want to use either GNU Guile or Janet to write HTTP handlers for Go; with the current system it is not really possible to do it well.

There are multiple implementations of Lua in Go for the same reason: poor interop if your code isn’t written in Go and you want two-way calls.

                1. 3

A crucial part of this is that Go was explicitly, deliberately created as a language to write network servers in.

In that context, of course the obvious way to interop with a Go program is to open a socket.

                  1. 2

Sure. Priorities make RPC look like their main goal, but the cost of an RPC call is on an entirely different level than a function call, and it comes with a lot of complexity: an accidental distributed system is now required to call some logic written in another language.

At a company where everything is already big and complex, this may seem like a small price, but it’s becoming a cost today, so we see people opting to write pure Go libraries, passing on shareable libraries or duplicating effort. In many cases this becomes a driver that kills the diversity of technical choices I talked about in my original comment above.

It’s an obvious problem, but the Go team would rather drive people away from systems-level interoperability for Go’s short-term gains. They claim that it’d be too hard to support a real FFI option or that they are short on resources, but other runtimes do a better job of this so it’s possible; and secondarily, Go supposedly isn’t a Google project but a community one, yet we see it clearly being managed from one side of this coin.

                    1. 1

                      In my experience, it’s the quality of the golang tools driving this.

For instance: I found it easier to port a (smallish) library to Go and cross-compile the resulting Go code than to cross-compile the original C library.

I initially considered porting to Rust, which is imo a delightful language, but even though cross-compilation is much easier in Rust than in C (thanks to rustup), it doesn’t compare to Go.

The process for C:

                      • For each target arch, research the available compiler implementations; packages are often unavailable or broken, so you’ll be trying to build at least a few from source, probably on an unsupported platform.

The process for Rust:

                      • For each target arch, ask rustup to fetch the toolchain. It’ll tell you to install a bunch of stuff yourself first, but at least it tends to work after you do that.

The process for Go:

                      • Set an environment variable before running the compiler.
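
For example, a cross-compile for 64-bit ARM Linux with a stock Go toolchain is just (target values here are only an illustration):

    GOOS=linux GOARCH=arm64 go build ./...
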
                      1. 1

… so we see people opting to write pure Go libraries, passing on shareable libraries or duplicating effort. In many cases this becomes a driver that kills the diversity of technical choices I talked about in my original comment above.

                        It’s unclear to me why having another implementation of something instead of reusing a library reduces diversity rather than increasing it.

They claim that it’d be too hard to support a real FFI option or that they are short on resources, but other runtimes do a better job of this so it’s possible

I’m personally a maintainer of one of the most used OpenSSL bindings for Go, and I’ve found the FFI to be a very real option. That said, every runtime has its own constraints and difficulties. Are you aware of any ways to do the FFI better that would work in the context of the Go runtime? If not, can you explain why not? If the answer to both of those is no, then your statements are just unfounded implications and fear mongering.

Go supposedly isn’t a Google project but a community one, yet we see it clearly being managed from one side of this coin.

And yet, I’m able to have changes included in the compiler and standard library, provide feedback on proposals, and have stewards of the language directly engage with my suggestions. My perspective is that they do a great job of listening to the community. Of course they don’t always agree with everything I say, and sometimes I feel like full agreement is the unattainable bar people hold them to before they’ll say it’s a community project.

                        1. 1

The specific issues with Go FFI interop are usually around dealing with structured data rather than buffers of bytes and integers. Data layout ABI options would be a big plus. Pinning shared data would also help tremendously in avoiding the extra copying or marshaling that is required in many of these cases. On the other side, calling into Go could be made faster in a number of ways, particularly by caching thread-local contexts for threads Go doesn’t manage (these are currently set up and torn down for every call in this direction).

There are also plenty of cases where construction of movable types could be supported with the proper callbacks provided, but instead Go opts to disallow sharing any of these data types entirely.

                        2. 1

They claim that it’d be too hard to support a real FFI option or that they are short on resources, but other runtimes do a better job of this so it’s possible; and secondarily,

It’s been a while since I actively used and followed Go, but isn’t the problem that they’d have to forgo ‘run a gazillion goroutines’ if they wanted to support real FFI? To support an extremely large number of goroutines, they need small but growable stacks, which means they have to do stack switching when calling C code. Plus it doesn’t use the same calling conventions.

In many respects they have designed themselves into a corner that is hard to get out of without upsetting users and/or breaking backwards compat. Of course, they may be happy with the corner that they are in.

That said, Go is not alone here; e.g., native function calls in Java are also expensive. It seems that someone has made an FFI benchmark ;):

                          https://github.com/dyu/ffi-overhead

                      2. 2

I generally sympathize with your main argument (personally, I also miss easier C interop, especially given that it was advertised as one of the initial goals of the language) - but on the other hand, I don’t think you’re doing justice to the language in this regard.

Specifically, AFAIK the situation with Go is not really much different from other languages with a garbage collector - e.g. Java, C#, OCaml, etc. Every one of them has some kind of a (more or less tricky to use) FFI interface to C; in case of Go it’s just called cgo. Based on your claim, I would currently assume you don’t plan to invest much time in any other GCed language either, is that right?

                        1. 2

I can’t speak to modern JVMs, but OCaml and C# (.Net Core and Mono) both have much better FFIs, both in support for passing data around and in terms of performance costs. It’s hard to overstate this: cgo is terribly slow compared to other managed-language interop systems, and it’s getting slower, not faster, over time.

I’ll let folks draw their own conclusions on whether this is intentional or just a limitation of resources, but the outcome is a very serious problem for long-term investment in a language.

                          1. 1

                            I think it’s important to quantify what “terribly slow” is. It’s on the order of ~100ns. That is more than sufficient for a large variety of applications.

It also appears from the implication that you believe it’s intentionally slow. Do you have any engineering evidence that this is happening? In other words, are you aware of any ways to make the FFI go faster?

                            1. 1

Not in my experience. Other than a trivial call with no arguments that returns nothing, it is closer to 1 microsecond for Go calling out in many cases, because of how argument handling has to be done, and around 10 microseconds for non-Go code calling Go.

                              1. 1

It is indeed slower to call from C into Go for various reasons. Go-to-C calls can also be slower depending on how many arguments contain pointers, because it has safety checks to ensure that you’re handling garbage-collected memory correctly (these checks can be disabled). I don’t think I’ve ever seen any benchmarks place it at the microsecond level, though, and I’d be interested if you could provide one. There’s a lot of evidence on the issue tracker (here or here, for example) that shows there is interest in making cgo faster, and that good benchmarks would be happily accepted.
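
For anyone who wants a rough number of their own, a trivial microbenchmark along these lines is enough to get a per-call figure (a sketch; the C function and iteration count are placeholders, and results vary by Go version and platform):

    package main

    /*
    static int nop(void) { return 0; }
    */
    import "C"

    import (
        "fmt"
        "time"
    )

    func main() {
        // Time a tight loop of trivial cgo calls and report the mean.
        const n = 1000000
        start := time.Now()
        for i := 0; i < n; i++ {
            C.nop()
        }
        fmt.Println("avg per cgo call:", time.Since(start)/n)
    }
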

                          2. 2

                            Every one of them has some kind of a (more or less tricky to use) FFI interface to C; in case of Go it’s just called cgo. Based on your claim, I would currently assume you don’t plan to invest much time in any other GCed language either, is that right?

                            LuaJIT C function calls are apparently as fast as from C (and under some circumstances faster):

                            https://nullprogram.com/blog/2018/05/27/

                      1. 20

1000 times yes. If Google wants their own libc, that’s their business. But LLVM should not be part of their “manifest destiny”. The corporatization of OSS is a scary prospect, and should be called out loud and clear like this every time it’s attempted.

                        Otherwise more software infrastructure will go the way of the current near monoculture in web browsers.

                        1. 9

                          The corporatization of OSS is a scary prospect

                          The tool we have to prevent it is copyleft. Notice that Google are talking about making a whole new libc, not enclosing glibc. That’s a feature of copyleft.

                          Licences with copyleft terms still permit corporate uses and enable business cases: Google, FB, Amazon, and others all use copyleft software; Red Hat, Canonical, MuseScore, and others base their whole business model on it. But they prevent corporate enclosure, in that a business cannot control the software to satisfy their interests and ignore others. NeXT/Apple did not get to “direct” GCC for everyone in a way that made it work for them, and not for others. In fact, they eventually took their ball elsewhere.

                          1. 1

                            NeXT/Apple did not get to “direct” GCC for everyone in a way that made it work for them, and not for others. In fact, they eventually took their ball elsewhere.

Praise the Sun. LLVM is a much better platform than GCC for compiler development. Much more open and free.

                          2. 4

                            The corporatization of OSS is a scary prospect

OSS is a corporatised term; you’ve been got from the get-go.

                            1. 1

Isn’t LLVM already a Google project?

                              1. 3

Apple was the first major corporate sponsor. Chris Lattner and Vikram Adve created it at the University of Illinois, and Lattner later joined Apple to create a working group there.

                                1. 1

                                  I am aware of that, but I was under the impression that Google input already strongly drives optimization decisions and that Google engineers are prominent in LLVM development.

                                  1. 1

Yes, it’s possibly a fair point, though I still see other companies and organizations involved; I’m not sure to what extent this has an impact on priorities. I’d be curious to hear about recent trade-offs the project has made.

                            1. 9

                              What triggered me was the very first sentence in the quote:

                              Within Google, we have a growing range of needs that existing libc implementations don’t quite address.

                              What on Earth could be so specific about those needs that can’t be addressed by any existing implementation of the runtime used in myriads of projects big and small? Sounds like a post-rationalization of someone’s NIH syndrome.

                              1. 10

                                I interpreted that “growing range of needs” as “growing number of engineers at grade 8 who need new projects for the next performance summary cycle”.

                                1. 6

Given that the initial proposed target is x86_64, this very likely has to do with various kernel interface changes Google has tried. Some that they’ve talked about in years past are quite interesting, but to take advantage of such changes without expensive changes to all software, it seems like they need a libc that is easier to customize. So the technical impetus is there (IMO of course, I don’t work there).

Instrumentation of code and improvements around LLVM sanitizers and related fuzzing tools could also go hand-in-hand. I’m not sure what technically prevents this today outside of convenience and possibly the ability to build simpler tools because there are cheap assumptions that can be made.

While I’m not sure where I fall on Rich’s third point, I can see the reasoning. Perhaps the real problem isn’t something to be solved as part of the LLVM project. The Clang project is already a large umbrella and includes a C++ standard library implementation, but that still relies on a system layer like libc below it, so I’m not sure that some of the other dismissals really answer the issue.

                                  1. 1

                                    Sounds like a post-rationalization of someone’s NIH syndrome.

                                    Sounds like the entire history of Google to me.

                                  1. 18

I’d really like to know why they list requirements that sit squarely within musl libc’s core design goals yet post this like it’s a novel suggestion. Perhaps they have reasons for passing on musl, but it seems lazy or contemptuous not to at least mention why they would prefer to avoid existing glibc alternatives.

                                    1. 10

The only thing that comes to mind is that Google doesn’t own/control musl, so Google’s proposed changes might not be accepted by musl. With their own libc, Google can introduce things that other libc implementations would never merge.

                                      1. 7

This is easy to say about any project, but I found this post originally via a tweet from the musl author: https://twitter.com/RichFelker/status/1143292587576635402

There has likely been no discussion of what might be accepted. If the merge problem is really the concern, it probably doesn’t belong in LLVM either. Good riddance to throw-it-over-the-fence-style OSS if you ask me. Google can keep it to themselves if they’re incapable of this kind of conversation as a corporation (no offense intended to developers who may be stuck between two hard places as employees).

                                        1. 6

                                          This is easy to say about any project

                                          Well, yeah, because it’s true a lot of the time. Happens all the time and it’s totally understandable. It really is not even remotely a stretch to imagine that the goals of MUSL wouldn’t align with the goals of Google.

                                          I once wondered whether I should try to contribute a faster version of memchr to MUSL, but just looking at the tickets on that project made me immediately reconsider. Which isn’t to say MUSL is bad, but it’s to say that MUSL clearly has a specific set of goals in mind, and they do not always line up with everyone else’s goals.

                                          1. 7

There is actually an interesting thread with Googlers on the musl mailing list from a few years back, where they considered including musl in Chromium; it fell through in the end.

TL;DR: The lawyers/legal team had a problem with some files/headers that are in the “public domain” and requested a re-license of those files.

                                            https://www.openwall.com/lists/musl/2016/03/15/1

                                            1. 6

                                              Interesting. I’ve been on the bad end of a bunch of Googlers and their licensing concerns too. Not a pleasant experience.

                                          2. 3

                                            This is easy to say about any project

Well, yea, but it’s not every day a major company decides to go off and do their own implementation (or fork) of (insert thing here with some widely-available OSS implementations), and Google has a history of doing this (BoringSSL immediately comes to mind).

                                        2. 7

                                          Rich Felker (of musl) posted a follow-up in the thread, taking the viewpoint that: 1) LLVM shouldn’t build its own from-scratch libc, and preferably 2) shouldn’t ship a libc at all, whether a new one or musl or otherwise.

                                          1. 3

Isn’t musl Linux-only?

                                            1. 2

Not technically, but it seems to have been designed with Linux in mind, and using it with other kernels can require a lot of effort.

                                          1. 3

The bi-directional 9P filesystem interop setup makes me wonder if there is a way to expose 9P support on the Windows side for other user filesystem drivers. This could be a great feature for power users.

                                            1. 3

                                              Hells yeah. I’m going to try and find out tomorrow.

                                              1. 2

                                                Shocking, I totally forgot about this.

                                                I inquired further today, and unfortunately no, it’s not possible to register a random process as a 9P service.

                                            1. 5

                                              Maybe it’s just my naive preconceptions, but reading

                                              We have connected with so many command-line users who LOVE to customize their terminals and command-line applications.

just feels very wrong from a corporation like Microsoft. The reason probably is that this portrayal of a “community” around a utility, a product basically, has a close to zero chance of being an authentic gesture from your loving OS-Patron (as if that were even the question), rather than being the result of a cold, statistical survey on how to emotionally engage consumers to consume optimally.

                                              But on the other hand, what else is there left for them to do?

                                              1. 9

The reason probably is that this portrayal of a “community” […] has a close to zero chance of being an authentic gesture […] rather than being the result of a cold, statistical survey on how to emotionally engage consumers to consume optimally.

I think your cynicism is unfounded and unhelpful. There is a community, and this is a big deal. People have wanted a modern, functional terminal for a long time. I know it’s still cool to hate on Microsoft in many places, but my observation is that the engineers in the company care a lot about contributing to the community, and that management wants to do the right thing.

                                                1. 6

There might be a user base, but I am highly sceptical (although I could also be wrong) that this is a “community” of people enthusiastic about and engaged in the very idea of the terminal. And this really isn’t generic hate of Microsoft (although I still don’t trust them), but rather a distaste for corporate “over-friendliness”, regardless of their intentions.

                                                  1. 1

                                                    Then douse your skepticism with the pure fresh water of statistics!

For example, this video from the Windows Developer group on Windows Terminal, released 4 days ago, has seven hundred and thirty-eight thousand views.

                                                    In what universe does that not count as a community?

                                                  2. 3

                                                    I agree, and reading this I think there’s a lesson here for all of us. We need to zoom out and try to be mindful of the fact that people’s needs, wants, preferences and limitations vary a LOT, and so does our ability to be aware of every community or body of users that exists.

                                                    I spoke with a gent yesterday who worked at a company that specialized in selling virtualized DEC Alpha/Tru64 and VMS environments. They have a LOT of customers!

                                                    We all get so hooked into our own little corner of the world that we make the mistake of thinking that corner is the entirety of the world, and it just isn’t.

As I said elsewhere, Windows Terminal will be a big deal for me because I use WSL in a work context (for reasons I won’t go into again here), where having a ‘real’ terminal emulator will be a massive quality-of-life improvement.

                                                    1. 2

                                                      Yet all I see is another attempt at “Embrace Extend Extinguish”.

How long will it take before we see command-line tools that require this terminal emulator to actually run?

I’ve been using bash on Cygwin whenever I needed a terminal and userland tools on Windows, and I’m not intending to switch over from something that’s been around for decades to something new that has been around for less than a decade. The same goes for WSL.

                                                      1. 5

                                                        Yet all I see is another attempt at “Embrace Extend Extinguish”.

                                                        ¯\_(ツ)_/¯ See what you want to see. That may have been how things were done in the Ballmer era, but we’re a long ways past that now.

                                                        1. 3

You expect people to just brush over everything they’ve done and are doing and just “move on” and trust them? They did a complete 180 in the last couple of years; you can’t expect people not to be skeptical of them. I don’t believe they have good intentions; time will tell. I disagree with all the unethical ways they collect data.

                                                          1. 2

                                                            That may have been how things were done in the Ballmer era, but we’re a long ways past that now.

Are we? I don’t think so. I see a terminal emulator with extra features like embedded pictures in the text output, without specifications and proper documentation. To me this is the entire Office EEE debacle all over again, but with a terminal.

                                                            1. 4

                                                              What specifications does a Windows terminal demand? What on earth are they even meant to be extinguishing? It’s a terminal, not some core part of WSL2 or whatever else people are trying to connect this to. When they released PowerShell were they EEEing themselves?! How is this different?

                                                              1. 2

                                                                See: Embrace, extend, and extinguish - Strategy

                                                                Note the careful wording used:

                                                                Embrace: Development of software substantially compatible with a competing product, or implementing a public standard.

                                                                A terminal emulator can be considered a public standard. This terminal is an implementation of that standard.

                                                                Extend: Addition and promotion of features not supported by the competing product or part of the standard, creating interoperability problems for customers who try to use the ‘simple’ standard.

                                                                Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors that do not or cannot support the new extensions.

Integration with ssh, PowerShell, Ubuntu on WSL, emoji and icons are all promotional features which are not widely supported by the competition. The competition can’t even support them, because a terminal in text-mode on a UNIX-system simply does not have the capability of rendering bitmap graphics until some kind of window-manager is launched.

                                                                What specifications does a Windows terminal demand?

First: It’s not a “Windows Terminal”. We already have plenty of those. It’s a POSIX-compatible terminal with various extensions that cannot be supported on the other platforms. And it’s running on Windows: a platform which, throughout its entire history, has been used and abused to gain a monopoly market share.

                                                                What on earth are they even meant to be extinguishing? It’s a terminal, not some core part of WSL2 or whatever else people are trying to connect this to.

It’s an attack on the whole UNIX-like open-source ecosystem, and it shows that the spirit of Ballmer is still very much alive: Ballmer: ‘Linux is a cancer’ - The Register

                                                                When they released PowerShell were they EEEing themselves?! How is this different?

This is different from PowerShell in the sense that PowerShell was never meant to be a terminal that could run UNIX-like tools. PowerShell’s main addition was that it opened up command-line access to a lot of the internals of Windows which were previously inaccessible through an easily scriptable command-line interface.

                                                                1. 1

The competition can’t even support them, because a terminal in text-mode on a UNIX-system simply does not have the capability of rendering bitmap graphics until some kind of window-manager is launched.

It’s a POSIX-compatible terminal with various extensions that cannot be supported on the other platforms.

These are good arguments for them potentially pursuing an EEE strategy. Gotta wonder what the actual risk is, though, if terminal use is so different between Windows and Linux platforms. Anyone intending to use Linux would just use a subset that works on both. If they don’t, they might not care about Linux compatibility. That would mostly be common in Windows shops. I’m curious what attack strategy you think they’ll use in the terminal that would convert more free Linux boxes into paid Windows boxes.

                                                                  1. 2

The attack strategy is simple: they are betting on the laziness of software developers.

Suppose one of the Windows-only tools finds its way into one of the scripts powering one of the company’s products and sits there for about 3 years. Most of the time that is long enough for all the developers who’ve worked on it to leave the company.

When that happens it will be too hard or too costly to fix those scripts, and management will be forced to make the decision: “Just run our product on Windows”.

                                                                    I’ll have to admit that I’ve just made this example up, but I am basing it on an example similar to this:

                                                                    #include <stdio.h>
                                                                    
                                                                    int main() {
                                                                        int i = 1;
                                                                        int j = 2;
                                                                        
                                                                        printf("i=%d\n", i);
                                                                        printf("j=%d\n"); // Notice the missing parameter here!
                                                                    }
                                                                    

                                                                    If you compile this with gcc, it will fly in your face and tell you to fix your code. However if you compile this with a recent version of Visual Studio, you will discover that it happily compiles this into a working binary. It even fixes the missing parameter!

In the past I’ve “inherited” a C++ application which should in theory compile on just about every OS. In reality this was infeasible because of all kinds of minute errors, like the one demonstrated above, which had slipped into the code due to the “extra and out-of-spec functionality” Microsoft had provided the developers with.

                                                                    You can call me a sadist or whatever you like, but I think that developers deserve a good whipping if they make errors like this. Doubly so if they then rely on the “extra functionalities” provided by their tools of choice.

So having seen this, it’s not hard for me to imagine how this brand new terminal will lead to all kinds of “convenient accidents” that leave many pieces of software compilable and runnable only on Windows boxes. It’s EEE in one of its purest forms.

                                                                    1. 5

                                                                      If you compile this with gcc, it will fly in your face and tell you to fix your code. However if you compile this with a recent version of Visual Studio, you will discover that it happily compiles this into a working binary. It even fixes the missing parameter!

                                                                      This simply isn’t true, and the reality is the opposite of what you are claiming, as you can verify yourself: by default gcc will compile this without warnings, whereas MSVC (and clang) will warn.

                                                                      1. 1

Unfortunately, you are right. I was working only from memory, and I am unable to share parts of the code base I was working on at that time… so I have no other hard proof available.

That doesn’t change my argument though. Although I have to admit that I’m genuinely surprised that gcc accepts this example.

                                                                      2. 2

Interesting. Yeah, I could see subtle incompatibilities adding up over time.

                                                              2. 1

Oh no, they’re still just as evil. They just adapted to the new market. They may or may not be doing less EEE; I haven’t tracked that. They did use corrupt patent law to suck billions of dollars out of the Android ecosystem despite not contributing jack to it. Selling a polished ReactOS to enterprises probably would get you sued. People in Microsoft, and articles like this, said they started laying off lots of their QA people, with more vulnerabilities, crashes, and data losses coming in the future as a result. They also have plenty of spyware they’re putting in the new versions of Windows. Plus, never forget their incentive structure means that the next CEO might do even more bad things.

Microsoft isn’t to be trusted now or ever in the future. All deals done with them should have an escape plan: portable software, data in open formats, and an easy move to another platform in the worst case. With the costs of that, it’s probably better to just do business with a more ethical company in the first place. On open platforms where possible. :)

Bonus: There’s actually a good example in your space. GitHub, not Microsoft’s offering, was the wise choice. Oh, the irony of GitHub becoming Microsoft’s anyway.

                                                              3. 1

                                                                It’s very easy to write code that works on Cygwin and doesn’t work on Linux, so I’m not sure how this is any worse.

                                                                1. 3

Cygwin is a hack. This is an official, supported offering from Microsoft. Tons of people will use it in places where folks hoped to convert people away from Windows. Everything they do might also become legacy systems that, if there’s a lock-in risk here, add to the cost of switching away, which is what ensures lock-in. Cygwin probably could never do that.

                                                                2. 1

                                                                  Nah, they won’t extinguish their own golden goose. Have you seen how much money they’re making with Azure? It ain’t chump change!

                                                              4. 2

                                                                Do you use WSL? If you don’t, you won’t understand the context.

Picture an environment where they replaced the blazing-fast, Ferrari-like UNIX terminal handling routines with a Model T.

                                                                That’s what this is meant to fix AFAICT.

                                                                1. 1

No, I never have, nor do I plan to. But I don’t understand why my commenting on their peculiar style of writing/engagement is related to their technical product, which I am not trying to downplay.

                                                                  1. 1

                                                                    You’re right. Pardon me.

                                                                  2. 1

                                                                    But users of msys or cygwin have had a satisfactory terminal forever in mintty. So it’s not like there’s never been an option.

                                                                    1. 3

There hasn’t been a better alternative as part of Windows itself, though. Not everyone uses msys or cygwin, especially Windows-only developers.

                                                                  3. 2

There have always been communities around Microsoft products, especially developers and admins. That follows naturally from the combination of its humongous market and the fact that people in Microsoft shops want better experiences. They build helpful tools, share code, share tips, and so on like anywhere else. Entire sites were dedicated to Windows FOSS. Many of them also use UNIX/Linux on the side and/or in the same organization for different apps/services.

Why does it surprise you that some chunk of Microsoft’s customers wanted a terminal with the benefits of *nix terminals? Or that the company that made PowerShell in response to market demand would respond to market demand again for terminal improvements? I think it’s in their rational self-interest to hook developers up with a good terminal, much like it was to make Linux apps easier to run on Windows. It’s a win-win as long as one is careful not to let Windows, especially any incompatibilities in Microsoft’s versions, become a dependency. That avoids EEE. ;)

                                                                    1. 1

This person is hired to speak in the name of the Microsoft empire because it serves its interests well, down to the claim of actually LOVEing to customize terminals.

If MS learns to involve people from the outside in its projects, that is a good thing, but I deeply agree that we should not fool ourselves into thinking Microsoft is doing anything other than aiming for more market share.

That is the aim of any company of that size, even if there might be honesty downstream (from the actual person writing the article).

                                                                      1. 3

Yes, that’s her job as a PM. Specifically, they have open-sourced this terminal. You talk about promotion and then jump to accusing them of failing to bring outsiders into the project.

If you take a look at the contributions from the outside in under a week, I’d say outside involvement has already taken root. They’ve also been talking to developers directly (I’ve been invited a couple of times to schedule time with them to discuss their development plans, and I have no special relationship with Microsoft or the team).

It seems like Microsoft has changed, but there will be a balance to keep. As long as Azure keeps growing, Windows probably has less pressure to monetize and control every aspect of the platform. The reality seems less about them being generous and more about their business model shifting their focus. No altruistic leaning is needed to understand why Microsoft is suddenly interested in open source and in continuing to engage developers (this is how enterprise adoption sticks so well, even if you weren’t part of that camp; I wasn’t, and didn’t understand until I talked to folks who were).

                                                                        https://github.com/microsoft/Terminal

                                                                        1. 1

The big, hard-to-change streams of revenue for Windows come from both lock-in to Windows-specific features and marketing Windows to companies that want their supplier to be around a long time with a big ecosystem. Microsoft has realized that open-sourcing things that tie into their paid platforms usually can’t hurt those sales. It might even help them, since the companies will be less likely to switch. They might even buy more Microsoft apps, hosting, or whatever. The community projects also bring them the business benefits of F/OSS and a PR boost.

                                                                          What’s not to like in this approach for a greedy company trying to make its numbers go up in a highly-competitive, changing market that features lots of F/OSS? ;)

                                                                    1. 1

                                                                      I’m confused about Jai. I’m sure I’ve seen streams where Jonathan Blow demoed their LLVM backend.

                                                                      Did that change at some point?

                                                                      1. 1

I believe they have more than one backend (the first one was even slower, as it compiled to C which was then compiled and linked again). Their fast compilation path is meant for development and debugging. It produces reasonable code, but release builds can take much longer to regain the extra performance left on the table.

                                                                        1. 1

Yeah, I remember the C backend, but haven’t seen any besides the LLVM one.

Just curious if there’s more info on that.

                                                                      1. 2

The author goes out of their way to use confusing examples (like the nested here-doc example). I guess the point is that it’s possible and thus a feature. More likely, this is partly due to Ruby’s conservative evolution, which chooses to add rather than break old code.

I can think of many more surprises, like the validity of def inside non-obvious contexts like argument default expressions, the BEGIN and END block feature, and how nesting BEGIN and END inside BEGIN or END can be mind-bending.

It’s interesting to consider how hard it is to specify behavior after the fact, Ruby primarily being determined by its primary implementation in C (MRI/YARV/whatever it’s called these days). There were some removals going from 1.8 to 1.9 and 2.0, but most of the features remained intact, which is an impressive but scary proposition for such a complicated language. The fact that RubySpec even manages to check such a wide set of edge cases is an impressive bit of work (I recall when the initiative was started and it took years to even get baseline coverage for behaviors used in common things like Rails).

If you’re not working on designing the language, the other takeaway might be to consider very carefully which libraries you use in Ruby, to avoid things that tread too close to being unexplainable or risky. Of course, it’s hard to avoid many of these “magic tricks”, as popular frameworks and libraries tend to operate on them, selling the idea that the DSL matters more than the boundary between comprehension and the code you write. Rails seems to have cleaned up some of its more offensive practices, but it’ll be interesting to see whether the long-term use of Ruby remains closely tied to the preference for magic over boring old Ruby code that avoids all of these knobs.

                                                                        I’ll admit to having written some very experimental Ruby code in the past, though as I maintained more code over the years, I started gravitating towards simple and obvious code when possible. Now I haven’t written Ruby recently (none for many years now) but it’s interesting to see people get polarized over the malleable and/or sharp bits of Ruby. It’ll always be part of Ruby even if you don’t use it.

                                                                        1. 8

For those pretty-printing values, I highly recommend checking out the dbg! macro: https://doc.rust-lang.org/std/macro.dbg.html

                                                                          1. 2

C# / F# et al. / .NET Core were the only good things to come out of MS (in my mind)

                                                                            1. 5

It depends where we draw the line, I guess. MSR is where F# originated, as well as things like Z3, Orleans, and others. On the commercial side, we have the invention of XHR and performance-centered 3D graphics (DirectX may be proprietary, but it was a big change from old-school OpenGL, which was a mess of vendor extensions).

The real kicker is that Microsoft was early on a lot of fronts and overconfident because of it: smartphones, tablets, pen computing, integrated web browsing experiences (which have all but become the user shell at this point, something they were ironically sued for even though some operating systems like Chrome OS are exactly this), and others I’m sure.

Maybe some folks don’t care about one thing or another, but they moved a lot of things forward in critical ways to get us where we are today. Almost all of these fronts have been improved upon by others, so folks might claim that it didn’t really matter what Microsoft did, but I think many crucial turning points in history come from things Microsoft decided to change. It’s the same way we look back at Apple deciding the smartphone wasn’t done… most folks today have Android handsets, but it’d be silly to pretend that the iPhone didn’t create the momentum in rethinking things, just like products before the iPhone did.

                                                                            1. 3

The interesting contribution of PCG comes from the observation that quality should be evaluated relative to the number of bits used as state, rather than against some perfect ideal of a random number, which is hard to test since something can never be shown to be random, only shown to be biased (or not so random looking). This means you can only really detect negative results, which makes a lot of earlier evaluation frameworks poor benchmarks (pass/fail rather than structural analysis of limitations).

The structure here is applicable to PRNGs and CSPRNGs in the sense that N bits give 2^N states, and thus you have an upper bound on your cycle length. Modeling an ideal permutation of states gives you the baseline for evaluating performance. From there PCG offers a few interesting points:

                                                                              • your output should not be your entire state
• evaluation should include the change in quality versus bits added (for example, the Mersenne Twister does not scale well here; LCGs do better)
                                                                              • it’s possible to modify a reasonable bit mixing construction to provide better cycle behavior by augmenting how state is managed and how state is turned into outputs

This leads to some remarkable improvements using this new framework for analysis, but it’s not necessarily going to be the best performing in terms of CPU behavior. There are some tricky things about how data flows through PCG that make it resistant to instruction-level parallelism. Likewise, hardware acceleration of some other common cryptographic primitives is leading some to use CSPRNGs (these can still be slow with small outputs, but pooling the output can help here). I think PCG is still a great place to start for someone who has an interest in learning more about PRNGs.
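
For a concrete picture of the “keep more state than you output” idea, here is the classic PCG32 generator sketched in Go (the multiplier and increment follow the public minimal PCG32 reference implementation; the seed is arbitrary):

    package main

    import "fmt"

    // pcg32 keeps 64 bits of LCG state but emits only 32 bits per step,
    // so the full state is never exposed in the output.
    type pcg32 struct{ state uint64 }

    func (p *pcg32) next() uint32 {
        old := p.state
        // Advance the underlying LCG (any odd increment selects a stream).
        p.state = old*6364136223846793005 + 1442695040888963407
        // XSH-RR permutation: an xorshift folds the better high bits
        // downward, then the top 5 bits of the old state pick a rotation
        // of the 32-bit result.
        xorshifted := uint32(((old >> 18) ^ old) >> 27)
        rot := uint32(old >> 59)
        return (xorshifted >> rot) | (xorshifted << ((-rot) & 31))
    }

    func main() {
        g := pcg32{state: 42}
        fmt.Println(g.next(), g.next(), g.next())
    }
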

                                                                              1. 4

                                                                                […] there is a fast path where the sending thread is blocked, the receiving thread is unblocked, and control is immediately transferred without a trip through the scheduler.

                                                                                That’s really impressive. I could see this being a big deal in the case of processes doing a lot of this kind of IPC.

                                                                                1. 5

                                                                                  There’s an interesting talk by Paul Turner (kernel engineer at Google): https://www.youtube.com/watch?v=KXuZi9aeGTw

                                                                                  He describes some of the optimized thread-based programming models. It seems they arrived at similar thread swap primitives as well. I’m not sure if they’ve ever managed to upstream any of these ideas but it’s worth a watch.

                                                                                1. 3

                                                                                  I’ve always had an issue with the claim that Option gets rid of null, in the sense that None is null (conceptually and in some cases at runtime). The more precise way to compare unstructured nulls to options is that 1) with options you are usually compelled to explicitly check for null/none/nothing and 2) null does not inhabit other regular types in the language.
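
To make those two points concrete, here’s a minimal sketch of the structure in Go (Option, Some, and None are hypothetical illustrations here, not a standard library API):

    package main

    import "fmt"

    // Option makes "no value" an explicit, checkable state instead of a
    // null inhabiting the wrapped type itself.
    type Option[T any] struct {
        value T
        ok    bool
    }

    func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
    func None[T any]() Option[T]    { return Option[T]{} }

    // Get compels the caller to handle the empty case (point 1).
    func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

    func main() {
        if v, ok := Some(42).Get(); ok {
            fmt.Println("got", v)
        }
        // Point 2: a plain int has no null inhabitant; only Option[int]
        // carries the extra "none" state.
        var plain int // zero value, not null
        fmt.Println(plain)
    }
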

                                                                                  On 2, not all languages enforce this. F#, for example, has to deal with nulls coming from the .Net side, so point 1 is really the core benefit there. Continuing with .Net as an example, value types make this null inhabitant impossible because the values are directly represented and it’s not always possible to reserve some special null value like zero (int x = 0 in C# certainly doesn’t look null to me). So .Net adds a nullable distinction and also lets you explicitly wrap value types so that generics with a nullable constraint work.

                                                                                  In .Net we might ask, why have null at all? Isn’t it a bad thing? Why would I want types that could be nullable when I hear non-nullable types are the feature we want?

                                                                                  The real answer is not that null is a bad thing by itself; it’s a value, like None. The problem is that it implicitly inhabits many or all of your types, and you get no way to say “I’ve already checked this” or “I can promise it’s not null”. This is where optional types take the structure of Option but layer it on as an annotation the compiler understands, undoing the mistake of a null inhabitant where it doesn’t belong.
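
                                                                                  To make point 1 concrete, here’s a toy Option in Ruby (the Some/None names are mine, purely for illustration); the caller is forced to unwrap explicitly instead of dereferencing and hoping:

                                                                                  Some = Struct.new(:value)   # wraps a present value
                                                                                  None = Object.new.freeze    # a single sentinel for absence

                                                                                  def lookup(hash, key)
                                                                                    hash.key?(key) ? Some.new(hash[key]) : None
                                                                                  end

                                                                                  case (result = lookup({ a: 1 }, :b))
                                                                                  when Some then puts result.value
                                                                                  else           puts "missing"   # the empty case is a visible branch
                                                                                  end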

                                                                                  Now, there are also other issues around optional values and good programming practice. I’ve seen codebases where options permeate the code paths deeply, using combinators to keep building up a value while propagating the empty case as an exit path the entire distance. That is still bad code even if it’s free of null dereferences. Making the null explicit makes the issue more obvious, but it’s still important to scrub your inputs early and keep strong contracts in place.

                                                                                  1. 2

                                                                                    Ruby doesn’t automatically give required files their own namespace, but doesn’t evaluate them in the caller’s namespace either.

                                                                                    I don’t think this is true. The reason it seems that way is that module and class definitions in Ruby are scope gates (meaning the scoping rules are somewhat non-lexical). An example to demonstrate that requires are indeed evaluated in the caller’s context is as follows:

                                                                                    # a.rb
                                                                                    module A
                                                                                      def self.call_b
                                                                                        B::call
                                                                                      end
                                                                                    end

                                                                                    # Note that B is nowhere in sight
                                                                                    A.call_b

                                                                                    # b.rb
                                                                                    module B
                                                                                      def self.call
                                                                                        puts "Called inside module B"
                                                                                      end
                                                                                    end

                                                                                    # c.rb
                                                                                    require './b'
                                                                                    require './a'
                                                                                    #=> running `ruby c.rb` prints "Called inside module B"
                                                                                    
                                                                                    1. 4

                                                                                      You’re right but it’s worth understanding how it ends up this way. It’s been many years since I’ve written significant Ruby code so I could be fuzzy on the details but I’ll take a stab at it.

                                                                                      The reason it works is that constant lookup and definition (capitalized names) follow different rules from the other kinds of definitions: local variables, class variables (@@), instance variables (@), and methods (def) (ignoring globals ($) and the special variables and constants for now). Most of the various scoping rules have useful properties, but they do make Ruby a complicated language if you want to really dig down into the details.

                                                                                      One way to start looking into constant lookup behavior is to call the Module.nesting method. When that list has been traversed and the constant still hasn’t been found, lookup falls back to the top-level scope (I believe this order changed between Ruby 1.8 and 1.9, so I’d have to check). The Ruby top-level scope happens to be equivalent to opening Object (if you check the class of self at the top level, you get Object). So in your example, the cross-file sharing happens because b.rb adds B to Object; a.rb then looks for A::B, fails to find it, and falls back to Object::B.
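
                                                                                      A quick illustration of what Module.nesting reports (my own example):

                                                                                      module Outer
                                                                                        module Inner
                                                                                          p Module.nesting  #=> [Outer::Inner, Outer]
                                                                                          # A constant missing from this lexical list is then searched
                                                                                          # on the ancestors and finally Object, i.e. the top level.
                                                                                        end
                                                                                      end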

                                                                                      The top-level scope is also the default target for method definitions made outside of any class or module block, so the following is true:

                                                                                      def abc; end
                                                                                      abc # valid call
                                                                                      Object.private_instance_methods.include? :abc #=> true
                                                                                      # (top-level defs become *private* instance methods of Object)
                                                                                      

                                                                                      However if I use self at the top-level:

                                                                                      def self.xyz; end
                                                                                      xyz # valid call
                                                                                      Object.private_instance_methods.include? :xyz #=> false
                                                                                      

                                                                                      This method instead ends up on the eigenclass/singleton class of self (NB: the term “singleton class” always seemed to confuse non-Ruby coders, as it sounds similar to, but has nothing to do with, the singleton pattern).

                                                                                      Likewise, setting a local or instance variable does not leak across file scopes. Class variables do, and you can test this in irb with a nested session like:

                                                                                      > @@foo = 42
                                                                                      > irb # spawn a new irb evaluation context, similar to having a new file
                                                                                      > @@foo #=> 42
                                                                                      

                                                                                      So with that tested, we can see that Ruby files “leak” quite a bit, but with modules being the preferred container for top-level code, this is rarely a problem outside of lazy or novice coding. It seems similar to Python’s “we’re all adults” approach to public fields on classes. The fact that Ruby doesn’t enforce rigid separation is a double-edged sword, but for the ~10 years that I wrote Ruby full time, it was never an issue.

                                                                                      The main recommendation is to guard yourself when bringing in third-party code. I usually had a script that would diff constants and methods before and after a require call to see what something touched. It can be pretty telling when a library finds it advantageous to implicitly modify a bunch of things, and I generally avoided using those libraries. People who haven’t taken such precautions tell me their horror stories of “that time they had to use Ruby”, which almost always boil down to this issue.
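
                                                                                      For the curious, that before/after diff can be as simple as the following rough reconstruction from memory (with ‘somegem’ standing in for whatever library you’re auditing):

                                                                                      before_consts  = Object.constants
                                                                                      before_methods = Object.instance_methods + Object.private_instance_methods

                                                                                      require 'somegem'  # hypothetical third-party library

                                                                                      new_consts  = Object.constants - before_consts
                                                                                      new_methods = (Object.instance_methods + Object.private_instance_methods) - before_methods
                                                                                      puts "new top-level constants: #{new_consts.inspect}"
                                                                                      puts "new Object methods:      #{new_methods.inspect}"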

                                                                                      1. 1

                                                                                        Your explanation should be a blog post.

                                                                                        I’ve been writing Ruby for a while as well but the scoping rules still bite me and I use a subset I can reason about by just reading the source to avoid accidentally evaluating something in a caller’s context. Usually that means putting things inside modules or classes and avoiding script level evaluation like in the example I posted.

                                                                                        1. 2

                                                                                          Thanks for the suggestion. I’ve been meaning to check up on how Ruby has changed over the past few years so this might be a good way to do so.

                                                                                    1. 6

                                                                                      I was a bit disappointed that there wasn’t a final evolution punchline waiting at the bottom in the form of an obvious and elegant solution.

                                                                                      1. 3

                                                                                        It was an elegant solution to the job security problem. Good luck hiring another person who (a) knows Rust and (b) can maintain that pile of code.

                                                                                        1. 1

                                                                                          Could you expand? How is Rust viewed in the security community, both as a target for attacks and as a language to possibly write safer programs?

                                                                                          1. 3

                                                                                            It was a joke about how some people build overly-complex code on purpose to make the worker a necessity the company can’t get rid of. The Rust code toward the end could serve as an example.

                                                                                            As far as the security community goes, I can’t say, given I’m not in the mainstream security community. I know a few people here who like that it prevents more code injections by default. I know security and non-security folks alike like the no-GC safety, if they can stand the borrow checker. As far as high-assurance systems go, I’m against it other than for prototyping, because C and Ada/SPARK have more tools to find/prevent errors, and there’s a certifying compiler for C. Plus, safety-critical folks might already use them.

                                                                                            Now, there’s another group among C developers who believe stopping errors is the programmer’s responsibility, that C is adequate for such people to write error-free code, optionally that its syntax/semantics are better than others’, and that compiler-to-binary translation isn’t an issue (except for the Karger/Thompson attack). Some people in this crowd will use some tooling, especially sanitizers. You can’t get them to switch to a safer language or use high-assurance methods due to their ideological beliefs about programming.

                                                                                            So, I try to figure out quickly which group I’m talking to so I don’t waste our time on stuff they’ll ignore. I try to give each group whatever might help them, among the methods they might actually be willing to use.

                                                                                        2. 3

                                                                                          The good answers were hidden in the middle: “Junior Rust Programmer” and “Functional Rust Programmer”. The error pattern in “Senior Rust Programmer” is a good read too, but obviously a tad complected for such a small thing.

                                                                                          1. 2

                                                                                            I think the first one was supposed to be the right one. I don’t really read Rust, but the first one was the only one I could mostly understand.

                                                                                          1. 1

                                                                                            Non-Ruby developer here. Is there any difference between a Ruby yield and just passing in a lambda as an argument?

                                                                                            1. 2

                                                                                              Yes, though there are a few aspects in play here. First, a block parameter is a special argument, and in order to pass a lambda in as a block parameter you must use the & operator (the operator attempts to coerce non-proc objects to procs via #to_proc, which is why symbols are often passed this way).
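
                                                                                              For example (illustrative):

                                                                                              square = ->(x) { x * x }   # a lambda
                                                                                              [1, 2, 3].map(&square)     #=> [1, 4, 9]         & passes it as the block
                                                                                              [1, 2, 3].map(&:to_s)      #=> ["1", "2", "3"]   Symbol#to_proc coercion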

                                                                                              When you have a block in Ruby, the lifetime of the lexical context is tied to the call frame that is created upon invocation. This might sound like an implementation detail, but it allows the runtime to avoid allocating a Proc object to hold the closed-over lexical environment (i.e., faster and lighter). Generally, if you know you’re receiving a block, it can be an advantage to stick with yield and the implicit parameter rather than pull the proc out as a value and use #call (or an alias of #call like #[]) on the object (more recent versions optimize some of these cases, lazily allocating the object if you only forward it to another call).
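
                                                                                              A small sketch of the two shapes (the method names are mine):

                                                                                              def each_doubled_fast(xs)
                                                                                                xs.each { |x| yield x * 2 }      # implicit block: no Proc allocation needed
                                                                                              end

                                                                                              def each_doubled_slow(xs, &blk)
                                                                                                xs.each { |x| blk.call(x * 2) }  # &blk reifies the block into a Proc object
                                                                                              end

                                                                                              each_doubled_fast([1, 2]) { |n| puts n }
                                                                                              each_doubled_slow([1, 2]) { |n| puts n }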

                                                                                              The other difference is around parameter behavior. Blocks and procs don’t require the arity of the block and the yield to match, so you can take fewer or more arguments than are passed: the extras are dropped and the missing ones are set to nil. Lambdas, however, require the arity to match, much like a method call (they are distinctly constructed using lambda or the -> “stabby” syntax). This can be a good thing, but I’ll avoid writing half an article here.
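
                                                                                              Concretely:

                                                                                              loose  = proc   { |a, b| [a, b] }
                                                                                              strict = lambda { |a, b| [a, b] }

                                                                                              loose.call(1)        #=> [1, nil]   missing parameters become nil
                                                                                              loose.call(1, 2, 3)  #=> [1, 2]     extra arguments are dropped
                                                                                              strict.call(1)       # raises ArgumentError, just like a method call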

                                                                                              One more advanced case that many are unaware of: you can capture the method’s block using the Proc constructor without passing it a block of its own, so these end up working in a similar way:

                                                                                              def a_is_42(a, &callable)
                                                                                                # &callable eagerly reifies the block into a Proc on every call
                                                                                                callable.call if a == 42 && callable
                                                                                              end

                                                                                              def b_is_42(b)
                                                                                                # Proc.new with no block implicitly captures the block passed to b_is_42
                                                                                                Proc.new.call if b == 42 && block_given?
                                                                                              end
                                                                                              

                                                                                              Of course, the second one is a contrived example because we could just use yield there, but it does show that we can build the proc object lazily, which can be a big win in some hot paths in Ruby code. There are some more optimization techniques, but this gives a little taste of the range of differences between the proc world and the block world. If people are curious, I can write up more about this stuff.

                                                                                              TL;DR, you can avoid a lot of allocation and call overhead by keeping things in block form.

                                                                                              1. 1

                                                                                                That makes sense, thanks!

                                                                                              2. 2

                                                                                                The only difference is that your lambda will be an instance of the class Proc, which is in charge of storing and executing the piece of code associated with your lambda.

                                                                                                It’s a bit more complicated than this, but let’s keep it simple if you’re not familiar with Ruby. ;-)

                                                                                                I invite you to read this article to dive into the specificities of the Proc class and lambdas in Ruby.

                                                                                                1. 1

                                                                                                  Which article?

                                                                                                    1. 1

                                                                                                      Thanks!

                                                                                              1. 15

                                                                                                I bought the limited edition hardcopy; this is a super-fun game.

                                                                                                If you like this game, you will likely enjoy the other zachtronics games.

                                                                                                If you enjoyed writing assembly in DOS, TIS-100 is your best bet.

                                                                                                If you like graphical/visual programming, try Opus Magnum.

                                                                                                For the games listed above, you can see the cost of your Steam friends’ solutions. Once you’ve solved a puzzle, it’s surprisingly fun to try to beat the scores of people you know.

                                                                                                1. 10

                                                                                                  Shenzhen I/O is pretty rad too; bought the feelies for that one, and get asked about the binder routinely :D

                                                                                                  1. 6

                                                                                                    If Verilog/VHDL is your kink, MHRD is a lot of fun.

                                                                                                    1. 3

                                                                                                      That’s been on my Steam wish list for a very long time. I’ll have to check it out when I “finish” EXAPUNKS. My brief interaction with Verilog was mind-opening. I’d really enjoy a game with a similar medium but the right constraints.

                                                                                                      1. 1

                                                                                                        Funny, I’m just teaching myself Verilog right now! I’m still in the random-walk stage, but hopefully soon I’ll have a clue.