Threads for tt

  1. 1

    Data-oriented thinking would compel us to get rid of deserialization step instead, but we will not pursue that idea this time.

    I found this statement interesting, but I’m finding it pretty difficult to imagine what this would look like. Could someone with more knowledge of data-oriented thinking explain this?

    1. 3

      So if we think about this, the db interface itself, where it returns a freshly-allocated Vec of bytes, is redundant. It necessarily includes a copy somewhere, to move the data from bytes on disk to the Rust-managed heap.

      It probably is possible to get at those bytes in a more direct way: most likely the db either mmaps the file, or includes some internal caches. Either way, you probably can get a zero-copy &[u8] out of it. And it should be possible to “cast” that slice of bytes to &Widget, using something like rkyv or abomonation. That is, if the Widget struct is defined in such a way that the bytes-in-the-form-they-are-on-disk are immediately usable.
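      A minimal sketch of that “cast”, in plain Rust rather than rkyv or abomonation (the Widget fields here are hypothetical): it only works if the type is plain-old-data with a fixed #[repr(C)] layout and the on-disk bytes match the native endianness and alignment, which is exactly the kind of invariant those libraries check for you.

      ```rust
      #[repr(C)]
      #[derive(Debug, PartialEq)]
      struct Widget {
          id: u32,
          weight: u32,
      }

      // Borrow a Widget view directly out of a byte slice, with no allocation.
      fn cast_widget(bytes: &[u8]) -> Option<&Widget> {
          if bytes.len() < std::mem::size_of::<Widget>() {
              return None; // too short to hold a Widget
          }
          if bytes.as_ptr() as usize % std::mem::align_of::<Widget>() != 0 {
              return None; // misaligned
          }
          // Safety: length and alignment checked above, and every bit
          // pattern is a valid Widget (two u32 fields).
          Some(unsafe { &*(bytes.as_ptr() as *const Widget) })
      }

      fn demo() -> Option<(u32, u32)> {
          // Simulate "bytes in the form they are on disk" (native endianness).
          let on_disk: [u32; 2] = [7, 42];
          let bytes: &[u8] =
              unsafe { std::slice::from_raw_parts(on_disk.as_ptr() as *const u8, 8) };
          cast_widget(bytes).map(|w| (w.id, w.weight))
      }

      fn main() {
          assert_eq!(demo(), Some((7, 42)));
      }
      ```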

      1. 2

        Or something like https://capnproto.org/

    1. 16

      In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs. Then Node started warning me about security problems in some of those libraries. I ended up taking some time finding alternative packages with fewer dependencies.

      On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t. It’s cool to look at how tiny and efficient code can be — a Scheme interpreter in 4KB! The original Mac OS was 64KB! — but yowza, is it ever difficult to code that way.

      There was an early Mac word processor — can’t remember the name — that got a lot of love because it was super fast. That’s because they wrote it in 68000 assembly. It was successful for some years, but failed by the early 90s because it couldn’t keep up with the feature set of Word or WordPerfect. (I know Word has long been a symbol of bloat, but trust me, Word 4 and 5 on Mac were awesome.) Adding features like style sheets or wrapping text around images took too long to implement in assembly compared to C.

      The speed and efficiency of how we’re creating stuff now is crazy. People are creating fancy OSs with GUIs in their bedrooms with a couple of collaborators, presumably in their spare time. If you’re up to speed with current Web tech you can bring up a pretty complex web app in a matter of days.

      1. 24

        I don’t know, I think there’s more to it than just “these darn new languages with their package managers made dependencies too easy, in my day we had to manually download Boost uphill both ways” or whatever. The dependencies in the occasional Swift or Rust app aren’t even a tenth of the bloat on my disk.

        It’s the whole engineering culture of “why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application, and then implement your glorified scp GUI application inside that, so that you never have to learn anything other than the one and only tool you know”. Everything’s turned into 500megs worth of nail because we’ve got an entire generation of Hammer Engineers who won’t even consider that it might be more efficient to pick up a screwdriver sometimes.

        We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t

        That’s the argument, but it’s not clear to me that we haven’t severely over-corrected at this point. I’ve watched teams spend weeks poking at the mile-high tower of leaky abstractions any react-native mobile app teeters atop, just to try to get the UI to do what they could have done in ten minutes if they’d bothered to learn the underlying platform API. At some point “make all the world a browser tab” became the goal in-and-of-itself, whether or not that was inefficient in every possible dimension (memory, CPU, power consumption, or developer time). It’s heretical to even question whether or not this is truly more developer-time-efficient anymore, in the majority of cases – the goal isn’t so much to be efficient with our time as it is to just avoid having to learn any new skills.

        The industry didn’t feel this sclerotic and incurious twenty years ago.

        1. 7

          It’s heretical to even question whether or not this is truly more developer-time-efficient anymore

          And even if we set that question aside and assume that it is, it’s still just shoving the costs onto others. Automakers could probably crank out new cars faster by giving up on fuel-efficiency and emissions optimizations, but should they? (Okay, left to their own devices they probably would, but thankfully we have regulations they have to meet.)

          1. 1

            left to their own devices they probably would, but thankfully we have regulations they have to meet.

            Regulations. This is it.

            I’ve long believed that this is very important in our industry. As earlier comments say, you can make a complex web app after work in a weekend. But then there are people, in the auto industry mentioned above, who take three sprints to set up a single screen with a table, a popup, and two forms. That’s after they’ve pulled in the entire internet’s worth of dependencies.

            On the one hand, we don’t want to be gatekeeping. We want everyone to contribute. When dhh said we should stop celebrating incompetence, the majority of people around him called it gatekeeping. Yet when we see or say something like this - don’t build bloat, or something along those lines - everyone agrees.

            I think the answer lies somewhere in between. Let individuals do whatever the hell they want. But regulate “selling” stuff for money, or advertisement eyeballs, or anything similar. If an app is more than X MB (some reasonable target), it has to get certified before you can publish it. Or maybe only if it’s a popular app. Or, if a library is included in more than X apps, then that lib either gets “certified”, or further apps using it are banned.

            I am sure that is a huge, immensely big can of worms. There will be many problems there. But if we don’t start cleaning up the shit, it’s going to pile up.

            A simple example - if controversial - is Google. When they start punishing a web app for not rendering within 1 second, everybody on the internet (who wants to be at the top of Google) starts optimizing for performance. So, it can be done. We just have to set up - and maintain - a system that deals with the problem… well, systematically.

          2. 1

            why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application

            Yeah. One of the things that confuses me is why apps bundle a browser when platforms already come with browsers that can easily be embedded in apps. You can use Apple’s WKWebView class to embed a Safari-equivalent browser in an app that weighs in at under a megabyte. I know Windows has similar APIs, and I imagine Linux does too (modulo the combinatorial expansion of number-of-browsers times number-of-GUI-frameworks.)

            I can only imagine that whoever built Electron felt that devs didn’t want to deal with having to make their code compatible with more than one browser engine, and that it was worth it to shove an entire copy of Chromium into the app to provide that convenience.

            1. 1

              Here’s an explanation from the Slack developer who moved Slack for Mac from WebKit to Electron. And on Windows, the only OS-provided browser engine until quite recently was either the IE engine or the abandoned EdgeHTML.

          3. 10

            On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

            The problem is that your dependencies can behave strangely, and you need to debug them.

            Code bloat makes programs hard to debug. It costs programmer time.

            1. 3

              The problem is that your dependencies can behave strangely, and you need to debug them.

              To make matters worse, developers don’t think carefully about which dependencies they’re bothering to include. For instance, if image loading is needed, many applications could get by with image read support for one format (e.g. with libpng). Too often I’ll see an application depend on something like ImageMagick which is complete overkill for that situation, and includes a ton of additional complex functionality that bloats the binary, introduces subtle bugs, and wasn’t even needed to begin with.

            2. 10

              On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

              The problem is that computational resources vs. programmer time is just one axis along which this tradeoff is made: some others include security vs. programmer time, correctness vs. programmer time, and others I’m just not thinking of right now I’m sure. It sounds like a really pragmatic argument when you’re considering your costs because we have been so thoroughly conditioned into ignoring our externalities. I don’t believe the state of contemporary software would look like it does if the industry were really in the habit of pricing in the costs incurred by others in addition to their own, although of course it would take a radically different incentive landscape to make that happen. It wouldn’t look like a code golfer’s paradise, either, because optimizing for code size and efficiency at all costs is also not a holistic accounting! It would just look like a place with some fewer amount of data breaches, some fewer amount of corrupted saves, some fewer amount of Watt-hours turned into waste heat, and, yes, some fewer amount of features in the case where their value didn’t exceed their cost.

              1. 7

                We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t

                But we aren’t. Because modern resource-wasteful software isn’t really released quicker. Quite the contrary: there is so much development overhead that we don’t see those exciting big releases anymore, with a dozen features everyone loves at first sight. New features are released in increments so microscopic, and so slowly, that hardly any project survives 3-5 years without becoming obsolete or out of fashion.

                What we are trading is quality for quantity. We lower the skill and knowledge barrier so much, to accommodate the millions of developers who “learned how to program in one week”, that the results are predictably what this post talks about.

                1. 6

                  I’m as much against bloat as everyone else (except those who make bloated software, of course—those clearly aren’t against it). However, it’s easy to forget that small software from past eras often couldn’t do much. The original Mac OS could be 64KB, but no one would want to use such a limited OS today!

                  1. 5

                    The original Mac OS could be 64KB, but no one would want to use such a limited OS today!

                    Seems some people (@neauoire) do want exactly that: https://merveilles.town/@neauoire/108419973390059006

                    1. 6

                      I have yet to see modern software that is saving the programmer’s time.

                      I’m here for it, I’ll be cheering when it happens.

                      This whole thread reminds me of a little .txt file that came packaged into DawnOS.

                      It read:

                      Imagine that software development becomes so complex and expensive that no software is being written anymore, only apps designed in devtools. Imagine a computer, which requires 1 billion transistors to flicker the cursor on the screen. Imagine a world, where computers are driven by software written from 400 million lines of source code. Imagine a world, where the biggest 20 technology corporation totaling 2 million employees and 100 billion USD revenue groups up to introduce a new standard. And they are unable to write even a compiler within 15 years.

                      “This is our current world.”

                      1. 11

                        I have yet to see modern software that is saving the programmer’s time.

                        People love to hate Docker, but having had the “pleasure” of doing everything from full-blown install-the-whole-world-on-your-laptop dev environments to various VM applications that were supposed to “just work”… holy crap does Docker save time not only for me but for people I’m going to collaborate with.

                        Meanwhile, programmers of 20+ years prior to your time are equally as horrified by how wasteful and disgusting all your favorite things are. This is a never-ending cycle where a lot of programmers conclude that the way things were around the time they first started (either programming, or tinkering with computers in general) was a golden age of wise programmers who respected the resources of their computers and used them efficiently, while the kids these days have no respect and will do things like use languages with garbage collectors (!) because they can’t be bothered to learn proper memory-management discipline like their elders.

                        1. 4

                          I’m of the generation that started programming at the tail end of Ruby and Objective-C, and I would definitely not call this the golden age; if anything, looking back at this period now, it looks like a mid-slump.

                        2. 4

                          I have yet to see modern software that is saving the programmer’s time.

                          What’s “modern”? Because I would pick a different profession if I had to write code the way people did prior to maybe the late 90s (at minimum).

                          Edit: You can pry my modern IDEs and toolchains from my cold, dead hands :-)

                    2. 6

                      Node is an especially good villain here because JavaScript has long specifically encouraged lots of small dependencies and has little to no stdlib, so you need a package for nearly everything.

                      1. 5

                        It’s kind of a turf war as well. A handful of early adopters created tiny libraries that should be single functions or part of a standard library. Since their notoriety depends on these libraries, they fight to keep them around. Some are even on the boards of the downstream projects and fight to keep their own library in the list of dependencies.

                      2. 6

                        We’re trading CPU time and memory, which are ridiculously abundant

                        CPU time is essentially equivalent to energy, which I’d argue is not abundant, whether at the large scale of the global problem of sustainable energy production, or at the small scale of mobile device battery life.

                        for programmer time, which isn’t.

                        In terms of programmer-hours available per year (which of course unit-reduces to active programmers), I’m pretty sure that resource is more abundant than it’s ever been at any point in history, and it’s only getting more so.

                        1. 2

                          CPU time is essentially equivalent to energy

                          When you divide it by the CPU’s efficiency, yes. But CPU efficiency has gone through the roof over time. You can get embedded devices with the performance of some fire-breathing tower PC of the 90s, that now run on watch batteries. And the focus of Apple’s whole line of CPUs over the past decade has been power efficiency.

                          There are a lot of programmers, yes, but most of them aren’t the very high-skilled ones required for building highly optimal code. The skills for doing web dev are not the same as for C++ or Rust, especially if you also constrain yourself to not reaching for big pre-existing libraries like Boost, or whatever towering pile of crates a Rust dev might use.

                          (I’m an architect for a mobile database engine, and my team has always found it very difficult to find good developers to hire. It’s nothing like web dev, and even mobile app developers are mostly skilled more at putting together GUIs and calling REST APIs than they are at building lower-level model-layer abstractions.)

                        2. 2

                          Hey, I don’t mean to be a smart ass here, but I find it ironic that you start your comment blaming “high-level languages with package systems” and immediately admit that you blindly picked a library for the job, and that you could solve the problem just by “taking some time finding alternative packages with fewer dependencies”. That honestly doesn’t sound like a problem with either the language or the package manager.

                          What would you expect the package manager to do here?

                          1. 8

                            I think the problem here actually lies with the language. JavaScript has such a piss-poor standard library and dangerous semantics (which the standard library doesn’t try to remedy, either) that sooner rather than later you will have a transitive dependency on isOdd, isEven, and isNull, because even those simple operations aren’t exactly simple in JS.

                            Despite being made to live in a web browser, the JS standard library has very few affordances for working with things like URLs, and despite being targeted toward user interfaces, it has very few affordances for working with dates, numbers, lists, or localisations. This makes dependency graphs both deep and filled with duplicated effort, since two dependencies in your program may depend on different third-party implementations of what should already be in the standard library, themselves duplicating what you already have in your operating system.

                            1. 2

                              It’s really difficult for me to counter an argument that is basically “I don’t like JS”. The question was never about that language; it was about “high-level languages with package systems”, but your answer hyper-focuses on JS and does not address languages like Python, for example, which is also a “high-level language with a package system” and also has an “is-odd” package (which, honestly, I don’t get what that has to do with anything).

                              1. 1

                                The response you were replying to was very much about JS:

                                In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs.

                                For what it’s worth, whilst Python may have an isOdd package, how often do you end up inadvertently importing it in Python as opposed to “batteries-definitely-not-included” Javascript? Fewer batteries included means more imports by default, which themselves depend on other imports, and a few steps down, you will find leftPad.

                                As for isOdd, npmjs.com lists 25 versions thereof, and probably as many isEven.

                                1. 1

                                  and a few steps down, you will find leftPad

                                  What? What kind of data do you have to back up a statement like this?

                                  You don’t like JS, I get it; I don’t like it either. But the unfair criticism is what really rubs me the wrong way. We are technical people; we are supposed to make decisions based on data. But these kinds of comments, which just generate division without the slightest resemblance of a solid argument, do no good for a healthy discussion.

                                  Again, none of the arguments are true for JS exclusively. Python is batteries-included, sure, but it’s one of the few. And you conveniently leave out of your quote the part where the OP admits that with a little effort the “problem” became a non-issue. And that little effort is what we get paid for; that’s our job.

                            2. 3

                              I’m not blaming package managers. Code reuse is a good idea, and it’s nice to have such a wealth of libraries available.

                              But it’s a double-edged sword, especially when you use a highly dynamic language like JS that doesn’t support dead-code stripping or build-time inlining, so you end up having to copy an entire library instead of just the bits you’re using.

                            3. 1

                              On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                              We’re trading CPU and memory for the time of some programmers, but we’re also adding the time of other programmers onto the other side of the balance.

                              1. 1

                                I definitely agree with your bolded point - I think that’s the main driver for this kind of thing.

                                Things change if there’s a reason for them to be changed. The incentives don’t really line up currently to the point where it’s worth it for programmers/companies to devote the time to optimize things that far.

                                That is changing a bit already, though. For example, performance and bundle size are getting seriously considered for web dev these days. Part of the reason for that is that Google penalizes slow sites in their rankings - a very direct incentive to make things faster and more optimized!

                              1. 18

                                With Japanese we could still write variable names in Hiragana, Katakana or Kanji 🤔

                                1. 6

                                  Also the Roman alphabet and Arabic numerals! Plus emoji, Japan’s orthographic gift to the world 🎏🗿💮

                                  1. 5

                                    FULL width is also a thing 🤭

                                    1. 4

                                      ハンカクカナモアルンデス。

                                    2. 2

                                      Didn’t we have emoticons like a decade before emoji became popular (in the west, anyway)? Weren’t those functionally emojis (albeit not part of unicode, but I’m guessing the earliest emojis weren’t either). Anyone know for sure?

                                      1. 1

                                        The first emoji were made for Japanese cellphones in 1997, by which time English emoticons had existed for a decade or so. Japan even had their own variations on emoticons like ^_^.

                                        1. 2

                                          What I remember from that time is being mind blown by how Japanese had taken emoji to a whole new level, breaking free of the idea that eyes were always “:” and that faces were always sideways.

                                          1. 1

                                            I remember chat clients and forums would let you put actual smileys inline with text. It wasn’t ASCII.

                                  1. 2

                                    I end up missing the brutal simplicity of Go interfaces in other languages like Rust.

                                    I feel like Rust gives you the freedom to enjoy this type of simplicity, though. Even though it’s often frowned upon by Rust enthusiasts, Rc<RefCell<dyn Trait>> is very similar to Go interface types. You don’t have to worry too much about lifetimes or concrete types, though it does add a little bit of additional overhead from the RefCell having to check outstanding references every time you want to actually do something with the contents.

                                    I suppose it’s not quite as simple, but Rc<RefCell<dyn Trait>> and Box<dyn Trait> both have a somewhat similar amount of cognitive load from typing as Go interfaces do.
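                                    For illustration (the trait and type names here are made up), an Rc<RefCell<dyn Trait>> value does behave a lot like a Go interface variable: shared, mutable, dynamically dispatched, with the borrow checking deferred to runtime:

                                    ```rust
                                    use std::cell::RefCell;
                                    use std::rc::Rc;

                                    trait Greeter {
                                        fn greet(&self) -> String;
                                        fn set_name(&mut self, name: &str);
                                    }

                                    struct Person {
                                        name: String,
                                    }

                                    impl Greeter for Person {
                                        fn greet(&self) -> String {
                                            format!("hi, {}", self.name)
                                        }
                                        fn set_name(&mut self, name: &str) {
                                            self.name = name.to_string();
                                        }
                                    }

                                    // Shared, mutable, dynamically dispatched value -- roughly what a
                                    // Go interface variable gives you.
                                    fn demo() -> String {
                                        let g: Rc<RefCell<dyn Greeter>> =
                                            Rc::new(RefCell::new(Person { name: "ada".into() }));
                                        let g2 = Rc::clone(&g); // cheap shared handle, like copying an interface value
                                        g2.borrow_mut().set_name("grace"); // mutability checked at runtime
                                        g.borrow().greet()
                                    }

                                    fn main() {
                                        assert_eq!(demo(), "hi, grace");
                                    }
                                    ```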

                                    1. 5

                                      For me, when I compare the Rust version to the Go version, the Rust one seems much more complicated.

                                      1. 2

                                        Rust is more verbose, and things are arranged a little differently, but I would argue they’re similar in terms of actual complexity.

                                        1. 4

                                          How can you look at

                                          let foo: Rc<RefCell<dyn Foo>> = Rc::new(RefCell::new(Baz(42)));
                                          

                                          and

                                          var foo Foo = NewBaz(42)
                                          

                                          and claim that they are anywhere near each other in terms of complexity?

                                          1. 6

                                            You can simplify both cases (Go, Rust) of construction to:

                                            let foo = Baz::new(42); // Rust
                                            foo := NewBaz(42)       // Go
                                            

                                            By moving some of that noise into Baz::new in Rust. It is a little bit unidiomatic, but not much. The thing you can’t easily remove is borrow_mut(). Since Rust asks you to decide about mutability when you define a trait, even an implementation that does no mutations pays the (syntactic) price of that decision. In Go it is the type that implements an interface that decides about that, and call sites don’t differ between mutating and “pure” versions.

                                            I completely understand that this is all for good reasons in Rust (and I very much like Rust), but for me Go (which I also like) is much less noisy (at the cost of weaker compile-time guarantees).
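                                            Concretely, a constructor for the hypothetical Baz above can return the trait object itself, so the wrapper types appear once in the definition instead of at every call site:

                                            ```rust
                                            use std::cell::RefCell;
                                            use std::rc::Rc;

                                            trait Foo {
                                                fn value(&self) -> i32;
                                            }

                                            struct Baz(i32);

                                            impl Foo for Baz {
                                                fn value(&self) -> i32 {
                                                    self.0
                                                }
                                            }

                                            impl Baz {
                                                // Constructor hides the wrapper types, so the call
                                                // site reads like the Go version.
                                                fn new(n: i32) -> Rc<RefCell<dyn Foo>> {
                                                    Rc::new(RefCell::new(Baz(n)))
                                                }
                                            }

                                            fn main() {
                                                let foo = Baz::new(42); // compare: foo := NewBaz(42) in Go
                                                assert_eq!(foo.borrow().value(), 42);
                                            }
                                            ```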

                                            1. 1

                                              Complexity for whom? They have similar semantics. They do similar things. The “simplicity” is just that Go uses that kind of semantics everywhere. Except when it doesn’t.

                                              1. 4

                                                The Rust version (Rc<RefCell<dyn Trait>>) comes with a high cognitive overhead. It is expressive, but I would definitely not call it “simple”.

                                                1. 1

                                                  Yes. And understanding what is actually going on with the Go version is also pretty nontrivial.

                                                  I’m not saying that the shortcut isn’t useful, I’m just saying it’s a shortcut, not an abstraction.

                                                  1. 2

                                                    This thread started with a question about “cognitive load from typing”. Do you think that because Rust exposes implementation details in the type it has lower “cognitive load from typing”?

                                      1. 3

                                        The VM implementation seems to be based on, or at least influenced by, the one described in Crafting Interpreters - Pratt parsing, upvalues, and references to Wren.

                                        1. 6

                                          Yep, I’m a big fan of Nystrom’s past work, including Wren and Magpie. I found his chapters on upvalues and pratt parsing to be quite helpful and well-written while working on Passerine. The current VM is very simple (something like 17 instructions), and not the fastest (but still faster than a tree-walk interpreter); I’m currently working on a much lower-level runtime that should be able to compile to MiniVM IR or Wasm. Getting a rough working prototype out the door helped validate the design principles behind the language, and I hope that as the compiler matures we’ll be able to improve the performance and capabilities of the language.

                                          1. 4

                                            I’m also a fan of his work. Writing my rust implementation of lox vm/compiler was lots of fun :)

                                        1. 3

                                          Imperative programming languages typically make a grammatical distinction between statements and expressions. Functional languages instead tend to be structured in a way that everything inside a function body is an expression. The latter is more minimalistic, and also imposes fewer constraints on the programmer.

                                          When I was first reading about Rust, I read in their documentation that they have statements and expressions. My first impression was that this was a red flag; if the designer really understood what statements are, they would not have included them in the language, and thus their inclusion demonstrated a lack of understanding of the nuance of language design.

                                          Later I came to understand (and I’m not a Rust expert, so I could have gotten this wrong) that their documentation just used a creative definition of the word “statement”: it is, in fact, just an expression which returns the unit type. Not what most people would call a statement at all.

                                          1. 3

                                            You are partially correct. Things like if or for are expressions, but let is not. See an example: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=43fb87f7b27c2199001b2d9678fc23b4
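                                            A small sketch of the distinction: `if` and blocks evaluate to values, while `let` only binds a name:

                                            ```rust
                                            // `if` is an expression: it evaluates to a value and can
                                            // sit on the right-hand side of a `let`.
                                            fn parity(n: i32) -> &'static str {
                                                if n % 2 == 0 { "even" } else { "odd" }
                                            }

                                            // A block is also an expression: its value is its trailing
                                            // expression. The `let y = 1;` inside it is a statement --
                                            // it binds a name but has no value, which is why something
                                            // like `let x = (let y = 1);` does not compile.
                                            fn block_demo() -> i32 {
                                                let y = 1; // statement
                                                y + 1      // trailing expression: the body's value
                                            }

                                            fn main() {
                                                assert_eq!(parity(7), "odd");
                                                assert_eq!(block_demo(), 2);
                                            }
                                            ```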

                                          1. 4

                                            I remember trying Clojure a bit, and being super interested in a lot of the ideas of the language.

                                            There is the universal quibbles about syntax (and honestly I do kinda agree that f(x, y) and (f x y) are not really much different, and I like the removal of commas). But trying to write some non-trivial programs in Clojure/script made me realize that my quibble with lisps and some functional languages is name bindings.

                                            The fact that name bindings require indentation really messes with readability. I understand the sort of… theoretical underpinning of this, and some people will argue that it’s better, but when you’re working with a relatively iterative process, being able to reserve indentation for loops and other blocks (instead of “OK from this point forward this value is named foo”) is nice!

                                            It feels silly but I think it’s important, because people already are pretty lazy about giving things good names, so any added friction is going to make written code harder to read.

                                            (Clojure-specific whine: something about all the clojure tooling feels super brittle. Lots of inscrutable errors for beginners that could probably be mangled into something nicer. I of course hit these and also didn’t fix them, though…)

                                            EDIT: OTOH Clojure-specific stuff for data types is very very nice. Really love the readability improvements from there

                                            1. 5

                                              Interesting to hear this–indentation to indicate binding scope is one of the things I really miss when I’m using a non-Lisp. I feel like the mental overhead of trying to figure out where something is bound and where it’s not is much higher.

                                              (I strongly agree on the state of Clojure tooling.)

                                              1. 1

                                                I think that Racket solves this:

                                                (define (f x)
                                                    (define y (* 10 x))
                                                    (printf "~a ~a\n" y x))
                                                (f 42)
                                                
                                                1. 1

                                                  if you’re e.g. running a transactional database on Apple hardware

                                                  Does anybody do this?

                                                  1. 8

                                                    I would assume that since it’s possible to do that, there’s a non-zero population of people doing that.

                                                    1. 8

                                                      Core Data, used across many Mac apps, uses SQLite internally.

                                                      1. 4
                                                      2. 3

                                                        SQLite is ubiquitous on Apple devices. Almost anything that needs to store structured data uses it, either directly (Mail.app, or Cocoa’s URL cache) or as the backing store of Core Data.

                                                        1. 3

                                                          Yes, but if you look into it (even in comments on this post), SQLite doesn’t suffer from the problem, and it does a full sync.

                                                          1. 4

                                                            And on consumer devices, I’d assume that the speed problem isn’t as critical as if you were doing, idk, prod SaaS db with tens of thousands of concurrent users touching things

                                                      1. 26

                                                        I find the complaints about Go sort of tedious. What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors? For some reason, the author finds the first unacceptable but the second laudable, but practically speaking, why would I care? I write Go programs by stubbing stuff to the point that I can write a test, and then the tests automatically invoke both the compiler and go vet. Whether an error is caught by the one or the other is of theoretical interest only.

                                                        Also, the premise of the article is that the compiler rejecting programs is good, but then the author complains that the compiler rejects programs that confuse uint64 with int.

                                                        In general, the article is good and informative, but the anti-Go commentary is pretty tedious. The author is actually fairly kind to JavaScript (which is good!), but doesn’t have the same sense of “these design decisions make sense for a particular niche” when it comes to Go.

                                                        1. 35

                                                          What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors?

                                                          A big part of our recommendation of Rust over modern C++ for security boiled down to one simple thing: it is incredibly easy to persuade developers to not commit (or, failing that, to quickly revert) code that does not compile. It is much harder to persuade them to not commit code that static analysis tooling says is wrong. It’s easy for a programmer to say ‘this is a false positive, I’m just going to silence the warning’; it’s very difficult to patch the compiler to accept code that doesn’t type check.

                                                          1. 34

                                                            What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors?

                                                            One is optional, the other one is in your face. It’s similar to the C situation. You have asan, ubsan, valgrind, fuzzers, libcheck, pvs and many other things which raise the quality of C code significantly when used on every compilation or even commit. Yet, if I choose a C project at random, I’d bet none of those are used. We’ll be lucky if there are even any tests.

                                                            Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.

                                                            (According to the docs only a subset of the vet suite is used when running “go test”, not all of them - “high-confidence subset”)

                                                            1. 15

                                              When go vet automatically runs on go test, it’s hard to call it optional. I don’t even know how to turn it off unless I dig into the documentation, and I’ve been doing Go for 12+ years now. Technically gofmt is optional too, yet it’s as pervasive as it can be in the Go ecosystem. Tooling ergonomics and conventions matter, as well as first party (go vet) vs 3rd party tooling (valgrind).

                                                              1. 21

                                                                That means people who don’t have tests need to run it explicitly. I know we should have tests - but many projects don’t and that means they have to run vet explicitly and in practice they just miss out on the warnings.

                                                                1. 2

                                                                  Even in projects where I don’t have tests, I still run go test ./... when I want to check if the code compiles. If I used go build I would have an executable that I would need to throw away. Being lazy, I do go test instead.

                                                              2. 13

                                                                Separating the vet checks from the compilation procedure exempts those checks from Go’s compatibility promise, so they could evolve over time without breaking compilation of existing code. New vet checks have been introduced in almost every Go release.

                                                                Compiler warnings are handy when you’re compiling a program on your own computer. But when you’re developing a more complex project, the compilation is more likely to happen in a remote CI environment, and making sure that all the warnings are bubbled up is tedious and in practice usually overlooked. It is thus much simpler to just have separate workflows for compilation and (optional) checks. With compiler warnings you can certainly have a workflow that does -Werror; but once you treat CI as being as important as local development, the separate-workflow design is the simpler one - especially considering that most checks don’t need to perform a full compilation and are much faster that way.

                                                                Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.

                                                                I feel that the Go team cares more about enabling organizational processes, rather than encouraging individual habits. The norm for well-run Go projects is definitely to have vet checks (and likely more optional linting, like staticcheck) as part of CI, so that’s perhaps good enough (for the Go team).

                                                                All of this is quite consistent with Go’s design goal of facilitating maintenance of large codebases.

                                                                1. 5

                                                                  Subjecting warnings to compatibility guarantees is something that C is coming to regret (prior discussion).

                                                                  And for a language with as… let’s politely call it opinionated a stance as Go, it feels a bit odd to take the approach of “oh yeah, tons of unsafe things you shouldn’t do, oh well, up to you to figure out how to catch them and if you don’t we’ll just say it was your fault for running your project badly”.

                                                                2. 4

                                                                  The difference is one language brings the auditing into the tooling. In C, it’s all strapped on from outside.

                                                                  1. 19

                                                                    Yeah, “similar” is doing some heavy lifting there. The scale is more like: default - included - separate - missing. But I stand by my position - Rust is more to the left than Go, and that’s a better place to be. The less friction, the more likely people will notice/fix issues.

                                                                  2. 2

                                                                    I’ll be honest, I get this complaint about it being an extra command to run, but I haven’t ever run go vet explicitly because I use gopls. Maybe I’m in a small subset going the LSP route, but as far as I can tell gopls by default has good overlap with go vet.

                                                                    But I tend to use LSPs whenever they’re available for the language I’m using. I’ve been pretty impressed with rust-analyzer too.

                                                                  3. 12

                                                                    On the thing about maps not being goroutine safe, it would be weird for the spec to specify that maps are unsafe. Everything is unsafe except for channels, mutexes, and atomics. It’s the TL;DR at the top of the memory model: https://go.dev/ref/mem
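                                                                    For reference, the usual pattern is to guard the map with one of those primitives; a minimal sketch:

                                                                    ```go
                                                                    package main

                                                                    import (
                                                                    	"fmt"
                                                                    	"sync"
                                                                    )

                                                                    func main() {
                                                                    	var mu sync.Mutex
                                                                    	counts := map[string]int{}
                                                                    	var wg sync.WaitGroup

                                                                    	for i := 0; i < 100; i++ {
                                                                    		wg.Add(1)
                                                                    		go func() {
                                                                    			defer wg.Done()
                                                                    			mu.Lock() // without this, concurrent writes would race
                                                                    			counts["hits"]++
                                                                    			mu.Unlock()
                                                                    		}()
                                                                    	}
                                                                    	wg.Wait()
                                                                    	fmt.Println(counts["hits"]) // 100
                                                                    }
                                                                    ```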

                                                                    1. 6

                                                                      Agreed. Whenever people complain about the Rust community being toxic, this author is who I think they’re referring to. These posts are flame bait and do a disservice to the Rust community. They’re like the tabloid news of programming, focusing on the titillating bits that inflame division.

                                                                      1. 5

                                                                        I don’t know if I would use the word “toxic” which is very loaded, but just to complain a little more :-) this passage:

                                                                          go log.Println(http.ListenAndServe("localhost:6060", nil))
                                                                        

                                                                        Jeeze, I keep making so many mistakes with such a simple language, I must really be dense or something.

                                                                        Let’s see… ah! We have to wrap it all in a closure, otherwise it waits for http.ListenAndServe to return, so it can then spawn log.Println on its own goroutine.

                                                                         go func() {
                                                                             log.Println(http.ListenAndServe("localhost:6060", nil))
                                                                         }()
                                                                        

                                                                        There are approximately 10,000 things in Rust that are subtler than this. Yes, it’s an easy mistake to make as a newcomer to Go. No, it doesn’t reflect even the slightest shortcoming in the language. It’s a very simple design: the go statement takes a function and its arguments. The arguments are evaluated in the current goroutine. Once evaluated, a new goroutine is created with the evaluated parameters passed into the function. Yes, that is slightly subtler than just evaluating the whole line in a new goroutine, but if you think about it for one second, you realize that evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.

                                                                        Like, I get it, it sucks that you made this mistake when you were working in a language you don’t normally use, but there’s no need for sarcasm or negativity. This is in fact a very “simple” design, and you just made a mistake because even simple things actually need to be learned before you can do them correctly.
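                                                                        For what it’s worth, the evaluation order is easy to observe directly. In this sketch (hypothetical `produce` function, my own), the argument is evaluated in main’s goroutine before the new goroutine starts:

                                                                        ```go
                                                                        package main

                                                                        import "fmt"

                                                                        // produce runs in the caller's goroutine, before the new goroutine exists.
                                                                        func produce() int {
                                                                        	fmt.Println("argument evaluated")
                                                                        	return 42
                                                                        }

                                                                        func main() {
                                                                        	done := make(chan struct{})
                                                                        	go func(v int) {
                                                                        		fmt.Println("goroutine received", v)
                                                                        		close(done)
                                                                        	}(produce()) // produce() is evaluated here, in main's goroutine
                                                                        	<-done
                                                                        }
                                                                        ```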

                                                                        1. 3

                                                                          In practice, about 99% of uses of the go keyword are in the form go func() {}(). Maybe we should optimize for the more common case?

                                                                          1. 1

                                                                            I did a search of my code repo, and it was ⅔ go func() {}(), so you’re right that it’s the common case, but it’s not the 99% case.

                                                                          2. 2

                                                                            I agree that the article’s tone isn’t helpful. (Also, many of the things that the author finds questionable in Go can also be found in many other languages, so why pick on Go specifically?)

                                                                            But could you elaborate on this?

                                                                            evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.

                                                                            IMO this is less surprising than what Go does. The beautiful thing about “the evaluation of the whole expression is deferred” is precisely that you don’t need to remember a more complicated arbitrary rule for deciding which subexpressions are deferred (all of them are!), and you don’t need ugly tricks like wrapping the whole expression in a closure which is the applied to the empty argument list.

                                                                            Go’s design makes sense in context, though. Go’s authors are culturally C programmers. In idiomatic C code, you don’t nest function calls within a single expression. Instead, you store the results of function calls into temporary variables and only then pass those variables to the next function call. Go’s design doesn’t cause problems if you don’t nest function calls.

                                                                        2. 3

                                                                          At least they mention go vet, so even people like me who don’t know it can arrive at similar conclusions. And they also mention that the author is somewhat biased.

                                                                          But I think they should just calmly state, without ceremony like “And yet there are no compiler warnings”, that this is the compiler output and this is the output of go vet.

                                                                          This also seems unnecessary:

                                                                          Why we need to move it into a separate package to make that happen, or why the visibility of symbols is tied to the casing of their identifiers… your guess is as good as mine.

                                                                          Subjectively, this reads as unnecessarily dismissive. There are more instances similar to this, so I get why you are annoyed. It makes their often valid criticism weaker.

                                                                          I think it comes as a reaction to people claiming that golang is so simple, when in their (biased but true) experience it is full of little traps.

                                                                          Somewhat related: what I also dislike is that they use loops for creating the tasks in golang and discuss a resulting problem, but then don’t use loops in Rust - probably to keep the code simple.

                                                                          All in all, it is a good article though and mostly not ranty. I think we are setting the bar for fairness pretty high. I mean we are talking about a language fan…

                                                                          1. 5

                                                                            This also seems unnecessary: […]

                                                                            Agree. The frustrating thing here is that there are cases where Rust does something not obvious, the response is “If we look at the docs, we find the rationale: …” but when Go does something that is not obvious, “your guess is as good as mine.” Doesn’t feel like a very generous take.

                                                                            1. 6

                                                                              The author has years of Go experience. He doesn’t want to be generous; he has an axe to grind.

                                                                              1. 3

                                                                                So where’s the relevant docs for why

                                                                                we need to move it into a separate package to make that happen

                                                                                or

                                                                                the visibility of symbols is tied to the casing of their identifiers

                                                                                1. 3

                                                                                  we need to move it into a separate package to make that happen

                                                                                  This is simply not true. I’m not sure why the author claims it is.

                                                                                  the visibility of symbols is tied to the casing of their identifiers

                                                                                  This is Go fundamental knowledge.

                                                                                  1. 3

                                                                                    This is Go fundamental knowledge.

                                                                                    Yes, I’m talking about the rationale.

                                                                                    1. 3

                                                                                      https://go.dev/tour/basics/3

                                                                                      In Go, a name is exported if it begins with a capital letter.

                                                                                      1. 1

                                                                                        rationale, n.
                                                                                        a set of reasons or a logical basis for a course of action or belief

                                                                                      2. 2

                                                                                        Why func and not fn? Why are declarations var type identifier and not var identifier type? It’s just a design decision, I think.

                                                                                2. 1

                                                                                  The information is useful but the tone is unhelpful. What is checked or checkable and what is not is an important difference between these platforms, as is how tightly the correctness guarantees are integrated with the language definition. Although a static analysis tool for JavaScript could, theoretically, find all the bugs that rustc does, this is not really how things play out. The article demonstrates bugs which go vet can not find but which are precluded by Rust’s language definition; that is real and substantive information.

                                                                                  There is more to Go than just some design decisions that make sense for a particular niche. It has a peculiar, iconoclastic design. There are Go evangelists who, much more strenuously than this author and with much less foundation, criticize JavaScript, Python, Rust, &c, as not really good for anything. The author is too keen to poke fun at the Go design and philosophy; but the examples stand on their own.

                                                                                1. 2

                                                                                  I’ve also managed to keep my total runtime ~600ms - https://github.com/tumdum/aoc2021

                                                                                  1. 1

                                                                                    I enjoyed seeing your progress reports on IRC. I was trying to be as pythonic as I could, so it was fun to look on as people worked through with different goals. Especially “fast and in rust.”

                                                                                    Out of curiosity, you’ve got a few of your AoC trees hosted on sr.ht, one on bitbucket and a couple on github. And the split is in a different order than the relative age of those services would explain… what drove your split there?

                                                                                    1. 2

                                                                                      I also find it very interesting seeing others approaches on irc.

                                                                                      As far as my repos:

                                                                                      2015 on sr.ht is I think just me dumping old code found on some hard drive in ~2020.

                                                                                      Bitbucket is because some of the people on my private leaderboard used it and I wanted to fit in

                                                                                      Using and paying for sr.ht was me trying to care about GitHub monopoly. Looks like I don’t really care in the end.

                                                                                      Basically it’s mostly randomness :)

                                                                                  1. 19

                                                                                    At this rate, he might as well write a book on Rust. Learnt so much from his posts. ✅💯

                                                                                    1. 10

                                                                                      I think it’d actually be nice to put some of his guides into the rust docs. Stuff like typical profile setups, error handling infrastructure, serialization, time and rng handling that are totally fine to just include in your application. Obviously you can’t put this in the permanent rust docs or book, as it’s very opinionated and may become outdated easily.

                                                                                      But what I’ve seen is that new people, who didn’t see the ecosystem grow since 2015 and follow the trends, are pretty easily overwhelmed with the question of what you should use and what is a fair price to pay performance-wise* (backtraces, auto-boxing for errors, etc). So it’ll probably be the Rust commons? And I think it should be done sooner rather than later, because while Rust didn’t add a fat std for obvious reasons, it’s hard to search through Stack Overflow just to find out what the typical Rust way of doing logging/backtraces/errors/regex/.. is (or that you should watch out for async-std vs tokio vs tokio 1.0 in async land). And some will certainly just start to write their own (whacky) solutions instead of using the existing ones, creating an even bigger time-to-MVP and thus less motivating results of what you actually wanted to accomplish.

                                                                                      * I’ve watched myself and other people deciding against backtraces (stable_eyre), auto-boxing (anyhow) and tracing, simply because in Rust you can actually think about the overhead these may introduce. Meanwhile, nobody will think about turning off backtraces on a hard crash in Java/Python/Node.js. And I’m very certain this is premature optimization; it prevents you from debugging failures, gives you a harder time using Rust, and contributes to the “hard to learn” feeling.

                                                                                      1. 4

                                                                                        Autoboxing with errors can actually improve performance if the type in your Err variant is large. This is because Rust enums are the size of their largest variant. So if the Err variant of a returned Result is large, the return slot of that particular function will always be large, even if the “happy path” (the Ok variant, since errors should be rare) is small or even zero-sized (()). Boxing ensures the Err variant is only pointer sized.

                                                                                        Of course this isn’t always the case, but it’s common enough that it’s a reason crates like anyhow use this pattern.
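                                                                                        This is easy to check with `size_of` (a sketch with a made-up `BigError` type):

                                                                                        ```rust
                                                                                        use std::mem::size_of;

                                                                                        // A deliberately large error type (hypothetical, for illustration).
                                                                                        struct BigError {
                                                                                            _context: [u8; 256],
                                                                                        }

                                                                                        fn main() {
                                                                                            // A Result is at least as large as its largest variant.
                                                                                            let plain = size_of::<Result<(), BigError>>();
                                                                                            let boxed = size_of::<Result<(), Box<BigError>>>();
                                                                                            assert!(plain >= 256);
                                                                                            // Boxing shrinks the Err variant to pointer size (and the non-null
                                                                                            // Box pointer lets the compiler use its niche for the Ok variant).
                                                                                            assert!(boxed <= 2 * size_of::<usize>());
                                                                                            println!("plain: {plain}, boxed: {boxed}");
                                                                                        }
                                                                                        ```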

                                                                                        1. 3
                                                                                          1. 1

                                                                                            Hm, yeah, true - I didn’t think about that even though I know how Rust enums work. Another pitfall to add to the list. I guess many people just optimize for no-alloc. But I honestly don’t know how well you can do error-kind matching with anyhow/eyre, apart from some type-testing, which would be my go-to for autoboxing.

                                                                                      1. 2

                                                                                        I will share my repo: https://github.com/tumdum/aoc2021 🦀

                                                                                        1. 4

                                                                                          https://github.com/timvisee/advent-of-code-2021

                                                                                          This year I’ll be trying to solve all 50 puzzles combined in <1 second again.

                                                                                          1. 2

                                                                                            How difficult did that end up being last year?

                                                                                            1. 3

                                                                                              Since this is a year ago, I can’t express this in numbers. But yeah, it took me quite some effort to get the runtime of some solutions down. I had to be smart about using algorithms and minimizing runtime. I wrote about some of it here.

                                                                                        1. 1

                                                                                          I like that Rust’s HashMap talks almost immediately focuses on the hashing function used: https://doc.rust-lang.org/std/collections/struct.HashMap.html

                                                                                          Performance-sensitive use cases should consider the hash function an important input to a hash table, and not just given. Hence this article isn’t about the hash map’s badness, but rather about the hash function itself.

                                                                                          1. 1

                                                                                            I didn’t know Rust’s default hash map was so advanced. I’m not sure I agree with using SipHash as the default, since it’s relatively uncommon for a hash map to be a DoS vulnerability, and they’re trading performance for resistance to that. (SipHash would be a bad choice in a (non-networked) game, for instance.)
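                                                                                            For a case like a game, the standard library does let you swap the hasher. Here is a sketch with a toy FNV-1a hasher (my own illustration; a real project would reach for a vetted crate instead):

                                                                                            ```rust
                                                                                            use std::collections::HashMap;
                                                                                            use std::hash::{BuildHasherDefault, Hasher};

                                                                                            // A toy FNV-1a hasher: fast, deterministic, and with no DoS resistance.
                                                                                            struct Fnv1a(u64);

                                                                                            impl Default for Fnv1a {
                                                                                                fn default() -> Self {
                                                                                                    Fnv1a(0xcbf2_9ce4_8422_2325) // FNV-1a offset basis
                                                                                                }
                                                                                            }

                                                                                            impl Hasher for Fnv1a {
                                                                                                fn write(&mut self, bytes: &[u8]) {
                                                                                                    for &b in bytes {
                                                                                                        self.0 ^= u64::from(b);
                                                                                                        self.0 = self.0.wrapping_mul(0x0000_0100_0000_01b3); // FNV prime
                                                                                                    }
                                                                                                }
                                                                                                fn finish(&self) -> u64 {
                                                                                                    self.0
                                                                                                }
                                                                                            }

                                                                                            fn main() {
                                                                                                // The third type parameter swaps SipHash out for our hasher.
                                                                                                let mut map: HashMap<&str, i32, BuildHasherDefault<Fnv1a>> =
                                                                                                    HashMap::default();
                                                                                                map.insert("answer", 42);
                                                                                                println!("{:?}", map.get("answer")); // Some(42)
                                                                                            }
                                                                                            ```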

                                                                                            This paragraph is weird:

                                                                                            The behavior resulting from such a logic error is not specified, but will not result in undefined behavior. This could include panics, incorrect results, aborts, memory leaks, and non-termination.

                                                                                            Is that a typo, and means to say it will not panic etc.? Or is it that those bad effects are possible but are not considered undefined behavior?

                                                                                            1. 2

                                                                                              “Undefined behavior” has a specific meaning in Rust literature - it always refers to cases where a program performs operations which violate Rust’s correctness requirements, which the language says will result in undefined behaviour in that program.

                                                                                              This paragraph in HashMap’s documentation attempts, perhaps unsuccessfully, to convey that your program will behave incorrectly and unpredictably in those cases, but only in ways which are “safe”[1] because they don’t invoke behaviour which is undefined by the language.

                                                                                              I’ve written what I now realise is somewhat of a tome below, which explains this a bit more with some technical details on why this distinction is being made.

                                                                                              The idea of the partition of Rust using unsafe is that the compiler prevents “safe Rust” code from using language operations that can invoke said undefined behaviour[2]. Safe Rust code needs to be able to rely on being protected by Rust’s guarantees for safe code, which the operations allowed in unsafe can violate if used incorrectly. This also means that Rust imposes a contract on authors of code in unsafe blocks, which form the boundary between safe and unsafe Rust code: they too must not allow safe Rust code to invoke undefined behaviour through their unsafe regions.

                                                                                              This means that code in safe Rust which has unspecified behaviour still cannot invoke Rust’s language-level undefined behaviour. By extension, the contract between authors of unsafe Rust and the language requires them to defend against wild misuse and logic errors in safe traits like Eq[3] or Hash, which are expected to comply with certain basic rules, similar to typeclass laws in e.g. Haskell. This is explicitly called out in the Rust language reference. It’s not permissible for your unsafe Rust code to allow even the worst errors in safe code to cause things like stack or heap corruption, dangling references, data races, or violations of Rust’s ownership model.

                                                                                              [1] As this paragraph implies, some surprising things are “safe” because the program state is well defined by the language - panics cause a defined abort or stack unwind, aborts cause the program to halt (which is a defined program state), memory leaks are a performance issue rather than a memory safety issue (see the famous “ballistic missile malloc” anecdote), and obviously, going into an infinite loop or returning incorrect values for safe types is defined behaviour. Integer overflow is also safe (surprising to C and C++ users in particular), because the behaviour is implementation defined, but specified, because the language only permits implementations to choose between either panicking or specifically having two’s-complement wraparound behaviour for integer overflow. This can still result in logic errors, but as discussed later, logic errors in safe code are explicitly defined to be safe.

                                                                                              [2] Such restricted operations are typically useful operations which the language cannot prove are guaranteed to be sound in all cases - use of memory with uninitialised regions, executing functions across the FFI boundary, bit reinterpret casts (known as transmute in Rust jargon), reading or writing of mutable globals, accessing fields on unions, or dereferencing unsafe pointers. Misuse of these can either directly result in undefined behaviour, or can allow undefined behaviour to occur by violating Rust’s guarantees.

                                                                                              [3] A clear logic error in Eq would be just returning random results, because Eq specifies implementors must return the same result for multiple invocations with the same values. A more surprising logic error is failing to implement the Eq trait with the reflexive ((a == a) == true)[4], transitive ((a == b AND b == c) implies (a == c)) and symmetric ((a == b) == (b == a)) properties, because Eq is the trait for total equality, which includes these properties as a requirement, unlike PartialEq. An emergent logic error is one like HashMap’s documentation describes, where safely managed interior mutability through types like Cell or RefCell allows implementations of Eq to violate the basic rule of equality producing the same results for the same pair of input values - because they defer their Eq implementation to the values they contain, which can change between invocations.
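
                                                                                              That emergent failure mode is easy to demonstrate safely. A sketch (Sneaky is a made-up type): mutating a key’s interior state after insertion strands the entry in the wrong bucket, so the map misbehaves - yet nothing here is undefined behaviour:

```rust
use std::cell::Cell;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// A key whose hash depends on interior-mutable state: a logic error,
// but a "safe" one.
#[derive(PartialEq, Eq)]
struct Sneaky(Cell<u32>);

impl Hash for Sneaky {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.get().hash(state);
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert(Sneaky(Cell::new(1)), "value");

    // Mutate the stored key in place through a shared reference.
    for key in map.keys() {
        key.0.set(2);
    }

    // The entry still sits in the bucket chosen by hash(1), but no longer
    // compares equal to a probe key of 1: the map is broken, yet memory-safe.
    assert_eq!(map.get(&Sneaky(Cell::new(1))), None);
}
```
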

                                                                                              [4] Rust’s builtin floating point types are defined to always be IEEE 754 compliant, and therefore do not implement Eq, because IEEE 754 equality is not reflexive (a == a is false if a is NaN). Since Ord requires Eq, this has the effect that floating point values can’t be used as keys in BTreeMap, or as values in BTreeSet; not that you should really want to do so.
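
                                                                                              That non-reflexivity is directly observable in safe code:

```rust
fn main() {
    let a = f64::NAN;
    // NaN compares unequal to everything, including itself, so == on
    // floats is not reflexive and f64 implements PartialEq but not Eq.
    assert!(a != a);
    // Likewise ordering is only partial: NaN orders against nothing.
    assert_eq!(a.partial_cmp(&a), None);
}
```
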

                                                                                              1. 1

                                                                                                It is the second option - none of those listed effects are considered undefined behaviour in Rust.

                                                                                            1. 1

                                                                                              Java is much older than Rust and carries a lot of backward-compatibility weight. I am surprised it is not much slower.

                                                                                              1. 2

                                                                                                C++ is much older than Java and I’d be surprised if it wasn’t in the same performance ballpark as Rust for optionals. Old age is no excuse for bad design.

                                                                                                1. 5

                                                                                                  Are you sure it is bad design? Perhaps Java’s design just makes different trade-offs for performance (like consistency or lower mental model overhead or… I’m sure there could be many).

                                                                                                  1. 1

                                                                                                    I can confidently say that it isn’t for consistency or mental overhead when compared to C++’s generic implementation. Otherwise this mess wouldn’t exist: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/function/package-summary.html nor the need to fall back on external libs if you want more arguments, like https://projectreactor.io/docs/extra/3.2.0.M3/api/reactor/function/Consumer8.html

                                                                                                    1. 1

                                                                                                      I can confidently disagree, having seen the C++ standard library implementations of gcc, llvm, msvc, and a few others.

                                                                                                      1. 1

                                                                                                        why would the stdlib implementation matter? what matters is that as a C++ dev I can just do optional<int>, optional<float>, optional<std::string>, optional<whatever_type>, while in Java I have to remember that I must use OptionalInt, OptionalLong, OptionalDouble, and Optional<T> for everything else.

                                                                                                        Anecdotally, I find libc++’s implementation (https://github.com/llvm/llvm-project/blob/main/libcxx/include/optional) fairly readable, outside of the necessary __uglification of headers - and it performs various optimizations that Java’s does not seem to do. If I read https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/util/Optional.java correctly, Java’s Optional does not even work with value types, since it relies on the value being nullable? how is that remotely serious

                                                                                                        1. 1

                                                                                                          What matters is that in Rust I can do Option<NonZeroU32> and have a type with sizeof 4. C++ can’t do that - how is that remotely serious.

                                                                                                          I hope this will help you see your replies in this thread in a different way :)

                                                                                                          1. 1

                                                                                                            ? Of course it can, boost::optional did this for e.g. storing optional<T&> in a pointer for years. Sure, it’s a bit of work to specialize std::optional for your NonZero type but there’s nothing technically impossible

                                                                                                            1. 1

                                                                                                              Not really. Sure, you could maybe hand-code specializations for a few selected types? Maybe? It’s not a trivial template. But even then this only handles some explicit, short list of types. In Rust this is automatic for any type with ‘spare’ bits and works recursively (https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=004c4172a53ba7ab00fc7afb5e3adceb).
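
                                                                                                              The niche optimization being described can be checked directly with size_of:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // NonZeroU32 can never hold 0, so Option<NonZeroU32> reuses the
    // all-zero bit pattern to represent None: no extra tag needed.
    assert_eq!(size_of::<Option<NonZeroU32>>(), 4);
    // A plain u32 uses every bit pattern, so Option<u32> needs a tag.
    assert_eq!(size_of::<Option<u32>>(), 8);
}
```
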

                                                                                                              Also, since you complained two posts earlier about Java needing external libs, don’t use boost as an excuse for the C++ standard lib ;)

                                                                                                          2. 1

                                                                                                            If you were implementing your own generic libraries instead of just consuming them, then your view might be different.

                                                                                                            For the language designers, both viewpoints are important.

                                                                                                            1. 1

                                                                                                              Having had to do this in Java, C#, and (admittedly mostly) C++, I really prefer the C++ way, especially nowadays with concepts.

                                                                                                              1. 1

                                                                                                                Concepts help, but they are quite recent. Also, C++ has objectively quite a few more footguns than the other languages, despite individual preferences.

                                                                                                    2. 3

                                                                                                      Java has a huge boat-anchor, the JVM. It’s stuck with a bytecode instruction set, binary format, and runtime model designed in the mid-1990s before the language had even seen serious use. There have been minor changes, like clarifying the memory concurrency model, but AFAIK no serious breaking changes.

                                                                                                      This makes some improvements, like stack-based objects, either impossible, or achievable only with heroic effort by the JIT and only in limited circumstances. (See also: the performance limitations of Java generics due to the requirement of type-erasure.)

                                                                                                  1. 2

                                                                                                    I wonder if you could diff two ASTs easily by converting them to a gron-like representation: a list of (path, node) pairs. Unlike text, there are no parens/braces to mess up. And unlike(?) a tree, it would be easy(?) to tell when a subtree moved, because it looks just like a chunk of lines that moved.
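
                                                                                                    The flattening pass itself is simple. A sketch, where Tree is a toy stand-in for a real AST - each leaf becomes one diffable line keyed by its path:

```rust
// Flatten a tree into gron-like "path = value;" lines, which an
// ordinary line-based diff tool can then compare.
enum Tree {
    Leaf(String),
    Node(Vec<(String, Tree)>),
}

fn flatten(prefix: &str, t: &Tree, out: &mut Vec<String>) {
    match t {
        Tree::Leaf(v) => out.push(format!("{prefix} = {v};")),
        Tree::Node(children) => {
            for (key, child) in children {
                flatten(&format!("{prefix}.{key}"), child, out);
            }
        }
    }
}

fn main() {
    let tree = Tree::Node(vec![(
        "commit".into(),
        Tree::Node(vec![("author".into(), Tree::Leaf("\"Tom Hudson\"".into()))]),
    )]);
    let mut lines = Vec::new();
    flatten("json", &tree, &mut lines);
    assert_eq!(lines, vec!["json.commit.author = \"Tom Hudson\";"]);
}
```
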

                                                                                                    1. 4

                                                                                                      If you mean what I think you mean, it wouldn’t work well. For example, say you take the example from the gron page:

                                                                                                      json[0].commit.author = {};
                                                                                                      json[0].commit.author.date = "2016-07-02T10:51:21Z";
                                                                                                      json[0].commit.author.email = "mail@tomnomnom.com";
                                                                                                      json[0].commit.author.name = "Tom Hudson";
                                                                                                      

                                                                                                      and wrap the commit in a… commitGroup or something:

                                                                                                      json[0].commitGroup[0].commit.author = {};
                                                                                                      json[0].commitGroup[0].commit.author.date = "2016-07-02T10:51:21Z";
                                                                                                      json[0].commitGroup[0].commit.author.email = "mail@tomnomnom.com";
                                                                                                      json[0].commitGroup[0].commit.author.name = "Tom Hudson";
                                                                                                      

                                                                                                      Then the tree diff should be a single edit: “insert commitGroup as a parent of commit”, but the gron-diff shows that every line changed.

                                                                                                      My understanding is that computing tree diffs is, unfortunately, just flat-out computationally difficult. Cubic time in the size of the tree or something.

                                                                                                      1. 3
                                                                                                        1. 2

                                                                                                          Nope! I’m glad that exists, hopefully structured diffs are the future!

                                                                                                          You’ll notice that the first listed known problem is “performance”. I remember talking to a Github employee at a conference years ago about structured diffs, who said that was a big obstacle and they had ideas for clever heuristics to make it faster. I wonder if that ever went anywhere (for all I know it could be related to difftastic).

                                                                                                          (As an aside, I really love when a project points out both their strengths and weaknesses… you know it has some weak points, so it’s a question of whether they’re going to tell you or you have to find out the hard way.)

                                                                                                    1. 3

                                                                                                      Very nice introduction to criterion. One thing worth mentioning is that you don’t have to create a fake binary to avoid profiling criterion logic. You can use --profile-time to run an existing benchmark for some time without involving the analysis done by criterion.
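
                                                                                                      Assuming a criterion-based bench target (my_bench is a placeholder name), that looks something like:

```shell
# Run the benchmark for ~10 seconds, skipping criterion's own statistical
# analysis, so an external profiler attached to the process sees mostly
# your code rather than criterion's bookkeeping.
cargo bench --bench my_bench -- --profile-time 10
```
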

                                                                                                      1. 7

                                                                                                        Has anyone submitted anything in the past? What’s the most comfortable framework for making games in Lisp/Scheme/etc.? I ask as a curious person looking for a hygienic framework for creating games. My issue is finding something that’s easy to cross-compile for other platforms (e.g. I use Linux, but would want to compile for Windows friends). I have been mulling over Lua/love2d for a while but never committed, since I would prefer a Lisp environment.

                                                                                                        1. 13

                                                                                                          You might like https://fennel-lang.org/

                                                                                                          1. 15

                                                                                                            I have participated in every one since 2018 using Fennel. The easiest way to get started IMO is using TIC-80 which has built-in support for Fennel and lets you publish your games to play in the browser, so no downloads are required: https://tic80.com

                                                                                                            Targeting love2d is another popular choice tho. It’s a lot more complex than TIC-80 but it also offers a lot more flexibility: https://love2d.org It’s harder to get love2d games to run in the browser but still really easy to cross-compile from Linux to Windows.

                                                                                                            (disclaimer: Fennel lead developer here)

                                                                                                            1. 4

                                                                                                              There’s even a starter kit for Love2D and Fennel

                                                                                                              https://gitlab.com/alexjgriffith/min-love2d-fennel

                                                                                                          2. 7

                                                                                                            Hello, jam organizer here. There have been lots of submissions from a variety of different Lisps in the past. You can check out the past jams on our wiki. Fennel seems to be a popular choice each jam, but I can’t say much more than that, as I am a diehard Common Lisp user :) For Common Lisp, there are a lot of partial solutions, as most people seem to be focused on building extremely general game engines rather than focused engines for a particular game/genre. This is expected in a way, as Common Lisp is extremely performant, and there is no reason we need to be confined to C++ etc. with Unity/Unreal/Godot…it just isn’t there yet though. I have been working towards that for a good 10 years now…but nothing worth announcing yet…anyway have fun regardless of which dialect you decide on!

                                                                                                            1. 1

                                                                                                              Thank you! I appreciate the insight. I have indeed looked at Godot in the past but felt it was too much for me to take on. I definitely want to try something out in Common Lisp, so the wiki you linked looks like a great place for me to start doing some research. Thank you for organizing this event!

                                                                                                            2. 5

                                                                                                              For 2D in Common Lisp, popular choices are Sketch and trivial-gamekit (apologies for the self-plug). I, as the author of the latter framework, use travis/appveyor/github actions CI solutions to make builds for different platforms. If there is any interest, I can probably arrange a github action for building gamekit-based stuff for Linux and Windows. But otherwise, there are examples of how to do that with travis and appveyor.

                                                                                                              1. 4

                                                                                                                Thank you! I don’t mind the self-plug, in fact I’m more inclined to check out your project for responding to my question! I will totally look into your library to see how it works now!

                                                                                                              2. 5

                                                                                                                Perhaps you might like CHICKEN; it is straightforward to compile static binaries, and cross-compiling to Windows (mingw) from Linux is also supported. There’s hypergiant, a game development toolkit, and on IRC you’ll find a few people interested in game writing too. In CHICKEN 4 we used to have a love2d-inspired framework called doodle, which shouldn’t be too hard to port to CHICKEN 5.

                                                                                                                1. 2

                                                                                                                  I have been doing a bit of practice with Chicken and trying to get familiar with that environment. The cross-compiling is very appealing to me and I was actively looking at hypergiant. I might have to give that another shot!

                                                                                                              1. 18

                                                                                                                The whole damn thing.

                                                                                                                Instead of having this Frankenstein’s monster of different OSs and different programming languages and browsers that are OSs and OSs that are browsers, just have one thing.

                                                                                                                There is one language. There is one modular OS written in this language. You can hot-fix the code. Bits and pieces are stripped out for lower powered machines. Someone who knows security has designed this thing to be secure.

                                                                                                                The same code can run on your local machine, or on someone else’s machine. A website is just a document on someone else’s machine. It can run scripts on their machine or yours. Except on your machine they can’t run unless you let them and they can’t do I/O unless you let them.

                                                                                                                There is one email protocol. Email addresses can’t be spoofed. If someone doesn’t like getting an email from you, they can charge you a dollar for it.

                                                                                                                There is one IM protocol. It’s used by computers including cellphones.

                                                                                                                There is one teleconferencing protocol.

                                                                                                                There is one document format. Plain text with simple markup for formatting, alignment, links and images. It looks a lot like Markdown, probably.

                                                                                                                Every GUI program is a CLI program underneath and can be scripted.

                                                                                                                (Some of this was inspired by legends of what LISP can do.)

                                                                                                                1. 24

                                                                                                                  Goodness, no - are you INSANE? Technological monocultures are one of the greatest non-ecological threats to the human race!

                                                                                                                  1. 1

                                                                                                                    I need some elaboration here. Why would it be a threat to have everyone use the same OS and the same programming language and the same communications protocols?

                                                                                                                    1. 6

                                                                                                                      One vulnerability to rule them all.

                                                                                                                      1. 2

                                                                                                                        Pithy as that sounds, it is not convincing for me.

                                                                                                                        Having many different systems and languages, each with its own set of vulnerabilities, just to get security by obscurity does not sound like a good idea.

                                                                                                                        I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.

                                                                                                                        1. 4

                                                                                                                          It is not security through obscurity, it is security through diversity, which is a very different thing. Security through obscurity says that you may have vulnerabilities but you’ve tried to hide them so an attacker can’t exploit them because they don’t know about them. This works as well as your secrecy mechanism. It is generally considered bad because information disclosure vulnerabilities are the hardest to fix and they are the root of your security in a system that depends on obscurity.

                                                                                                                          Security through diversity, in contrast, says that you may have vulnerabilities but they won’t affect your entire fleet. You can build reliable systems on top of this. For example, the Verisign-run DNS roots use a mixture of FreeBSD and Linux and a mixture of bind, unbound, and their own in-house DNS server. If you find a Linux vulnerability, you can take out half of the machines, but the other half will still work (just slower). Similarly, a FreeBSD vulnerability can take out half of them. A bind or unbound vulnerability will take out a third of them. A bind vulnerability that depends on something OS-specific will take out about a sixth.

                                                                                                                          This is really important when it comes to self-propagating malware. Back in the XP days, there were several worms that would compromise every Windows machine on the local network. I recall doing a fresh install of Windows XP and connecting it to the university network to install Windows update: it was compromised before it was able to download the fix for the vulnerability that the worm was exploiting. If we’d only had XP machines on the network, getting out of that would have been very difficult. Because we had a load of Linux machines and Macs, we were able to download the latest roll-up fix for Windows, burn it to a CD, redo the install, and then do an offline update.

                                                                                                                          Looking at the growing Linux / Docker monoculture today, I wonder how much damage a motivated individual with a Linux remote arbitrary-code execution vulnerability could do.

                                                                                                                          1. 1

                                                                                                                            Sure, but is this an intentional strategy? Did we set out to have Windows and Mac and Linux in order that we could prevent viruses from spreading? It’s an accidental observation and not a really compelling one.

                                                                                                                            I’ve pointed out my thinking in this part of the thread https://lobste.rs/s/sdum3p/if_you_could_rewrite_anything_from#c_ennbfs

                                                                                                                            In short, there must be more principled ways of securing our computers than hoping multiple green field implementations of the same application have different sets of bugs.

                                                                                                                          2. 3

                                                                                                                            A few examples come to mind though: Heartbleed (which affected anyone using OpenSSL) and Spectre (anyone using the x86 platform). Also, Microsoft Windows for years had plenty of critical exploits because it had well over 90% of the desktop market.

                                                                                                                            You might also want to look up the impending doom of bananas, because over 90% of bananas sold today are genetic clones (it’s basically one plant) and there’s a fungus threatening to kill the banana market. A monoculture is a bad idea.

                                                                                                                            1. 1

                                                                                                                              Yes, for humans (and other living things) the idea of immunity through obscurity (to coin a phrase) is evolutionarily advantageous. Our varied responses to COVID are one such immediate example. It does have the drawback that it makes it harder to develop therapies, since we see population specificity in responses.

                                                                                                                              I don’t buy that we need to employ the same idea in an engineered system. It’s a convenient back-ported bullet list advantage of having a chaotic mess of OSes and programming languages, but it certainly wasn’t intentional.

                                                                                                                              I’d rather have an engineered, intentional robustness to the systems we build.

                                                                                                                              1. 4

                                                                                                                                To go in a slightly different direction: building codes. The farther north you go, the steeper roofs tend to get. In Sweden, one needs a steep roof to shed snow buildup, but where I live (South Florida, just north of Cuba) building such a roof would be a waste of resources because we don’t have snow; we just need a shallow angle to shed rain water. Conversely, we don’t need codes to deal with earthquakes, nor does California need to deal with hurricanes. Yet it would be so much simpler to have a single building code in the US. I’m sure there are plenty of people who would love to force such a thing everywhere, if only to make their lives easier (or for rent-seeking purposes).

                                                                                                                                1. 2

                                                                                                                                  We have different houses for different environments, and we have different programs for different use cases. This does not mean we need different programming languages.

                                                                                                                            2. 2

                                                                                                                              I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.

                                                                                                                              In principle, yeah. But even the best security engineers are human and prone to fail.

                                                                                                                              If every deployment was the same version of the same software, then attackers could find an exploitable bug and exploit it across every single system.

                                                                                                                              Would you like to drive in a car where every single engine blows up, killing all inside the car? If all cars are the same, they’ll all explode. We’d eventually move back to horse and buggy. ;-) Having a variety of cars helps mitigate issues other cars have, while still having problems of their own.

                                                                                                                              1. 1

                                                                                                                                In this heterogeneous system we have more bugs (assuming the same rate of bugs everywhere) and fewer reports (since there are fewer users per system) and a more drawn out deployment of fixes. I don’t think this is better.

                                                                                                                                1. 1

                                                                                                                                  Sure, you’d have more bugs. But the bugs would (hopefully) be in different, distinct places. One car might blow up, another might just blow a tire.

                                                                                                                                  From an attacker’s perspective, if everyone drives the same car and the attacker knows that the flaws from one car are reproducible with a 100% success rate, then the attacker doesn’t need to spend time/resources on other cars. The attacker can just rinse and repeat. All are vulnerable to the same bug. All can be exploited in the same manner reliably, time after time.

                                                                                                                                  1. 3

                                                                                                                                    To go by the car analogy, the bugs that would be uncovered by drivers rather than during the testing process would be rare ones, like, if I hit the gas pedal and brake at the same time it exposes a bug in the ECU that leads to total loss of power at any speed.

                                                                                                                                    I’d rather drive a car a million other drivers have been driving than drive a car that’s driven by 100 people. Because over a million drivers it’s much more likely someone hits the gas and brake at the same time and uncovers the bug which can then be fixed in one go.

                                                                                                                      2. 3
                                                                                                                        1. 1

                                                                                                                          Yes, that’s probably the LISP thing I was thinking of, thanks!

                                                                                                                        2. 2

                                                                                                                          I agree completely!

                                                                                                                          We would need to put some safety measures in place, and there would have to be processes defined for how you go about suggesting/approving/adding/changing designs (that anyone can be a part of), but otherwise, it would be a boon for the human race. In two generations, we would all be experts in our computers and systems would interoperate with everything!

                                                                                                                          There would be no need to learn new tools every X months. The UI would be familiar to everyone, and any improvements would be forced to go through human testing/trials before being accepted, since it would be used by everyone! There would be continual advancements in every area of life. Time would be spent on improving the existing experience/tool, instead of recreating or fixing things.

                                                                                                                          1. 2

                                                                                                                            I would also like to rewrite most stuff from the ground up. But monocultures aren’t good. Orthogonality in basic building blocks is very important. And picking the right abstractions to avoid footguns. Some ideas, not necessarily the best ones:

                                                                                                                            • proven correct microkernel written in rust (or similar borrow-checked language), something like L4
                                                                                                                            • capability based OS
                                                                                                                            • no TCP/HTTP monoculture in networks (SCTP? pubsub networks?)
                                                                                                                            • are our current processor architectures anywhere near sane? could safe concurrency be encouraged at a hardware level?
                                                                                                                            • fewer walled gardens and less centralisation
                                                                                                                            1. 2

                                                                                                                              proven correct microkernel written in rust (or similar borrow-checked language), something like L4

                                                                                                                              A solved problem. seL4, including support for capabilities.

                                                                                                                              1. 5

                                                                                                                                seL4 is proven correct by treating a lot of things as axioms and by presenting a programmer model that punts all of the bits that are difficult to get correct to application developers, making it almost impossible to write correct code on top of. It’s a fantastic demonstration of the state of modern proof tools; it’s a terrible example of a microkernel.

                                                                                                                                1. 2

                                                                                                                                  FUD unless proven otherwise.

                                                                                                                                  Counter-examples exist; seL4 can definitely be used, as demonstrated by many successful uses.

                                                                                                                                  The seL4 foundation is getting a lot of high profile members.

                                                                                                                                  Furthermore, Genode, which is relatively easy to use, supports seL4 as a kernel.

                                                                                                                            2. 2

                                                                                                                              Someone wrote a detailed vision of rebuilding everything from scratch, if you’re interested. 1

                                                                                                                                1. 11

                                                                                                                                  I never understood this thing.

                                                                                                                                  1. 7

                                                                                                                                    I think that is deliberate.

                                                                                                                                2. 1

                                                                                                                                  And one leader to rule them all. No, thanks.

                                                                                                                                  1. 4

                                                                                                                                    Well, I was thinking of something even worse - design by committee, like for electrical stuff, but your idea sounds better.

                                                                                                                                  2. 1

                                                                                                                                    We already have this, dozens of them. All you need to do is point guns at everybody and make them use your favourite. What a terrible idea.

                                                                                                                                  1. 8

                                                                                                                                    This is more like 60 lines, not 30. Here is a TCP proxy in 30 lines (in Go): https://gist.github.com/tumdum/afc52f43c257e655f0071701740b8f60 ;)

                                                                                                                                    1. 2

                                                                                                                                      Exact line count doesn’t really matter here. Both solutions work roughly the same in principle, this Rust one uses about 30 lines to configure clap. Arguably both could use better error handling, but that is not the point here.

                                                                                                                                      1. 2

                                                                                                                                        So what does matter here?

                                                                                                                                        1. 3

                                                                                                                                          The relative simplicity of it and the ease of extending this for local / debugging purposes.

                                                                                                                                          1. 3

                                                                                                                              My point is that this post is imho pointless - it took me max 10 minutes to write an equivalent Go version. It also doesn’t really tell us anything that is not written in the tokio docs. I’m pretty sure you can just as easily write equivalent code in most modern languages.

                                                                                                                                            1. 1

                                                                                                                                Simple / easy is relative. See also https://xkcd.com/1053/. Having experience makes things look easy. But if you, for instance, know http a little but have no idea how the underlying bits and pieces work, this can be very enlightening.

                                                                                                                                              1. 2

                                                                                                                                                Contrary to what you are trying to suggest I didn’t make fun of the author in anything I wrote. But I do try to show that this is low-value content that shouldn’t have been posted to this site - right now this is barely anything more than a hello world.

                                                                                                                                  I mean, if you want to read this kind of beginner post regularly, be my guest - I’m pretty sure there are many other sites where beginners show their discoveries. I just think that lobsters is not a place for this kind of content.

                                                                                                                                      2. 2

                                                                                                                                        To be fair, 30 of the rust lines are an absurdly verbose parser for 2 CLI arguments, unrelated to the TCP proxy. But you don’t have to hate rust to like golang – it’s not zero-sum.

                                                                                                                                        1. 2

                                                                                                                                          Who is hating rust? I like it almost as much as I like Go. In fact I consider myself lucky since I work in a pure rust codebase :)