Threads for pekkavaa

  1. 2

    A great little piece of history - but my number one question remained unanswered: what is a rage diamond, and where did that term come from?

    1. 1

      I’ve not come across it before (or after).

      I take it to mean something wonderful created from feeling deep frustration.

      1. 1

        Sigh. I clicked on the link really thinking it was an FPGA reimplementation of some 90s graphics card called (ATI) “Rage Diamond.” Shows my biases!

    1. 3

      The flip side is something I’ve come to appreciate about ML systems: confidence values. For each prediction our system makes, it provides a confidence number: “This quarter note has a sharp next to it, and I’m 92.78 percent confident of that.”

      If he is referring to the normal output values of a machine learning model: Those are not probabilities! Of course there is a correlation, but those outputs are highly non-linear.

      To get probabilities you need to create a Bayesian machine learning model. For this there are TensorFlow Probability or BoTorch, but other methods exist as well. For example you can train an ensemble of models with different sample weights (Bayesian bootstrap) or turn on dropout layers during inference and sample multiple predictions.
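
      A minimal sketch of that dropout-at-inference idea, assuming a tf.keras model that already contains Dropout layers (the helper name is made up):

      ```python
      import numpy as np

      def mc_dropout_predict(model, x, n_samples=50):
          """Monte Carlo dropout: call a tf.keras model repeatedly with its
          Dropout layers left active and summarise the prediction spread."""
          preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
          # mean = point prediction, std = a rough per-output uncertainty estimate
          return preds.mean(axis=0), preds.std(axis=0)
      ```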

      1. 2

        Well to be fair he did call them “confidence values” and not probabilities.

        1. 2

          Those values have nothing to do with confidence, uncertainty or other probabilistic concepts. They are basically the median of the distribution of predictions. The spread of this distribution tells you how confident the model is in that prediction. The median and the spread of a distribution are two different and independent metrics.

          PS: I really liked the article :)

          1. 1

            You’re absolutely correct that “confidence” is usually associated with the spread of the distribution. I’m wondering what would be the right word for values like that then. Scores?

      1. 1

        Wow I haven’t seen that explicit surface formulation of metaballs before. Looks great! I don’t think it generalizes to more than two balls though.

        1. 1

          Depending on your fill method, you could probably just superimpose all the pairs on each other. Not awake enough yet to think that through.

          1. 1

            Yeah that would probably look fine but it doesn’t give the same result as summing all the potential fields and then thresholding. But since it’s a visual effect it all boils down to what looks good :) You can find some old metaball debates on pouet.net btw.
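
            To make the difference concrete, here is a tiny numpy sketch of the sum-then-threshold version (a generic 1/d² falloff, not the explicit two-ball surface from the article):

            ```python
            import numpy as np

            def metaball_mask(balls, width, height, threshold=1.0):
                """balls: list of (cx, cy, radius). Returns a boolean image that is
                True wherever the summed potential field exceeds the threshold."""
                ys, xs = np.mgrid[0:height, 0:width]
                field = np.zeros((height, width))
                for cx, cy, r in balls:
                    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
                    field += r * r / np.maximum(d2, 1e-9)  # classic 1/d^2 falloff
                return field >= threshold
            ```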

        1. 1

          This trick seems to bear a resemblance to the visitor pattern but I’m having trouble following its code.

          Anyone care to explain why the area function is a lambda and not a template parameterized member function? And what is the purpose of the Concept class and why isn’t it merged with Model?

          1. 4

            While “formally” a release post (which I normally avoid pushing), there is something of a ‘try asking lobste.rs’ embedded into it – being the filter bubble I have access to that best fits the “ask”.

            There is a much lengthier dive into this that I’ll spare people for now, but in the networking section, the lines between app models (mobile, desktop, embedded, web) are blurred and the distance from ‘develop and share’ shortened down to just before the 80s home computer ‘boot into REPL’ era meets Heroku ‘push and deploy’.

            I have some idea where this is heading, but much less of its pros, cons and fixable flaws. Normally, to test such things, I tend to pick a few applications, run it against the model and see what comes out on the other side. Here I am having some trouble coming up with ideas that aren’t tainted ;)

            What I am specifically looking for here are ideas for applications that are (annoying, awful, expensive) to develop in the current web2.x “networked application substrate” sense that I can prototype and compare against – one example on the drawing board is the kind of collaborative game planning being done with (https://xivsim.com/). Any others?

            1. 7

              I own computers A and B, am physically at computer A, videochatting with you, and want to show you a resource ‘located’ on computer B.

              1. 2

                That one is practically possible right now (quality issues around sound and unintuitive tooling aside) - it’ll show off improvements to local discovery though (“of all the devices I trust, which ones are available right now”) so a good target for 0.6.3 regardless. A nice expansion would be something like Twitter Spaces (moderated many to many) with media/blob data sharing through.

              2. 5

                One application idea: collaborative Jupyter-like notebooks with client-side hardware accelerated graphics. Think 3D plots that each client can rotate and zoom but also update via REPL for all participants.

                1. 2

                  Did you look at the pipeworld[1] project? Ignoring the visual flair / ZUI shenanigans - the basic design and processing model is quite close. The scattered notes for that project have a few collaboration options, though I mostly considered the unsexy one of composing a set of cells into a shared / collaborative one that acts as a ‘remote desktop’.

                  The more delicious approach would be to have cells that do synch against other connected users, but the sharing / synchronisation semantics for cells (say I run an external program and you want to use the contents of its clipboard as part of a shader applied to the webcam feed from another user being sent to a third) are “non-trivial”. Still, it would show off the parts of synchronising local state to a server-side identity-tied store and drive the use of a trusted third party as rendezvous for sharing within a group, so I’ll probably implement it…

                  [1] https://arcan-fe.com/2021/04/12/introducing-pipeworld/

                  1. 3

                    I did see pipeworld earlier: very impressive :) The collaboration part could be just a text editor as “remote desktop” as you said. Not very impressive but something that’s not easy to implement in other environments.

                  2. 1

                    The ‘hardware accelerated graphics’ part is there. The ‘collaborative’ part is an application level concern, not a substrate level concern; probably crdt or similar.
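
                    For what a CRDT buys you here, a minimal sketch of the simplest flavour (a grow-only counter; the class is purely illustrative):

                    ```python
                    class GCounter:
                        """Grow-only counter CRDT: replicas merge by per-node maximum."""
                        def __init__(self, node_id):
                            self.node_id = node_id
                            self.counts = {}                  # node_id -> count

                        def increment(self, n=1):
                            self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

                        def merge(self, other):
                            # commutative, associative, idempotent -> sync order doesn't matter
                            for node, c in other.counts.items():
                                self.counts[node] = max(self.counts.get(node, 0), c)

                        def value(self):
                            return sum(self.counts.values())
                    ```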

                  3. 3

                    The raid planner example calls to mind a “serverless” take on competitive and hot seat games: chess, go, scorched earth, anything fairly simple and turn-based with perfect information where a player could pull their “seat” onto other devices and back.

                    That by itself is basically a parlor trick, but I’m also thinking that the slower pace afforded by moving players around/across devices (closer to play by email or BBS door games) could allow for forking and recombining game states: not only to participate from my own device, but to disconnect, play out a few turns against AI or a different human opponent on yet another device, then roll back and reconnect (perhaps from a different network somehow?) or even reconvene on the one original device. Useful for learning or cheating at chess, and so on. In all it’d be a simple and decently demoable example of sharing, validating, merging, and expiring state. A server could do it juggling client websockets and tracking state trees centrally but that sounds kind of horrible.

                    1. 2

                      Your suggestion is probably a more approachable middle ground than the XIV example (though I might just do that for other reasons). The scaffolding for the case you presented was added way back, I think around 2013 when we experimented with ‘rewind’ in emulators (I think the retroarch/libretro guys have some videos around that on youtube still) and then used it as a way to shorten the distance between latency and perceived latency with selective rollback.

                      You did lead me on to thinking of shared state synchronisation. It will be a bit uncomfortable to implement on the directory server end in its current state, but having multiple concurrent sessions on the same keypair is possible, so synchronising persistent state store updates would be a logical continuation.

                      I think the demo video will be something visually easy to grasp like desktop wallpaper and colorscheme updates being changed across all local devices at once. That would still match your chess case internally - at load time all nodes get the same state, the alternate hypotheses play out, AI controlled nodes playing forward from their current state, and on a global ‘push’ revert to the next shared default and spin off again.

                      The next release (and current thing being coded and recorded) sort of fits that – the way clients download and run an appl is structured so that the server can live ‘push’ a new update, and the local arcan runner can switch to that instantaneously (related are the articles on crash resilience). The same action can act as this sync point. The demo would be something like live-coding something visual and seeing it project across different form factors, with crashes collected and fed back to the development machine.

                      I am still in a denial phase (then comes anger, …), but it very much looks like the server end will grow a nodejs-like (albeit Lua) structure for expressing sharing semantics, state management and so on.

                  1. 10

                    This is incredibly cool. I liked it initially, and then I saw that its deployment model is Smalltalk-style images and liked it even more. For the folks not excited enough by the summary:

                    • MIT licensed.
                    • Supports modern JavaScript features.
                    • Runs on the desktop, generating a VM image (compiled bytecode and object state) that is then moved to the MCU.
                    • Really, really tiny.
                    1. 1

                      Agreed, I also love it! I’m tempted to try and run it on the Nintendo 64 for some homebrew hacking.

                      These engines are so cute with their teeny little feature lists: “mJS lets you write for-loops and switch statements for example” :)

                      1. 5

                        Quite surprisingly, I was able to build it on Morello with no source-code modifications. It looks as if they keep pointers to large malloc-backed regions and use 16-bit integers as offsets into these, which means that it works fine with a strict-provenance architecture.

                      2. 1

                        This puts me in mind of General Magic, which had a system where software agents could travel from one host to another and maintain their state. So you could send a program to a server that could talk to the server, do something on your behalf, and return back to you with results.

                        Microvium’s hard limit of 64KB of heap space is kind of, uh, limiting, but it would be reasonable for small tasks. And it might make it safer for running untrusted code, since there’s no way for the VM to read or write outside its heap (right?) or cause trouble by growing without bound.

                        1. 2

                          A 64 KiB heap is fine for the intended use case: microcontrollers with tens to hundreds of KiBs of SRAM. The way that it manages the memory is quite interesting. The embedding environment needs to provide a malloc call that will allocate chunks. It then uses these to build a linear address space that it bump-allocates into. When you turn a 16-bit integer (actually a 15-bit integer with a 1-bit tag) into a pointer, you have to search the list of chunks to find the one with the corresponding offset. When you run the GC, it does a semi-space compacting GC pass, allocating new chunks and copying live objects into them.

                          This works very well with CHERI because each chunk is represented by a valid pointer and so you get a clean derivation chain. It also works very nicely with the temporal safety work: JavaScript objects are live until a GC happens, and so if you’ve taken a pointer to them and passed it into C code then this is stable. When a GC happens, all of the existing chunks will be deallocated and the revoker will prevent dangling pointers from C code accessing them.
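
                          A toy model of that chunk-and-offset scheme as described (illustrative Python only, not Microvium’s actual structures or names):

                          ```python
                          class ToyHeap:
                              """Linear 16-bit address space built out of malloc'd chunks."""
                              CHUNK = 256

                              def __init__(self):
                                  self.chunks = []   # list of (base_offset, bytearray)
                                  self.bump = 0      # next free offset in the linear space

                              def alloc(self, n):
                                  assert n <= self.CHUNK
                                  # grow the linear space with a fresh chunk when the current one is full
                                  if not self.chunks or self.bump + n > self.chunks[-1][0] + self.CHUNK:
                                      self.chunks.append((len(self.chunks) * self.CHUNK, bytearray(self.CHUNK)))
                                      self.bump = self.chunks[-1][0]
                                  off, self.bump = self.bump, self.bump + n
                                  return off         # this 16-bit offset is what the VM stores as a "pointer"

                              def deref(self, off):
                                  # turning an offset back into memory means searching the chunk list
                                  for base, buf in self.chunks:
                                      if base <= off < base + self.CHUNK:
                                          return buf, off - base
                                  raise ValueError("no chunk contains this offset")
                          ```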

                          1. 1

                            Sounds clever, but also kinda slow to deref a “pointer”; though I recognize speed isn’t a goal here.

                            1. 2

                              If you’re on an 8-bit or 16-bit device (or more accurately, a device with a 16-bit address space) then pointer access is native and should be fast.

                              For devices with a larger address space (e.g. 32- or 64-bit) but where you know that the VM memory will always be in a 64kB sub-range of the total address space, you can configure a base address and then the 16-bit references are relative to that base address. In my tests when compiling to ARM, this adds just one or two instructions to each pointer access (1 instruction in full ARM and 2 instructions in Thumb) so it’s still pretty efficient. This case works for 32-bit MCUs where SRAM is mapped to a sub-range of the 32-bit range. It also works in desktop and server-class devices if you implement MVM_MALLOC and MVM_FREE to allocate within a subrange (this is what I do for testing and debugging).

                              In the worst case, if you’re working with a large address space with no guarantee about the address sub-range, then pointer encoding/decoding is expensive and involves traversing the linked list of allocated buckets. But one consolation is that a GC cycle essentially packs all the active buckets into one.

                              More info here: https://coder-mike.com/blog/2022/05/20/microvium-updated-memory-model/
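
                              The “known 64 kB sub-range” case above is essentially just base-plus-offset arithmetic, roughly like this (made-up base constant, shown as plain arithmetic rather than the real C macros):

                              ```python
                              VM_BASE = 0x2000_0000            # hypothetical start of an MCU's SRAM window

                              def decode_ref(ref16):
                                  return VM_BASE + ref16       # the cheap case: a single add per access

                              def encode_ref(addr):
                                  off = addr - VM_BASE
                                  assert 0 <= off < 0x10000, "address outside the configured 64 kB window"
                                  return off
                              ```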

                      1. 1

                        the system just starts babbling randomly—but it has no sense that its random babbling is random babbling

                        If you can estimate the likelihood of a phrase I believe you could make a decent guess at how much sense a question makes. Wouldn’t make the system “more conscious” but maybe it could fool some skeptics, heh.
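
                        A rough sketch of that idea, using an off-the-shelf GPT-2 through the transformers library, with the model’s mean per-token loss standing in for phrase likelihood (picking a “nonsense” threshold would be the hard part):

                        ```python
                        import torch
                        from transformers import GPT2LMHeadModel, GPT2TokenizerFast

                        tok = GPT2TokenizerFast.from_pretrained("gpt2")
                        model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

                        def avg_log_likelihood(text):
                            """Mean per-token log-probability the model assigns to the phrase."""
                            ids = tok(text, return_tensors="pt").input_ids
                            with torch.no_grad():
                                loss = model(ids, labels=ids).loss   # mean cross-entropy per token
                            return -loss.item()

                        # Lower scores flag phrases the model itself finds unlikely.
                        print(avg_log_likelihood("How many eyes does a giraffe have?"))
                        print(avg_log_likelihood("How many eyes does my foot have?"))
                        ```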

                        1. 5

                          It’s not about what makes sense, but about being conscious of what it’s doing. One great innovation in these chatbots is that they’re not designed as dialog systems: they generate chatlogs. You have to stop generation early, parse the output, and present it as a chat. If you don’t stop it early, it’ll start hallucinating the human parts of the chat as well. It’ll write a whole conversation that just meanders on and on. It will make sense, maybe even more than an average human-human chat, but it’s not anything a conscious language user would ever do.
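
                          In toy form, the “generate a chatlog, then cut it early” step looks roughly like this (contrived example text, not any particular model’s output):

                          ```python
                          # Raw continuation from the model: it keeps writing both sides of the chat.
                          raw_continuation = (
                              "AI: The capital of Finland is Helsinki.\n"
                              "Human: And its population?\n"
                              "AI: About 650,000 people.\n"
                          )

                          def first_ai_turn(text, stop_marker="Human:"):
                              """Cut the continuation at the stop marker and keep only the first AI reply."""
                              reply = text.split(stop_marker, 1)[0]
                              return reply.removeprefix("AI:").strip()

                          print(first_ai_turn(raw_continuation))  # -> "The capital of Finland is Helsinki."
                          ```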

                          1. 1

                            I understood the babbling mentioned in the article to refer specifically to the nonsense answers to nonsense questions, not to the way the model goes off the rails eventually. Also, apparently it’s enough to just change the prompt to make the model deal with funny questions.

                            1. 3

                              I must confess I didn’t remember where your quote came from, and linked it to the Google employee’s interview. Instead of rereading the article to contextualize, it appears I just started babbling without being conscious of it.

                        1. 1

                          This seems very interesting after a quick skim but I didn’t understand what exactly you are caching. Shader programs?

                          1. 2

                            No, it’s caching everything. The entire point is minimal recomputation. It is so thorough that a normal interactive program doesn’t even have a render loop. It simply rewinds and reruns non-looping code instead.

                            You only need a render loop to do continuous animation, and even then, the purpose of the loop is to just schedule a rewind using requestAnimationFrame, and make sure animations are keyed off a global time stamp.

                          1. 10

                            Having spent a lot of time with Python and Typescript, I agree with the safety argument – even in completely untyped Python-django-megamodel-spaghetti code I never saw a homonym method getting called accidentally with a compatible signature.
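
                            For the record, the kind of accident the safety argument worries about would look something like this contrived sketch (hypothetical classes):

                            ```python
                            class TempFile:
                                def close(self):          # releases an OS resource
                                    print("file closed")

                            class SupportTicket:
                                def close(self):          # marks a ticket as resolved
                                    print("ticket resolved")

                            def shutdown(resource):
                                resource.close()          # anything with a close() method "quacks" here

                            shutdown(TempFile())          # intended
                            shutdown(SupportTicket())     # also accepted, silently, with a different meaning
                            ```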

                            That said, working mostly in Rust now, I think I prefer traits to duck typing for a different reason, which is code organization. I like being able to look at foo.rs and see “here’s the data, here’s the inherent impl with methods that will only be called on this /specific/ type, and here are the trait impls that can be called generically”. Sometimes in, say, Python, looking at a big class it’s unclear (unless it’s __dunder__ ofc) whether a method is part of an existing interface, or a bespoke helper for this specific class. It’s not a huge deal, but it helps me get my bearings.

                            I’m not familiar with Go so I don’t know if Go’s flavor of duck-typing suffers the same organizational troubles.

                            1. 6

                              Duck-typing also makes it more difficult to do automatic refactoring because you can’t tell who implements what, as you mentioned in the Python example. I still love it though.

                              1. 5

                                You’re mentioning automatic refactoring, but this is just as applicable to manual refactoring. Modifying any API anywhere you want and then fixing the errors that the compiler outputs is very relaxing, as opposed to staring at an integration test that found breakage that happened somewhere in a piece of code not covered with enough unit tests.

                              2. 2

                                Traits are duck typing. Especially if you use trait objects.

                                1. 11

                                  I don’t think so. You have to impl SomeSpecificTrait for YourType, and to be compatible with a param: impl SomeSpecificTrait it must refer to the exact same trait.

                                  1. 4

                                    If it impls WalksLike<T> and QuacksLike<T> where T: Duck it must be a duck.

                                    1. 4

                                      I think it would be better to say that traits quack like duck typing. 😜 They mostly achieve the same things. Especially in Rust where it’s easy to use the derive macro to get certain trait methods, and yet more traits have blanket implementations.

                                1. 5

                                  As always, a nice and thorough report. Good work @aphyr!

                                  Is anybody using Redpanda instead of Kafka at work?

                                  1. 5

                                    No, but I’ve definitely been tempted, at least for low-risk / dev environments, but haven’t pulled the trigger. I’m waiting to give it a good spin first. I’ve been looking for something like this post though.

                                    FWIW I’ve been reluctant mostly because having new folks working with Kafka is tricky enough without additional “Kafka compatibility” issues popping up. Similar to the reasoning that had us decide to do kstreams in Java instead of Scala or Clojure. I’d file it as “nice but will break the innovation budget”.

                                    1. 3

                                      This may be the first time I hear the term: “innovation budget”. Astonishingly fitting.

                                      1. 6

                                        Sidenote if this topic is interesting to you: it was a principle in Rust’s development under the name of the “strangeness budget”.

                                        https://steveklabnik.com/writing/the-language-strangeness-budget

                                        1. 6

                                          I believe it was popularized by this blog post from 2015: Choose Boring Technology. There it was conceptualized as innovation “tokens” like in friendlysock’s comment earlier.

                                          1. 3

                                            The concept of an “innovation budget” is something that is super important for all engineers, especially folks at a startup.

                                            Every team has some finite number of innovation tokens, which may or may not replenish over time. If you spend all of your innovation tokens on, say, building your webserver in Rust then you really can’t afford to do more exotic things like microservices or blue/green deploys or whatever else.

                                            Similarly, the model goes, if you pick a really boring webstack and ops stuff (say, modern-day Rails on Heroku), then you can spend those innovation tokens on more interesting problems like integer programming for logistics or something. The business version of this would be deciding to support multiple vendors or go with a new payment processor instead of just trusting Stripe or Braintree or whoever.

                                            My extension to the model is that, in a startup, you can exchange innovation tokens for efficiency tokens or cost coupons…if you build your stack in bespoke Julia it’s a lot harder to hire (and a lot harder to backfill departures!), whereas if you go with python or javascript then you can easily find interns or bodyshops to round out the workforce.

                                            One of the great open questions to me about the model is: how does an org deliberately replenish innovation tokens?

                                            1. 1

                                              One of the great open questions to me about the model is: how does an org deliberately replenish innovation tokens

                                              Apologies for necroing an old thread here but this just popped into my head: you can recycle an innovation token by using a given piece of originally-interesting technology until, over time, it becomes boring. This works if you’re huge or if a large community arises around it.

                                              Rails is a prime example of this. It used to be an interesting choice and now it’s offered up as an example of a boring technology.

                                        2. 2

                                          Is anybody using Redpanda instead of Kafka at work?

                                          I’d run a PoC at work (and even found a minor bug). In my use-case the performance was exactly the same. In the end this did not move forward as I switched teams inside the organization and the old team went ahead with a managed service.

                                        1. 1

                                          Note that in GCC you can use a language extension and define nested functions. Should work at least in the qsort use-case.

                                          1. 38

                                            A company “bought” Audacity and added spyware. The same company also did it to MuseScore.

                                            You know, it really was and still is a stretch to describe basic, opt-in telemetry as spyware just because they made the unfortunate decision to use Google Analytics as a backend.

                                            1. 19

                                              Also, from what I heard they are doing decent work, actually paying maintainers to work on the software. You know, the exact thing that OP is complaining about not happening.

                                              1. 5

                                                please explain how Google Analytics isn’t spyware? it is software that monitors user behavior and reports it to a 3rd party, all typically without user consent.

                                                1. 20

                                                  Audacity/GA would be spyware if it was monitoring usage of other things the user was doing on their computer. Using the term to describe the app recording usage of itself is hyperbole.

                                                  1. 5

                                                    If my business was audio engineering, having a tool that started reporting on my usage of it would be problematic. I would immediately start looking for alternatives. Why should I have to look through the code to find out exactly what it’s logging? File names? My use of licensed plugins? The inference that the lead singer needs pitch correction on every single track, or that we brought in a pro “backup” singer who is actually 85% of the lead on the final mix?

                                                    When I am editing my family’s amateur instrumental work, I think I can reasonably feel equally peeved at having my sound editor report back to base.

                                                    Calling it spyware is not necessarily hyperbole.

                                                    1. 5

                                                      Fortunately the scenario you described doesn’t exist since the telemetry is opt-in.

                                                  2. 19

                                                    all typically without user consent

                                                    Except here it is opt-in, as pekkavaa said.

                                                    1. 2

                                                      thanks, i missed that.

                                                      I was curious what kind of consent was involved, and honestly it’s better than I expected. Based on the issue linked in the OP it seems Audacity now displays a dialog asking users to help “improve audacity” by allowing them to collect “anonymous” usage data. They don’t seem to mention that it also reports this to Google.

                                                    2. 8

                                                      Counting how many people clicked the big red button and bought products, or how many users have a 4K monitor, or how fast the page loads technically involves monitoring, but it’s not really the same as what you would typically imagine when you hear the word “spying”, is it?

                                                      It’s rather silly to equate performance metrics, usability studies and marketing analytics to a secret agent carefully reading all your messages.

                                                    1. 3

                                                      A useful guide. I remember being confused about the difference between glTex* and glTexture* functions. It’s a pretty awful way to tack on a new convenience API but I can see how it came to be.

                                                      1. 1

                                                        Yeah. I’m fairly surprised there even is a younger API. I just started DuckDuckGo’ing random OpenGL extensions my command line returned but I did not recognize, and it led me to learning about DSA. I’ve been doing mostly fully bindless drawing by hardcoding data into my shaders, but this is a very pleasant middle ground.

                                                      1. 3

                                                        Bliss might not be the first word to come to mind when seeing the final implementation but hey it really seems to work. I’d be tempted to replace constexpr auto name() { return "α"; } with a macro param(name, "α") :)

                                                        Also I thought this was really funny:

                                                        The technique is basically a band-aid until we get true reflection: it counts the fields by checking whether the type T is constructible by N arguments of a magic type convertible to anything, and then uses destructuring to generate tuples of the matching count.

                                                        Anyway, very useful stuff. Saving this for future reference.

                                                        1. 1

                                                          hi, op here :)

                                                          Bliss might not be the first word to come to mind when seeing the final implementation

                                                          Something which was maybe not that clear in the article is that the “bliss” part is only for the algorithm authors, not for the person who writes the binding code (which is indeed extremely hairy until actual reflection becomes standard instead of whatever hack people can find within C++).

                                                          From my point of view, how hairy writing the binding code really does not matter, because the ratio of algorithms-to-APIs is so heavily skewed towards the algorithms it’s not even funny.

                                                          For instance, KVR, a database of audio plug-ins, references roughly 18000 plug-ins, most of them based on the VST2 and 3 APIs. So no matter how hard writing the binding code is, it’s still worth doing because one person writing that binding code will potentially make the life of thousands of developers, who’ll be able to just write simple structs, much better.

                                                          I’d be tempted to replace constexpr auto name() { return “α”; } with a macro param(name, “α”) :)

                                                          yep, basically having a way to state clearly “this is some constant meta-class information” would be neat.

                                                          1. 1

                                                            Hard agree. Maintaining native-to-whatever glue code really sucks and no C++ magic can be uglier than debugging a “happens only on IL2CPP build on iOS” bug because you forgot to add a struct member somewhere. Luckily I wasn’t responsible for tracking down that one :)

                                                            Thanks for sharing this technique.

                                                        1. 2

                                                          Absolutely phenomenal! Gurney’s Color and Light is one of my favorite books and these paintings (they are paintings, right?) captured some aspects of his style despite being formless blobs when examined carefully.

                                                          1. 1

                                                            Funny how well that LOCO-I predictor seems to work despite its crudeness. Also it’s pretty cool how Rice coding degrades into unary encoding when k=0.
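
                                                            A quick sketch of that degenerate case (standard Rice convention, with k the number of remainder bits):

                                                            ```python
                                                            def rice_encode(n, k):
                                                                """Quotient in unary (q ones + a terminating zero), then the k low bits of n."""
                                                                q = n >> k
                                                                remainder = format(n & ((1 << k) - 1), f"0{k}b") if k else ""
                                                                return "1" * q + "0" + remainder

                                                            print(rice_encode(9, 2))  # '11001': quotient 2 in unary, remainder '01'
                                                            print(rice_encode(9, 0))  # '1111111110': pure unary
                                                            ```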

                                                            1. 1

                                                              Congrats on successfully training a GAN on your own dataset! I know from experience how frustrating it can be. WGAN seems like a good choice exactly for the reason you mentioned: lower loss actually corresponds to training progress.

                                                              One thing that I was missing was a concrete example of this statement at the beginning of the article:

                                                              Generative machine learning algorithms have recently made it possible to generate synthesizer patches in the style of genres or artists automatically.

                                                              How can you generate a patch if the only thing you have is a mixed and mastered recording?

                                                              1. 1

                                                                One possibility is to start with a human-developed synth patch “in the style of” a particular genre, artist or song. There are a lot out there, either commercial or free, for any synth.

                                                                As the article complains, it sucks that most synths’ patch formats are undocumented binary blobs. Of the dozen or so hardware synths I’ve used, I can only remember one with official documentation of the format, and another one or two with reverse-engineered partial docs if you hunt for them. (I don’t use software synths so I can’t comment on those…)

                                                              1. 9

                                                                For the past two years I’ve been using a Raspberry Pi Zero W (yeah, the $10 one) as my daily driver. I do occasionally get to do extraneous stuff on my uni-issued laptop, but for the most part I use the Pi, with Alpine Linux (which I found to be much more lightweight than Arch or Debian-based distros) and i3wm.

                                                                Suffice to say that most things simply don’t work on the Pi. Browsing is limited to Netsurf with JS disabled, meaning I have to use my phone to comment here (HN being the only site I’ve seen so far that works perfectly without JS). I practically live in xterm, because most GUI stuff is too slow to be usable. I do occasionally use some lightweight GTK or SDL applications though.

                                                                By far the most annoying thing about using the Pi Zero, though, is that any software that relies on GPU acceleration is completely off-limits. That means that pretty much all Vulkan/OpenGL/GLU software is unusable, which includes all Qt applications (I haven’t found a way to disable OpenGL in Qt; if you know a way, by all means let me know). Even simple things like music players and pixel art editors that lack any kind of animations or videos or any other godforsaken garbage that would benefit from hardware acceleration in the least use Vulkan or OpenGL for some reason. Why? Just why?

                                                                1. 5

                                                                  Regarding hardware acceleration in 2D applications: using a dedicated chip for rendering is more power efficient, so it should be used if possible. For example, in a video player the image needs to be interpolated to match the screen resolution and a colorspace conversion might be involved as well, both of which are an ideal match for a GPU.
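
                                                                  As a concrete example of the colorspace part, a YUV-to-RGB conversion (approximate BT.601 coefficients) is just a per-pixel matrix multiply, exactly the kind of embarrassingly parallel work a GPU handles far more efficiently than a CPU:

                                                                  ```python
                                                                  import numpy as np

                                                                  # Approximate BT.601 YUV -> RGB matrix (U and V centered around zero).
                                                                  M = np.array([[1.0,  0.0,    1.402],
                                                                                [1.0, -0.344, -0.714],
                                                                                [1.0,  1.772,  0.0  ]])

                                                                  def yuv_to_rgb(yuv):
                                                                      """yuv: float array of shape (h, w, 3). Returns RGB clipped to [0, 1]."""
                                                                      return np.clip(yuv @ M.T, 0.0, 1.0)
                                                                  ```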

                                                                  1. 4

                                                                    I’ve wondered this myself. My home computer is an 8-core AMD FX with a 1GB GT720, and anything that uses OpenGL for compositing feels sluggish. I have to configure Firefox to not use the GPU for compositing, else I run up to the 1GB limit after opening just a few web pages, and then everything starts to feel slow.

                                                                    If it’s not for compositing then I don’t know what else Qt would be using OpenGL for. 20 years ago I ran Qt apps on machines that didn’t even have mesa installed, so what changed?

                                                                    1. 2

                                                                      As someone who uses a ThinkPad T41 (Pentium M, 1GB RAM), I recommend Opera 12. It starts slower than Dillo or Netsurf, but it handles relatively modern JavaScript. Many sites like Lobsters and old Reddit work perfectly. The T41 is probably a bit more powerful than the Pi Zero W, but it might work for you too.

                                                                    1. 16

                                                                      Bullshit around Python at a previous gig is what finally convinced me to just shove things into containers and say fuck it, embrace Docker.

                                                                      Thanks Python. :|

                                                                      1. 2

                                                                        I work in ML and the fact that the whole ecosystem is founded on Python spaghetti has made me seriously reconsider working in this field long-term. Network architectures are the perfect use case for static (if not fully dependent) types. I’m at least hoping Julia will disrupt things.

                                                                        1. 1

                                                                          Julia is still dynamically typed and in my very limited experience the type system doesn’t help as much with catching bugs as one would expect.

                                                                          Maybe I was just doing it wrong and you’re supposed to run a separate linter to catch trivial mistakes. But you can do the same thing in Python with mypy and type annotations so I’m not sure that counts.