Threads for faassen

    1. 2

      This is wonderful. So, the question now is, how do I implement efficient rank/select for bitvectors?

      1. 3

        The method I’ve used for rank involves precomputing rank sums: you keep a running total of 1 bits for each large block (every 64kb, for example), and then a second array of counts within smaller blocks (maybe every 4kb). You store each count packed into only as many bits as it needs, and the block sizes can be tuned for time/space tradeoffs.

        To find the rank of a bit offset, you’d start with the preceding 64k block’s total, then add each 4k block’s count until you’re close to the offset, then popcount the remaining individual 64-bit words up to the exact bit offset. This can be made reasonably efficient. It’s also effectively constant time, since there’s one random access for the large block total, followed by adding a bounded number of adjacent smaller block counts within the final large block, then a bounded number of word popcounts within the final small block.
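
        A minimal sketch of that two-level layout in Rust (block sizes and field names are illustrative, and a real implementation packs the counts far more tightly than plain Vecs):

        ```rust
        const SUPERBLOCK_BITS: usize = 65_536; // "large block": 64K bits
        const BLOCK_BITS: usize = 4_096;       // "small block": 4K bits
        const WORDS_PER_BLOCK: usize = BLOCK_BITS / 64;

        struct RankBitVec {
            words: Vec<u64>,
            superblock_ranks: Vec<u64>, // total 1 bits before each superblock
            block_ranks: Vec<u16>,      // 1 bits before each block, within its superblock
        }

        impl RankBitVec {
            fn new(words: Vec<u64>) -> Self {
                let mut superblock_ranks = Vec::new();
                let mut block_ranks = Vec::new();
                let mut total: u64 = 0;
                let mut in_super: u32 = 0; // 1 bits seen inside the current superblock
                for (i, w) in words.iter().enumerate() {
                    let bit = i * 64;
                    if bit % SUPERBLOCK_BITS == 0 {
                        superblock_ranks.push(total);
                        in_super = 0;
                    }
                    if bit % BLOCK_BITS == 0 {
                        block_ranks.push(in_super as u16); // fits: at most 64K - 4K
                    }
                    total += u64::from(w.count_ones());
                    in_super += w.count_ones();
                }
                RankBitVec { words, superblock_ranks, block_ranks }
            }

            /// Number of 1 bits in positions [0, pos); pos must be within the bitvector.
            fn rank1(&self, pos: usize) -> u64 {
                // one lookup in each directory...
                let mut r = self.superblock_ranks[pos / SUPERBLOCK_BITS]
                    + u64::from(self.block_ranks[pos / BLOCK_BITS]);
                // ...then popcount the whole words inside the final small block...
                let first_word = (pos / BLOCK_BITS) * WORDS_PER_BLOCK;
                let last_word = pos / 64;
                for w in &self.words[first_word..last_word] {
                    r += u64::from(w.count_ones());
                }
                // ...then the remaining bits of the final word
                let rem = pos % 64;
                if rem > 0 {
                    r += u64::from((self.words[last_word] & ((1u64 << rem) - 1)).count_ones());
                }
                r
            }
        }

        fn main() {
            let bv = RankBitVec::new(vec![0b1011, u64::MAX]);
            assert_eq!(bv.rank1(4), 3);    // bits 0, 1 and 3 are set
            assert_eq!(bv.rank1(128), 67); // 3 + all 64 bits of the second word
        }
        ```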

        For select, you can binary search on that rank result, or you can build a similar index using precomputed tables to jump to roughly the right area, then scan and count bits to find the exact offset. Either rank or select can be implemented in terms of the other.
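
        And a sketch of select as a binary search over rank - using a naive word-by-word rank here so it stays self-contained, but the same search works over the indexed rank above:

        ```rust
        /// Naive rank: number of 1 bits in positions [0, pos).
        fn rank1(words: &[u64], pos: usize) -> usize {
            words
                .iter()
                .enumerate()
                .map(|(i, w)| {
                    let start = i * 64;
                    if start + 64 <= pos {
                        w.count_ones() as usize
                    } else if start < pos {
                        (*w & ((1u64 << (pos - start)) - 1)).count_ones() as usize
                    } else {
                        0
                    }
                })
                .sum()
        }

        /// Position of the (k+1)-th 1 bit (k is 0-based), if there are that many.
        fn select1(words: &[u64], k: usize) -> Option<usize> {
            let bits = words.len() * 64;
            if rank1(words, bits) <= k {
                return None;
            }
            // Rank is monotone, so binary search for the smallest pos
            // with rank1(pos + 1) > k: that pos holds the (k+1)-th 1 bit.
            let (mut lo, mut hi) = (0usize, bits - 1);
            while lo < hi {
                let mid = lo + (hi - lo) / 2;
                if rank1(words, mid + 1) > k {
                    hi = mid;
                } else {
                    lo = mid + 1;
                }
            }
            Some(lo)
        }

        fn main() {
            let words = [0b1010_0110u64]; // 1 bits at positions 1, 2, 5, 7
            assert_eq!(select1(&words, 0), Some(1));
            assert_eq!(select1(&words, 3), Some(7));
            assert_eq!(select1(&words, 4), None);
        }
        ```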

        My implementation was for a LOUDS, a kind of succinct trie structure whose backing bitvector has an almost perfectly even balance of 1s and 0s by design. There are different approaches with better tradeoffs if you know the ratio of 0s to 1s is likely to be very skewed, though.

        I found Erik Demaine’s open courseware lectures about succinct data structures particularly helpful when I was first learning about those data structures – I think it was sessions 17 and 18 here, but it may have been different recordings from the same course. He describes the block sample sum and popcounting approach.

        For more depth, Gonzalo Navarro’s Compact Data Structures describes a couple other approaches for rank & select, along with several other succinct structures. If you’re at all interested in this stuff I’d absolutely recommend checking it out, it’s a challenging read at times but it’s fascinating.

        1. 2

          Good question that I myself don’t know the answer to, but I’m sure there are papers you can dig up (the keywords “succinct rank select” should get you started). You can also look at existing implementations such as the one in the vers crate.

          1. 3

            This might be helpful, I just ran into it (in an unfortunately phrased orange site comment…)

            https://stackoverflow.com/questions/72580828/what-is-a-succinct-rank-data-structure-how-does-it-work

        2. 5

          One particularly interesting aspect about rank/select is how it is amenable to SIMD. E.g. here’s a Rust implementation: https://github.com/sujayakar/rsdict

          1. 3

            The vers crate I mention also has SIMD support, available behind a feature.

          2. 8

            You talk about indexing XML files, but I’ve also seen this technique used for JSON. The Hackage hw-json-simd package draws on the paper Semi-indexing Semi-structured Data in Tiny Space, which you might find interesting.

            1. 1

              Thank you for that reference, I will read that with interest!

            2. 4

              I’m confused and I feel bad as I’m going to be disagreeing a lot.

              This post seems to imply that signals are somehow not declarative, but a performance-oriented non-declarative thing. And that plain React code is easier to write and maintain (as it’s somehow more declarative) until you have unique performance characteristics.

              My experience has been very different. I found signals, in Solid.js, just as declarative as React code, and easier to reason about. Their declarative nature is the very thing that allows the code built on them to be performant - the framework has more information to reason with, so it can trigger precise updates. The marketing surrounding signals emphasizes the performance characteristics a lot, but I myself find it far more important that they’re easier to reason about.

              As I wrote a few years ago:

              Raw performance of a framework isn’t everything: developer usability matters. The fastest framework is no framework at all. But we use frameworks for a reason, and the reason is not performance, it’s developer usability.

              That post probably says it better than my comment here, but I’ll continue.

              I’ve used React + MobX for a long time and this brings some of the same affordances as signals do. When I used hooks-based React I was frustrated with how difficult it was to reason about - how many finicky rules there are around the various hooks, and how easy it is to get things wrong with state management requirements that are dead simple to handle with MobX.

              It contrasts signals with “functional component-based frameworks like React”, but Solid.js is signals based and it looks like a component-based framework like React. Perhaps “functional” is doing the heavy lifting here but if a framework makes components look very similar to React even though it’s not re-rendering components for each update, how is “functional” useful?

              Signals are not immutable of course, but immutability is not what makes something declarative. Declarative means describing the what over the how, and signals do that at least as well as hooks - I’d argue better, since their automatic dependency tracking takes care of a “how” that is only about performance optimization, not functional requirements, so I don’t need to worry about it anymore.

              I don’t think there’s anything inherent in signals that makes an API built on them less declarative, and you can easily argue it makes it more declarative.

              For developers weighing their options, the trade-off is pretty clear: stick with React if your data aligns well with your UI and you value a straightforward, mature ecosystem.

              I think the trade-offs are different, so perhaps it’s less clear than that.

              Stick with React if you want to use the mature ecosystem and the vast number of libraries available. I’d recommend using a nice declarative observable state library with it - I’ve seen nothing that parallels MobX in ease of use and power, though things might’ve changed in the last couple of years, when I’ve been paying less attention.

              But if you want straightforward declarative code and still React-like components, use a signals-based system like Solid.js. In my experience it’s easier to use than React as well as faster and simpler inside, while extremely similar to it on the surface. The one obstacle I found (and it’s a huge one!) is the lack of mature libraries in its ecosystem.

              1. 7

                I haven’t read it all yet but the section on TDD stood out to me.

                • Ousterhout mischaracterizes TDD in his book and hasn’t actually used it, yet criticizes it

                • UB says it’s the only way to go

                I think a bunch of other positions are possible.

                Ousterhout doesn’t see how TDD can encourage a decoupled design; I strongly believe that it does. The idea that you only mock existing interfaces is common, but I disagree - you design the API your code needs, rather than mocking an existing API. That’s where the decoupling comes from. Ousterhout seems to believe TDD leads to worse design; I don’t agree.

                On the other hand, I agree with Ousterhout that writing tests just after is fine, and that writing unit tests before is unnecessary to get most of the benefits of testing. I also agree that it often works well to go in “chunks” rather than the smallest step possible.

                I practice pure TDD sometimes, and I often take small steps, but I am happy to write the test just after or just before, whatever I feel like. I’m also happy to sometimes take a larger step if I feel confident.

                I love TDD as a teaching tool. When you practice TDD you learn how to take small steps, which is very valuable. You learn how to really think about your API before you build it, as you have to write a test. You learn how to design for testability, which can mean more decoupled code. You learn to refactor early and often. So TDD is a good way to build up a lot of skills and practices that are useful to have in your arsenal. Many developers don’t know how to take small steps, or how to think about API design by writing example code, or how to think about coupling and testability, and TDD teaches all of that.

                But after doing TDD for a long time in many contexts, I often have a pretty good idea by now how to write decoupled code without needing TDD to spell it out for me. I know when in certain areas I feel confident enough in my own knowledge (and the type checker) to take a larger leap, just as I know how to switch gears and do smaller iterations when that makes sense. And I’m pretty sure a developer can learn these things without practicing TDD (Ousterhout did); I just think that practicing TDD, especially in a pair or group context (in a safe space like a dojo), is a very effective way to learn to get better at programming.

                1. 2

                  When I was learning how to code I copied code from a Basic programming manual with only a bare understanding of what was going on, or even of English. The “syntax” in “Syntax Error”, the “function” in “Illegal Function Call”, or all the words in “Type Mismatch” were errors Basic would throw at me. I had no idea what these words meant; now I do. It didn’t help that I was also a kid. I wrote about this in my blog.

                  The internet provided a very different new path to learning how to program not available to me until much later.

                  In this era of LLMs there are going to be new paths to learning how to program too. Are they better or worse paths? I think the jury is still thoroughly out on that one; this topic is only being explored right now. I know a lot of people have a strong opinion that it’s definitely bad but who knows what learning trajectories are being figured out by intelligent, motivated people right now?

                  1. 6

                    I wrote this a while ago about print debugging and how it’s fine. See the lobsters discussion. So I have some observations.

                    It’s interesting how the author got along fine with print debugging for many years, had reasons why that was good and fine, then finally ended up in a context where he needed a debugger, and has now changed his mind and considers debugger use the new baseline.

                    Hinted at but not discussed in detail is what it is in code that necessitates the use of a debugger. Code size isn’t it. “Lots of callbacks” apparently make it important, as do “complex systems”.

                    Is it only complexity, or also how that complexity is organized, i.e. architecture? There’s a relationship between software architecture and how easy it is to debug with print statements, and how easy it is to test. I think we should strive for islands in our codebase, as big as possible, that are easy to test automatically and thus probably also debuggable with simple tools like print. But that’s not always feasible - parts of your code may have a structure that makes more advanced debugging tools helpful, just like in some cases it’s actually easier to test by hand than to write an automated test.

                    That said, it’s worthwhile to see how far you can push a codebase so it’s easier to test and debug again. See for instance the recent discussions about sans-IO.

                    1. 2

                      I’ve found structured-log-based print debugging with a file:lineno pair to be super helpful. It gives you something to click that takes you to where the log message came from, which works wonders for rapid development. Basically, invest in it if you find yourself constantly slowed down by grepping the codebase for the log message - especially if you end up with a lot of logs that have a generic message format like “Failed to do $thing”.

                      This can be annoying to implement properly, depending on the language’s ability to introspect the call stack, but when I’ve gotten it working it’s been worth it for me. You do need to worry about things like nesting the log emit behind helper functions: you need a way of telling the underlying file:lineno logic to “go up by N from the callsite”.
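
                      As an illustration of the “go up from the callsite” part - this is just a sketch of the idea in Rust, not my actual setup - #[track_caller] lets a helper report its caller’s file:line, so wrapping the log emit in helper functions doesn’t lose the location:

                      ```rust
                      use std::panic::Location;

                      #[track_caller]
                      fn log_debug(msg: &str) {
                          let loc = Location::caller();
                          // file:line gives the editor/terminal something clickable to jump to
                          eprintln!("{}:{} {}", loc.file(), loc.line(), msg);
                      }

                      #[track_caller]
                      fn log_failure(thing: &str) {
                          // because this helper is also #[track_caller], the location reported
                          // below is the caller of log_failure, not this line
                          log_debug(&format!("Failed to do {thing}"));
                      }

                      fn main() {
                          log_debug("starting up"); // prints this file and this line
                          log_failure("the thing"); // prints this line, not a line inside the helpers
                      }
                      ```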

                      I’ll still drop in breakpoints for the surgical work, but when debugging a system holistically, you may need to see a stream of logs as things are happening in real time to get the full picture.

                      1. 1

                        Lately I hardly work on code that isn’t parallelized in one form or another. Reasoning about how that code runs based strictly on print debugging is difficult.

                        Sometimes it’s also difficult to do in a debugger, but having a stack and a memory state to look at is better than anything that could be done in print in my opinion.

                      2. 2

                        This is a fascinating, well-researched article comparing list comprehensions in many languages, and it’s very relevant to users of more mainstream languages too. I was aware of list comprehensions coming from Haskell when they were introduced into Python, but had no experience with the deeper history. Their connection to set-builder notation in mathematics, which I’ve encountered quite a bit lately, is also something I hadn’t fully appreciated.

                        1. 1

                          Having learned some basic discrete maths a little before Haskell, I found comprehension syntax very natural and was not especially impressed to find it in Python. I’ve since been disappointed in other languages which don’t have similar features.

                          In maths, it’s completely unremarkable to write an infinite set as a comprehension, and to consider that as a reified object. Haskell approximates this for countable collections with its pervasive laziness, and Python can approximate it with generators. If you want to work with uncountable constructions, maybe something like Lean or Agda might give you the tools, but it’s not going to be anything as tidy as x ∈ ℝ.
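
                          For comparison, Rust iterators are lazy too, so you can reify something like an infinite comprehension and only materialize what you ask for (a small illustrative sketch):

                          ```rust
                          fn main() {
                              // (0u64..) is unbounded: an "infinite set" reified as a value.
                              // Nothing is computed until something consumes it.
                              let evens_not_div_3 = (0u64..).map(|n| 2 * n).filter(|x| x % 3 != 0);
                              let first_five: Vec<u64> = evens_not_div_3.take(5).collect();
                              assert_eq!(first_five, vec![2, 4, 8, 10, 14]);
                          }
                          ```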

                            1. 5

                              I mean, sure, it’s a GUI demo, not an actual app. To be fair, maybe I’m just easily impressed and there are far better graphical frameworks out there (since, as one comment says, this only seems to be focusing on the graphical part of GUI).

                              I agree with all the critical comments here that were posted after mine, so maybe I just saw something shiny and spoke too quickly. There’s definitely plenty to criticise, and the line-count comparison is not 1-for-1, since all the existing GUI libraries are complex for many of the reasons being discussed here: accessibility, input handling, font processing, OS integration, etc. - each of which brings in a whole sublist of difficult problems by itself, e.g. rendering complex scripts for font processing.

                              1. 6

                                Audio apps tend to make nice, shortish demos because they are made out of repeated parts. Each individual control might be some effort, but then you just loop it over a bunch of channels, and there’s your whole UI. This helps in usage too: yeah, maybe it is 64 controls, but the user only needs to learn 4 of them, and the rest is duplicated over 16 different inputs. (And actually, their demo only has 5 channels, whereas MIDI is usually 16 channels… it can be hard to make space to show them all on screen! Though drawing them is usually trivial; “thousands of frames per second” doesn’t really matter because most things rarely change.)

                                I’m tempted to try to make something similar to this with my gui lib, and I think it would come in with a favorable line count too. One thing to realize is that a lot of the controls here are actually just custom-drawn standard interactions - when they say “There is no cheating here, no hidden stylesheets or textures, no pre-built library of fancy buttons and effects.” I’m again meh on it… you’re providing a pre-built library! So it isn’t “cheating” to, for example, subclass the generic slider control to provide a new graphical skin while reusing the rest of the interaction. That’s probably exactly what you’d expect a generic gui library to provide!

                                For example, when I look at those knobs, I don’t see a from-scratch custom widget. I see a standard slider control - in html terms, <input type="range"> - with a custom drawn skin. It’d take a lot of css and -prefixed-properties to do that with html, but you can do it, and with a desktop gui library it is reasonable to expect to be able to do some kind of class Knob : Slider { override void paint() {} } thing. Ideally, you’d use two-dimensional input, so dragging it up is the same as dragging it right, but that’s not even a necessary part of it; the knobs will feel perfectly natural as a reskinned horizontal slider. You don’t really move the mouse in a circle to turn the knob, you just grab it and move left or right. What, exactly, “grab it” means might be another thing to customize, since in the standard slider you need to grab the handle, which is a proportional fraction of the track range, and here you probably want to click anywhere, but again, you can probably customize this in a subclass.

                                Thinking about this in my library: overriding paint is easy, overriding the mouse grab is a bit trickier - you’d need to override the default mouse down method and redirect that to the thumb… or the recompute child layout method, to do custom proportions of the thumb size, then also override the child paint since now the thumb is what actually draws. It’d take… probably 30 lines of code to make it work, maybe more if I’m forgetting something, which is more than I’d prefer to admit. But even if you did have to override the mouse down/up/move handlers - which, in addition to the paint method, feels like doing basically the whole control again - you still come out ahead, since the default keyboard, context menu, external event api, and ui automation / accessibility hooks all remain intact. So it feels like redoing it on the surface, but you still benefit from the rest of that deep sea iceberg stuff.

                                Anyway, that’s knobs, but volume sliders are an even more obvious skin on a standard slider control, vertical this time. There are reskinned radio buttons on the right and bottom, and reskinned toggle buttons above. Animated display things for the waveforms, but since there’s no user interaction, that’s just class WaveDisplay : GenericWidget { override paint() {} }, among the simplest possible cases.

                                Then there’s that piano control. I wrote one of those for my midi program too; it looks fairly big, but really, you pay attention to getting one octave right, then loop that to show the whole thing. The hover/click effects are a two layer bounding box (white keys, then black keys plus one on the z-order), so not complicated. I did that from scratch in my application. I didn’t use those legit beautiful gradients (I do love gradients…) but again, changing fillRectangle to drawGradient isn’t rocket science. One tricky thing might be on the click and drag: suppose the user clicks down, drags off your window, then releases the mouse button? Does your application miss the mouse up event and think it is still dragged? I’d note the default behavior from the OS on this is different on Windows and X11. (… and I’m not even sure about Mac, I should try it.) So it is a subtle thing that isn’t exactly hard to get right, but a little thing you might not even know to test unless you’ve done it before.

                                Lastly, that ADSR dialog is cute, but we see the same knob-slider control again, and the graph is a static display… almost. You can click those dots and move them. My lib has a class called MouseActivatedWidget for that kind of thing, but using it is a pain in the button, so you’d probably do it from scratch there. You can punt the ui automation concerns to the sliders; the basic control still works if the user is typing in values too (something you ought to let them do!) so I look at the graph display as a progressive enhancement.

                                So again, I agree it is a pretty demo, but scratching beneath the surface leaves a bit to be desired. What they reject as “cheating” is what I see as the key reason to use a gui library - there’s a lot more to these interactions than just drawing, and being able to hook your custom drawing into that existing backend infrastructure is a big benefit, not a cheat.

                            2. 4

                              DRY is one of those principles that can be taken way too far (namely, it often leads to building towers of abstractions which only loosely fit, just to avoid (mostly) repeating yourself even once). A while ago, someone (I believe the following is the original, but I am not sure) advocated for WET (Write Everything Twice), less because writing a second instance gives you the opportunity to see what differences exist before refactoring out the common bits (though, to your point in the article, that may indeed be a benefit), and more because it offers an opportunity to avoid premature abstraction.

                              Upon reading it, I jokingly came up with MOIST (Mostly One Instance; Sometimes Two) with the idea that while there are plenty of times you probably have the same pattern which should be abstracted to a single instance as needed, you may sometimes need multiple copies of “the same” code which will actually differ slightly due to differing requirements in use (e.g., this might be preferable to a function taking a boolean argument). I even wrote a small shell script which very naïvely determines the moistness of your code (by checking that the total lines of code come to roughly 1.3 times the number of unique lines of code). :P

                              All the best,

                              -HG

                              1. 3

                                MOIST…

                                I’ve called this the ‘WET-DRY’ principle in the past, sharing many of the same concerns as OP. “WET-DRY” = “Write Everything Twice, (then) Don’t Repeat Yourself”.

                                It is much easier for me to extract the right abstraction if I let the repetition replicate a few times. I’m much less likely to extract the wrong abstraction.

                                1. 2

                                  I recall the notion of writing stuff multiple times before you try to abstract it floating around for many years. Of course whether that’s a good idea depends on context, but incremental insight is our great friend while doing software development for sure.

                                  A measure of how much “grease” there is in code sounds cool! You could then measure what the optimal level of codebase grease is, though I imagine the optimal amount of grease in applications is going to be much higher than in libraries, for instance.

                                2. 3

                                  I don’t think there is a marked shift in applications, but I think it’s fair to say there has been an increase in interest in these topics.

                                  Distributed networking is a movement as much of the past as it is of the present. It has deep roots: the web is distributed, email has become more centralized but certainly started out distributed, DNS is distributed, usenet news is, and the whole internet itself is designed along these lines. The social part of the internet was much more centered around open protocols than it is today.

                                  Then, about 20 years ago, there was a period where peer to peer applications were in vogue, but the impact on the way the internet works was relatively limited.

                                  Centralization has clear benefits to users: it’s more predictable, a global state is easier to think about, and running a centralized service can be so rewarding that its creators can really focus on usability, which also tends to be easier to implement in a centralized system. And it also has immense rewards in money and power for those controlling these systems. So we’ve seen a massive growth in centralization online over the last few decades.

                                  This goes into the major drawbacks of centralized systems, but those are more subtle and touch upon how we humans handle the exercise of power and the distribution of resources: they are political.

                                  So more recently there is an increasing interest, among small groups of mostly technically knowledgeable people, in internet systems that distribute and diffuse human power rather than centralize and concentrate it. That includes the fediverse (mastodon and such). There’s also the cryptocurrency world, and the “local-first” software movement also touches upon this.

                                  As others pointed out, I think most non-technically inclined people, and even most technically inclined people, live in a world where they cannot see this and it’s irrelevant to them. There are powerful forces keeping people on centralized platforms. It’s conceivable to see a political development where more people make conscious choices about the greater information environment, akin to the environmental movement, but that movement is not big at present. For that, the problems of centralized systems need to become bigger, and awareness needs to spread that these problems are in part due to centralization and that there are alternatives. And these alternatives need to keep getting better themselves.

                                  1. 13

                                    I may be missing something as I didn’t absorb all of this article, but I think what is described as “typed stack traces” is a common pattern in Rust, which also has an enum type. You can define an error enum that subsumes another error enum, and with a clever From conversion the ? operator will automatically coerce one error into the other. This means you can just use ? and have errors get wrapped the way you want. This is very nice, though I disagree with the article that stack traces are useless anyway - I miss them with Rust (though I know there are ways to get them back).
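
                                    A minimal sketch of that pattern (the type and variant names are made up for illustration):

                                    ```rust
                                    #[derive(Debug)]
                                    enum StorageError {
                                        NotFound,
                                        Corrupt(String),
                                    }

                                    #[derive(Debug)]
                                    enum AppError {
                                        // the lower-level error enum is subsumed by the higher-level one
                                        Storage(StorageError),
                                        InvalidInput(String),
                                    }

                                    // With this From impl, `?` converts a StorageError into an AppError automatically.
                                    impl From<StorageError> for AppError {
                                        fn from(e: StorageError) -> Self {
                                            AppError::Storage(e)
                                        }
                                    }

                                    fn load_record(id: u32) -> Result<String, StorageError> {
                                        if id == 0 {
                                            Err(StorageError::NotFound)
                                        } else {
                                            Ok(format!("record {id}"))
                                        }
                                    }

                                    fn handle_request(id: u32) -> Result<String, AppError> {
                                        // `?` both propagates and wraps: a StorageError comes out as AppError::Storage
                                        let record = load_record(id)?;
                                        Ok(record)
                                    }

                                    fn main() {
                                        println!("{:?}", handle_request(1)); // Ok("record 1")
                                        println!("{:?}", handle_request(0)); // Err(Storage(NotFound))
                                    }
                                    ```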

                                    I agree with the article that micro services are overused. I don’t think the primary cause of this has much to do with tracing errors. The second hypothesis, that micro services are more modular and considered to be easier to learn, is probably closer to the truth, but of course we have better and simpler ways to do modularity, as the article mentions.

                                    I wrote about that here. As I describe there, I think the prevalence of micro services is due to a combination of something that makes sense in some contexts at some scale getting elevated into a “best practice” that loses its context, and Conway’s law: we get things separated into services following the divisions of the organization structure itself, rather than what actually makes sense.

                                    1. 2

                                      Agreed, I think the meat the author wraps around the bones of the idea is kinda weird – not sure I’ve heard anyone say that stack traces are useless before. But the bones remain pretty good.

                                    2. 4

                                      This reminds me of the distinction between “mega frameworks” and “micro frameworks” that we used to talk about in the Python space, where Django was the prototypical mega framework and Flask the prototypical micro framework; FastAPI would be a micro framework. There were also mega frameworks assembled out of existing components, such as TurboGears.

                                      I wrote a little more about this distinction on my blog here.

                                      I was into Python web frameworks for many years. The original Zope, Zope 2, was a mega framework before the concept “web framework” even existed. Zope 3 moved into the “assembled out of components” mega framework direction. I helped create Grok, which put an easier face on top of Zope 3, more like a traditional mega framework. I also created a Python micro framework over 10 years ago called Morepath, which didn’t gain any traction but I still think has a lot of neat ideas.

                                      In particular, the idea that the objects represented in a web application (created by an ORM or at runtime) are known to the framework, and that the framework routes to these objects before it routes to views, is something I learned from Zope; outside of Zope and its descendants Pyramid and Morepath I haven’t seen it. Making this object known to the framework means you can use its properties for dispatch (automatic 404 not found), security checks (automatic 403 forbidden), linking to it, etc. I’m not sure what to make of this - perhaps the idea is less valuable than I think it is, or perhaps it’s a potentially useful conceptual step that’s too large to take to catch on, like category theory.

                                      1. 1

                                        Hello Martijn, nice to see you here. I used to read your writing on Grok back in the day.

                                        1. 2

                                          Thank you! I still have the blog I wrote those on, though I’ve moved on to other topics!

                                      2. 6

                                        This was a fascinating read. Proving that a particular cluster of misinformation originated from an LLM is impossible (and he acknowledges that!) but the circumstantial evidence is pointing that way. I’m really glad the author researched and wrote about this, because it’s a concrete example of “information pollution” in the wild.

                                        I’m reminded of Gell-Mann amnesia: a journalist writes about your area of expertise, and all their mistakes are glaringly obvious—but you turn the page and somehow forget what you just saw, because the remainder of the newspaper seems like a reliable source of truth. No malice required: just an intelligence (artificial or not) who’s trying to generate plausible text.

                                        1. 12

                                          One of my pet peeves that I think is related is how the term “API” is being (has been?) hijacked by web programmers.

                                          In the old days (before 2010?) “API” used to mostly mean “The set of functions this library provides, and what data goes in and out”.

                                          Now it seems to mostly mean “The set of endpoints in an HTTP server, and what kind of JSON it expects and responds with”.

                                          1. 10

                                            This seems like the same thing to me, just a different kind of linkage. In the same way a bicycle has a drive train that a car mechanic might not be familiar with - it has no drive shaft and the transmission is totally different - a web application program interfaces with its dependencies in a way many native developers aren’t familiar with. That doesn’t mean we need a different word/concept.

                                            1. 2

                                              The API description of a web server and APIs in general are clearly different concepts.

                                              simply demonstrated: all toads are frogs but not all frogs are toads

                                            2. 5

                                              And then some people think the go-to way to do modularity is to use web services…

                                            3. 4

                                              Performance aside, these articles remind me of something a professor told me years ago. “You’re being clever again, be clear, not clever.” A lot of times the attempts to cram everything into a single line of Functional Perfection™ feel like programmers trying to be clever.

                                              1. 3

                                                Yes, I’m trying to communicate heuristics; often the for loop combines readability with performance, in particular when you’re constructing collections of a different size out of other collections.

                                                I like declarative code just fine, and it doesn’t have to be that clever. But code that just loops and mutates has an important place.

                                              2. 3

                                                I’ve since posted an update to this article (at the bottom, you may need a reload), as a comment got me thinking. I had originally excluded the case where you do accumulator.extend(list), because in the JavaScript article I argued that this is harder to reason about and breaks strict functional rules.

                                              But that’s not true for Rust: fold passes the accumulator Vec in by move, so we know there are no other references to it and it’s entirely safe to modify and return. So that changes the trade-offs somewhat; while I still think the for loop is slightly simpler and easier to modify, there isn’t much to say against the fold solution with extend either.
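
                                              For concreteness, the two shapes being compared look roughly like this (a simplified sketch, not the article’s actual code):

                                              ```rust
                                              fn flatten_for(nested: Vec<Vec<u32>>) -> Vec<u32> {
                                                  let mut result = Vec::new();
                                                  for inner in nested {
                                                      result.extend(inner);
                                                  }
                                                  result
                                              }

                                              fn flatten_fold(nested: Vec<Vec<u32>>) -> Vec<u32> {
                                                  // the accumulator Vec is moved into and out of the closure on every step,
                                                  // so extending it in place has no other observers
                                                  nested.into_iter().fold(Vec::new(), |mut acc, inner| {
                                                      acc.extend(inner);
                                                      acc
                                                  })
                                              }

                                              fn main() {
                                                  let data = vec![vec![1, 2], vec![3], vec![4, 5]];
                                                  assert_eq!(flatten_for(data.clone()), vec![1, 2, 3, 4, 5]);
                                                  assert_eq!(flatten_fold(data), vec![1, 2, 3, 4, 5]);
                                              }
                                              ```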

                                                1. 12

                                                  On a loop of 10000 integers it’s 6 times faster on my machine!

                                                Rust has some specialized implementations for vec.into_iter().collect() that avoid reallocations, but in this case it could be a measurement error.

                                                10K integers doesn’t sound like much. In release mode it could run too quickly to be reliably measured. Test runs that take microseconds are too easily affected by timer resolution, the rare costs of memory allocator init/growth, and warmup of various CPU caches. Quick operations need to be measured with bencher or criterion.

                                                OTOH if you didn’t run it with --release, then the default debug mode would be 10x-100x slower, with wildly varying runtime speeds, depending on uninteresting, irrelevant details that are designed to be removed in --release mode.

                                                  1. 3

                                                    I used cargo bench with the divan benchmarking library to measure this. Do you think there could be a measurement error with this? It’s also possible I need to drop in black_box somewhere, though the difference in performance at least suggests different work is being done. I’ll play around with that a bit more.
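
                                                  Concretely, I think black_box would slot in something like this - a sketch, not my actual benchmark code, and treat the exact divan method names as my assumption rather than gospel:

                                                  ```rust
                                                  // benches/flatten.rs, with divan as a dev-dependency and harness = false
                                                  use std::hint::black_box;

                                                  fn flatten(nested: Vec<Vec<u64>>) -> Vec<u64> {
                                                      nested.into_iter().flatten().collect()
                                                  }

                                                  #[divan::bench]
                                                  fn flatten_bench(bencher: divan::Bencher) {
                                                      bencher
                                                          // build the input outside the timed section, fresh for each iteration
                                                          .with_inputs(|| (0..10_000u64).map(|i| vec![i, i + 1]).collect::<Vec<Vec<u64>>>())
                                                          // black_box the result so the compiler can't optimize the work away
                                                          .bench_values(|nested| black_box(flatten(nested)));
                                                  }

                                                  fn main() {
                                                      divan::main();
                                                  }
                                                  ```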

                                                    1. 2

                                                      I don’t know too much about Rust, but generally, instead of doing X once and trying to measure the time it takes with great precision, you can do X many times and measure how long that takes.

                                                      (You might be doing this already, not sure.)

                                                      1. 2

                                                        Yes, that’s what Divan is doing, so that should be all right.

                                                    2. 2

                                                      I’ve now published the benchmark code itself:

                                                      https://github.com/faassen/flatten-rust

                                                    3. 5

                                                      Since my previous post on the Humble For loop in JavaScript got some enjoyable discussion here I figured I’d share this one too. I hope people enjoy it. I hope I didn’t get too much wrong; I present this to Rust wizards with some trepidation.