1. 0

    I don’t really understand this. Sure, it’s cool to optimize something so well, but I don’t see the point of going to so much effort to reduce memory allocations. The time taken to run this, what it seems like you would actually care about, is all over the place and doesn’t get reduced that much. Why do we care about the number of allocations and GC cycles? If you care that much about not “stressing the GC”, whatever that means, then better to switch to a non-GC language than jump through hoops to get a GC language to not do its thing.

    1. 10

      On the contrary, I found this article a refreshing change from the usual Medium fare. Specifically, this article is actually technical, has few (any?) memes, and shows each step of optimization alongside data. More content like this, please!

      More to your point, I imagine there was some sort of constraint necessitating it. The fact that the allocation size dropped so drastically fell out of using a pooled allocator.

      1. 4

        Right at the beginning of the article, it says:

        This data is then used to power our real-time calculations. Currently this import process has to take place outside of business hours because of the impact it has on memory usage.

        So: They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it (“outside of business hours”). Using 7.5GB may be fine for processing a single input batch on their server, but it’s likely they want to process several data sets in parallel, or do other work.

        Sure, they could blast the data through a DFA in C and probably do it with no runtime allocation at all (their final code is already approaching a hand-written lexer), but completely changing languages/platforms over issues like this has a lot of other implications. It’s worth knowing if it’s manageable on their current platform.

        1. 3

          They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it

          That’s what they claim, but it sounds really weird to me. I’ve worked with plenty of large data imports in GCed languages, and have never had to worry about overhead, allocation, GC details, etc. I’m not saying they don’t have these problems, but it would be even more interesting to hear why these things are a problem for them.

          Also of note - their program never actually used 7.5GB of memory. That’s the total allocations over the course of the program, virtually all of which was surely GC’ed almost immediately. Check out the table at the end of the article - peak working set, the highest amount of memory actually used, never budged from 16kb until the last iteration, where it dropped to 12kb. Extra allocations and GC collections are what dropped. Going by the execution time listing, the volume of allocations and collections doesn’t seem to have much noticeable effect on anything. I’d very much like to know exactly what business goals they accomplished by all of that effort to reduce allocations and collections.

          1. 1

            You’re right – it’s total allocations along the way rather than the allocation high water mark. It seems unlikely they’d go out of their way to do processing in off hours without running into some sort of problem first (so I’m inclined to take that assertion at face value), though I’m not seeing a clear reason in the post.

            Still, I’ve seen several cases where bulk data processing like this has become vastly more efficient (from hours to minutes) by using a trie and interning common repeated substrings, re-using the same stack/statically allocated buffers, or otherwise eliminating a ton of redundant work. If anything, their timings seem suspicious to me (I’d expect the cumulative time to drop significantly), but I’m not familiar enough with the C# ecosystem to try to reproduce their results.
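
To illustrate the interning idea, here’s a toy sketch in C – a linear scan stands in for the trie or hash table you’d want at scale, and all the names here are made up:

#include <stdlib.h>
#include <string.h>

#define MAX_INTERNED 4096
static char *interned[MAX_INTERNED];
static size_t interned_count;

/* Return a canonical copy of s: repeats share one allocation, so a
 * column with a few hundred distinct values costs a few hundred
 * allocations, not millions. (strdup is POSIX / C23.) */
const char *intern(const char *s) {
    for (size_t i = 0; i < interned_count; i++) {
        if (0 == strcmp(interned[i], s)) { return interned[i]; }
    }
    if (interned_count == MAX_INTERNED) { abort(); }   /* toy-sized table */
    return interned[interned_count++] = strdup(s);
}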

          2.  

From what I understood, the 7.5GB of memory is total allocations, not the amount of memory held resident, which was around 15 megs. I’m not sure why the memory usage requires running outside business hours.

            EDIT: Whoops, I see you responded to a similar comment that showed up below when I was reading this.

          3. 2

The article doesn’t explain why they care, but many garbage collectors make it hard to hit a latency target consistently (i.e. while the GC is running its longest critical section). Also, garbage collection is somewhat expensive (usually better optimized for short-lived allocations than malloc, but still), and re-using memory makes caches happier.

            Of course, there’s a limit to how much optimization one needs for a CSV-like file in the hundreds of MBs…

            1. 1

              Maybe their machines don’t have 8gb of free memory lying around.

              1. 2

As shown in the table, they don’t use anywhere close to 8gb of memory at a time. This seems like a case that .NET is already very good at, even at a baseline level.

            1. 2

              I often do, but don’t try to force it. Sometimes I just don’t have spare mental energy for it, especially when I’m doing deep work at my day job. Hobbies that get me away from tech help ground me and keep me from burning out (I particularly like cooking and bicycling). I’m usually not interested in programming for its own sake, so much as for particular projects I want to work on.

              Also, remember that coding means different things to different people. You might find another branch of programming fits you better. I don’t find front-end web development enjoyable at all (I’m not actively involved in that space, and the tooling changes quickly, so it feels like starting from scratch every single time), but find playing with microcontrollers / Arduinos and making generative art bots to be a lot of fun. I’m especially interested in some particular problem domains, like parsing, networking, and testing, which have had more or less overlap with my jobs over the years.

              1. 3

                This is consistent with my experience on my last couple projects – using contracts along with property-based testing and/or fuzzing seems to have an especially high payoff (bugs found, with input that reproduces them) for the effort involved.

                The property tests were conceptually all stateful tests – the test library (theft) generated and executed a sequence of API calls, updating a straightforward in-memory model of the thing under test along the way, and checked that the result from each API call made sense, based on the model so far. If the code and in-memory model ever diverged, or if any contracts* failed, then it found something worth further investigation. It also used shrinking to remove most of the irrelevant details from the input.

This approach is incremental, and combines well with other approaches – there’s no need to thoroughly cover everything in contracts before they start catching things. Properties can check the overall expectations of the code (“no series of valid operations leads to data loss”; “no input leads to pathologically bad performance”), but the contracts will also help catch tangential issues, like implicit assumptions about initialization. To some degree, contracts sidestep having to think about the overall properties of the code, because they will catch any inconsistencies uncovered by combining operations at random.

* Well, asserts, because this was in C – but I was using asserts for internal consistency contracts, not just p != NULL and whatnot.
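
For a concrete flavor, here’s a sketch of what I mean – a made-up ring buffer type with a consistency check, not code from an actual project:

#include <assert.h>
#include <stddef.h>

struct ring { size_t head, tail, size, cap; };

/* Called at entry/exit of every ring operation under test: checks that
 * the fields still agree with each other, not just that pointers are
 * non-NULL. (head = write index, tail = read index.) */
static void ring_check(const struct ring *r) {
    assert(r != NULL);
    assert(r->cap > 0);
    assert(r->head < r->cap && r->tail < r->cap);     /* indices in range */
    assert(r->size <= r->cap);                        /* can't overfill */
    assert((r->tail + r->size) % r->cap == r->head);  /* fields agree */
}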

                1. 8

While the syntax is quite different, I think people really underappreciate how simple Erlang’s syntax is. It takes no more than a few hours to learn everything about it. There is so little of it that even if parts are somewhat odd, it’s still so much smaller than almost any other language I’ve used. I think people tend to get way too caught up on its oddness rather than this strength of simplicity.

                  1. 1

I actually agree with you about stuff like that. Most programmers don’t, though. Being C-like helped C++ and Java win converts more easily. As in, you always get more adoption by giving a crowd what they’re used to starting out.

                    1. 1

                      Also, the syntax is really unsurprising if you’ve toyed with Prolog in the past.

                      1. 1

The author explicitly mentions that as a negative, given it’s a weird language hardly any mainstream coders would’ve messed with. It instantly creates an obstacle to adoption for the masses.

                      2. 1

                        Yes, I played a bit with Erlang some time ago, and I was really surprised by how beautiful and simple it is.

                      1. 10

This one’s mine: source (hint). That was a LOT of fun to write :D

                        1. 2

                          I was going to take a quick glance at it to see if anything jumped out at me. Yeah, to heck with that lol. Great art, though! Love it!

                          1. 2

                            If you’re trying to pick apart an IOCCC entry, generally the first thing to do is pass it through the C preprocessor, and then through a code auto-formatter like astyle.

                            With my entry, that should bring several other layers of obfuscations into focus. ;)

                        1. 13

                          The author has a pretty explicit bias toward Earley parsing, and using LL(k) over LR(k) methods (which I mostly share). There’s a wealth of information here, but he’s missing a couple other significant strands of generalized parsing research outside of Marpa’s particular Earley variant (perhaps because they don’t fit within his narrative).

                          GLR (Generalized LR):

                          • Tomita, LR parsers for natural languages, 1984.
                          • Tomita, Efficient parsing for natural language, 1986.
                          • Scott and Johnstone, Generalised bottom up parsers with reduced stack activity, 2005. This segues into their later work with Earley and GLL.

                          GLL (Generalized LL):

                          • Scott and Johnstone, GLL Parsing, 2010.
                          • Afroozeh and Izmaylova, Faster, Practical GLL Parsing, 2015.

                          Outside of Earley, GLL seems like a very practical generalized parsing approach. Instaparse is one implementation.

                          Earley / Shared Packed Parse Forests:

                          • Scott, SPPF-style parsing from Earley recognisers, 2008 (and several related papers by Scott and Johnstone). Note: I’ve never been able to get the approach described in the paper implemented correctly, and not for lack of trying.

                          Earley Intersection Parsing (not sure if there’s a canonical name for this):

                          • Bar-Hillel, Perles, and Shamir, On formal properties of simple phrase structure grammars in Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 1961. Proves some important results about intersecting automata and context free grammars.
• Chapter 13 (“Parsing as Intersection”) of Grune and Jacobs (also cited elsewhere in his timeline), particularly 13.4 (pgs. 437-439): This describes Bar-Hillel’s Intersection Parsing approach using contemporary CS & parsing terminology, and then suggests combining it with Earley parsing. While the basic intersection parsing method produces an astonishing number of dead-end nodes to garbage-collect later, Earley parsers limit searching to productions reachable from the start symbol. If the completer is modified to produce non-terminals in the intersection parsing format (which is easy), intersection parsing nicely handles producing the parse forest (with structural sharing, when ambiguity produces multiple derivations).

                          I’ve been working on a C library for Earley Intersection parsing, and an awk-like, pattern-match-directed language based on it, but working on testing them thoroughly tends to lead me to working on theft instead.

                          1. 3

Thanks for the details about alternatives.

                            I have played around with Bison’s GLR option and have almost managed to create a C parser that does not require special handling of typedefs in the lexer (ambiguities still appearing at runtime for a few constructs).

Grune and Jacobs’ Parsing Techniques – A Practical Guide is sitting in a pile waiting to be read (the first edition was manageable, the second looks daunting).

                            1. 1

I have a print copy of the second, and it’s never struck me as daunting – reading it cover to cover, perhaps, but it’s well suited to dipping into for specific techniques, with a well-curated bibliography for further details.

                            2. 2

                              packrattle is another example of a GLL parser.

                              The times I’ve read about Earley parsers, they seem like the same solution as GLL, but attacking the problem from a different angle. Hopefully someone in academia will eventually prove that.

                              1. 2

                                I’m particularly interested in Earley parsers because some use cases of mine assume ambiguity (reverse-engineering binaries, scraping damaged data), and the chart that Earley parsers build up supports several operations beyond just recognizing & building parse trees:

• what terminals/nonterminals would advance the current parse(s) in progress? (i.e., autocomplete – see the sketch after this list)
                                • if we don’t assume where the starting position is, what overlapping instruction encodings are there?
                                • if we inserted a fake nonterminal here, would enough of the parse have completed to match any instances of (some structure) between here and there?
                                • incremental re-parsing of changing input, since earlier columns in the chart are constant once the parser has moved on.
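
To sketch why queries like autocomplete are cheap against the chart (illustrative C only, not any real parser’s API):

#include <stddef.h>

struct item { int rule; int dot; int origin; };        /* one Earley item */
struct column { struct item *items; size_t count; };   /* one per input position */

/* Hypothetical helper: the symbol right after the dot in a rule,
 * or -1 if the item is complete. */
int symbol_after_dot(int rule, int dot);

/* Autocomplete: anything that could advance a parse is just whatever
 * appears after a dot somewhere in the newest column. */
void expected_symbols(const struct column *col, void (*emit)(int sym)) {
    for (size_t i = 0; i < col->count; i++) {
        int sym = symbol_after_dot(col->items[i].rule, col->items[i].dot);
        if (sym >= 0) { emit(sym); }
    }
}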
                              2. 2

                                Thank you for adding this. I understand why everybody is excited about Earley parsing, though I personally don’t feel sufficiently grounded in it to share that excitement, but at the very least other lines of research deserve to be mentioned.

                              1. -3

                                The authors obviously don’t know anything about R.

                                1. 3
                                  1. 0

                                    What is the difference between array and vector languages? Vector languages operate on slices of arrays (which might be called array languages).

                                    1. 1

                                      I’ve always heard the terms used interchangeably.

                                1. 4

                                  I very frequently do the sort of testing described here as “FSM validation” in C (using theft). Even if I didn’t use property-based testing for anything else, the results I’ve gotten from that approach alone would more than justify the time I’ve spent learning property-based testing tools.

                                  1. 3

Appreciate the feedback, since I want more corroboration about that. I know it worked well with formal specs, but people will want to see it work on code alone, too.

@ferd seemed to think it was an Erlang-only technique. These things used to be called “specification-based” or “model-based” testing rather than property-based. The use of state-based modeling with pre- or post-conditions in formal specifications was also pretty much the default, due to FSMs and Dijkstra’s methods being popular. People had done randomized testing of them many times, but most research tried to avoid it, either to reduce execution time (hardware was costly) or because they thought they could outdo random. Here’s an example from 1997 with FSM modeling plus heuristic and random testing. One with Statecharts. Another one from 2009 with UML. Recent results with “property”-based testing created a bandwagon effect where lots of these tools are being built, including for old languages, with the QuickCheck style being most common.

The tooling never went mainstream, though. That’s likely because it usually came with formal specs that were hard for people to learn. There were good reasons for that, for sure. However, the newer tools just provide a purpose-built, lightweight spec that only generates tests. That simplicity, and working within the language itself, is probably driving a lot of adoption. The methods are old, though. One more benefit, while we’re at it, is that model-checkers and provers can work with FSM models better than most other representations. So, one can get property-based testing now, plus some stronger stuff later if they or a contributor has the resources.

                                    1. 2

                                      It doesn’t feel like a fundamentally new technique, but most of the writing I’ve seen that explicitly describes it building on property-based testing tooling has been Erlang-flavored. The commercial Erlang QuickCheck’s documentation suggests testing stateful systems that way, and the library has some built-in support, so it’s probably more familiar within the Erlang world. (IIRC, they call it “statem” testing, for “state machine”.)

                                      Hypothesis also has API support for stateful testing now, and its docs refer to similar support in ScalaCheck.

                                      theft doesn’t have any explicit API hooks for this use case yet, mostly because I haven’t figured out the right abstractions for a C API, but it definitely supports it. Typically, I have theft generate an array of structs, each of which describes an operation in the API I’m testing – if I were testing a hash table, it would generate things like:

                                      { .type = OP_SET, .key = "key001", .value = 23 },
                                      { .type = OP_GET, .key = "key005", },
                                      { .type = OP_DELETE, .key = "key001", },
                                      

                                      and then a simple switch-case loop calls the code under test with each operation, checks that the result makes sense according to a simple in-memory model (typically just an array, with dead-simple linear search), and then updates the model with new bindings. After the whole series of operations, it would compare the end state of the model to the hash table. If deleting a binding has caused other earlier, colliding bindings to get lost, or growing the table has corrupted data, it would fail the test, shrink the steps, and present a small series of steps to reproduce the bug.
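
The driver loop itself is short. A sketch (not theft’s actual API – the ht_* functions are hypothetical stand-ins for the table under test):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

enum op_type { OP_SET, OP_GET, OP_DELETE };
struct op { enum op_type type; const char *key; int value; };

typedef struct hash_table hash_table;            /* implementation under test */
bool ht_set(hash_table *t, const char *key, int value);
bool ht_get(hash_table *t, const char *key, int *value);
void ht_delete(hash_table *t, const char *key);

struct binding { const char *key; int value; bool live; };

static struct binding *find(struct binding *m, size_t n, const char *key) {
    for (size_t i = 0; i < n; i++) {
        if (m[i].live && 0 == strcmp(m[i].key, key)) { return &m[i]; }
    }
    return NULL;
}

/* Returns true if the table and the model agreed at every step. */
static bool run_ops(hash_table *t, const struct op *ops, size_t op_count) {
    struct binding model[1024] = {0};            /* dead-simple model */
    size_t used = 0;

    for (size_t i = 0; i < op_count; i++) {
        const struct op *op = &ops[i];
        struct binding *b = find(model, used, op->key);
        switch (op->type) {
        case OP_SET:
            if (!ht_set(t, op->key, op->value)) { return false; }
            if (b) { b->value = op->value; }
            else if (used < 1024) {
                model[used++] = (struct binding){ op->key, op->value, true };
            } else { return false; }             /* toy-sized model */
            break;
        case OP_GET: {
            int got = 0;
            bool found = ht_get(t, op->key, &got);
            if (found != (b != NULL)) { return false; }  /* presence mismatch */
            if (b && got != b->value) { return false; }  /* value mismatch */
            break;
        }
        case OP_DELETE:
            ht_delete(t, op->key);
            if (b) { b->live = false; }
            break;
        }
    }
    return true;  /* the test then compares the model's end state to the table */
}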

I’d already been doing that before I encountered Erlang QuickCheck – I was trying to use PBT like a lightweight model checker, but one that would catch implementation bugs (because, hey, I mostly work in C). I learned a bit about model checkers from dabbling in Alloy after Guillaume Marceau’s presentation at !!Con 2014. Describing the approach as a lightweight, probabilistic model checker still makes more sense to me than “state machine testing”, because I might not be testing a state machine at all – I’ve used it for cache invalidation logic, filesystem code, stress-testing a garbage collector, fuzzing a compiler, and several other things.

                                      I suspect this technique is so effective because there are lots of codebases where individual pieces have been pretty well tested, but combining them in unusual ways uncovers slight misalignments, and randomly chaining these pieces finds combinations that compound the misalignments and trigger surprising failures.

                                  1. 6

I like how concise and unopinionated this article is (or rather, that the opinions are clearly marked and a good attempt is made to separate the information from them).

                                    I’m pretty bad with tests, essentially doing only acceptance, diff and a lot of manual tests (for what could be automanual). I have yet to find tests that aren’t end-to-end and can survive major refactors (that are complete rewrites or close to it) or even language changes. I’d also like them to not discourage me from changing a bad API (that’s likely to invalidate a lot of unit tests).

                                    And while I’m wishing, something that can go from manual (done from a REPL or debugger) to automanual more naturally would be nice too. Even creating a second file in a project meant to be small feels cumbersome.

                                    Before reading this, I used to think property tests meant fuzzing and now learned that fuzz tests don’t actually check the output!

                                    1. 6

                                      And while I’m wishing, something that can go from manual (done from a REPL or debugger) to automanual more naturally would be nice too. Even creating a second file in a project meant to be small feels cumbersome.

Ooh, this is totally a thing – obscure, but a thing. I’ll find some references and add a section on it when I get a chance.

                                      1. 5

                                        “and now learned that fuzz tests don’t actually check the output!”

It’s one of those terms that don’t have a precise definition. It depends on which researchers or tool developers you’re talking to. What the usages have in common is throwing random data at something and watching for an effect. That used to not involve a check, because the problems they looked for often caused crashes. If the problems don’t, you might need checks to spot them. So, fuzzing != no checks, but it’s common not to have them if the target is C or something similar.

                                        Lots of random input is the defining trait.

                                        1. 6

                                          There’s definitely a point where if you’re using a fuzzer, but adding lots of assertions about the output being reasonable, it starts to feel more like property-based testing.

I did an internal tech talk about property-based testing at work a couple months ago, and made the case that PBT tooling belongs in one’s toolbox because it can be applied to a range of test styles: along with typical PBT “tactics” (to use hwayne’s term), the test library can also run properties that are closer to fuzzing or model-checking. New tactics and a variety of approaches can plug into existing PBT tooling.

                                          The core interface* is arguably specifying how to generate input for a property (reproducibly), and how to classify that input as interesting or not (by running some test code and checking for failures/errors). This framing is pretty general, and mainly ensures the PBT library has the info it needs to shrink interesting input. Plugging different testing tactics can lead to tests with very different trade-offs:

                                          • “random input” + “it doesn’t crash”: Classic fuzzing.
                                          • “structured input” + “these two implementations agree”: Comparing the code under test against a reference implementation, a naive/inefficient version, or the same codebase without an experimental optimization. Taking something easier to verify, and using it as a foothold to check something more complex.
                                          • “structured input” + “resource limits are sufficient”: Searching for input that uses disproportionately large amounts of memory, CPU time, or whatever.
• “structured input” + “roundtrip a conversion”: encode some data, decode it (say, pack and unpack data for serialization to disk), and check if anything got lost in translation. A classic tactic (see the sketch after this list).
                                          • “a sequence of operations against an API” + “call API and check results, update in-memory model”: This works more like a model checker, and an in-memory dictionary can be an easy stand-in for a database, filesystem, or other model of the state of the outside world when testing complex logic in a vacuum. While it isn’t exhaustive, the way (say) TLA+ can be, and it’s difficult to apply to problems that are inherently nondeterministic due to concurrency, it’s a great way to cheaply stress-test logic and discover surprising interactions. There are lots of APIs where the individual operations make sense, but have subtle misalignments between them that can compound and spiral out of control when combined in certain ways. As a bonus, shrunken failing input for these properties tends to have a narrative: “when you do this, then this, then this, then this, it fails in this way”.
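
Here’s the roundtrip sketch mentioned above – encode()/decode() are hypothetical stand-ins for the code under test, and the PBT library would generate the input struct and shrink any failures:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct point { int32_t x, y; };

size_t encode(const struct point *p, uint8_t *buf);             /* under test */
bool decode(const uint8_t *buf, size_t len, struct point *out); /* under test */

static bool prop_roundtrip(const struct point *in) {
    uint8_t buf[32];                    /* assumed large enough here */
    size_t len = encode(in, buf);
    struct point out;
    if (!decode(buf, len, &out)) { return false; }  /* must parse back */
    return in->x == out.x && in->y == out.y;        /* nothing lost */
}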

                                          Controlling the input generation also means that it can be steered towards particular areas of the state space, in a way that feels more direct than (say) temporarily using asserts to convince afl-fuzz that those branches are boring.

                                          * I have the most experience using theft on C (I’m the author), but it seems like this applies to most other PBT libraries.

                                          1. 1

                                            I see. Thanks for the clarification!

                                          2. 4

                                            And while I’m wishing, something that can go from manual (done from a REPL or debugger) to automanual more naturally would be nice too.

                                            I might have misunderstood you, but something like python’s doctests could be what you’re looking for. You can basically use the REPL to manually test your code and how to use it, you paste that in the docstrings, and doctests runs it again and makes sure it behaves like it should.

                                            1. 1

                                              something like python’s doctests could be what you’re looking for

Good suggestions. I quite like the idea of doctests, but here are some things that dissuaded me from adopting them, at least right away.

                                              • I usually don’t want to see my tests when editing code. I guess I want tests either in a separate file (which also goes against my wish to have fewer files) or maybe grouped together at the end (or beginning) of the file.
                                              • Only the very first few (manual) tests tend to focus on a single function.
• In a REPL or debugger, I usually have a lot of state set up. In the debugger (say, when post-mortem debugging), I don’t have the steps to recreate this state! Of course, I could spend time cooking up an example, but that takes more thinking than just a copy-paste (and I’m trying to find something that lets me spend progressively more effort for more certainty).

                                              Right now, I know that a lot of my REPL commands are just lost to the history file and could probably be made automatic instead.

                                          1. 12

                                            This post was really eye-opening. I was expecting another “it leads to coupling n stuff” but this gave me a way to think about what’s been bothering me. Thanks for that.

Historically, I don’t think inheritance was “meant” to be that way. If you look at the method wars period (late 80s, early 90s), people were drawing all sorts of relations between classes: refines, implements, abstracts, generalizes, serves, etc. Inheritance was the only one that became a language construct, at least for a long time. I wonder if that was a major mistake, since it restricted “powerful” relationships to whatever the language designers knew about.

                                            If that’s the case, I’d want to explore OOP more in dynlangs.

                                            1. 2

                                              If that’s the case, I’d want to explore OOP more in dynlangs.

                                              I’ve found Lua handy for exploring object systems – instead of having one built in, it presents a carefully chosen set of metatable hooks (including some similar to __getitem__ in python and method_missing in ruby). These can be used to construct several kinds of object systems.

                                            1. 8

                                              Preparing release 1.4.0 of greatest: (Edit: it’s posted.)

                                              • Add a flag to call abort() on test failure, so if tests are being run inside gdb, it will break on the failure with the call stack and other runtime info.

                                              • Add a function to set optional test name suffixes, so if a test is being run with parameter(s), it can print test results as the_test_ABC, the_test_DEF, etc. This is particularly useful when tests are being run in shuffled order. The test-name-based filtering includes this optional suffix in its matching.

                                              • Several other misc. improvements, especially work on the documentation.

                                              After that’s out, I’m going to work on a new release for heatshrink. It’s essentially done, and has been stable for years, but it wouldn’t hurt to flesh out the documentation more. Also, it’d give the project some recent activity – people seem to consider projects that haven’t had changes in 3 years stale, even when they’re really tightly scoped and feature complete.

Strangest request: someone connected with me on LinkedIn specifically to ask me to write a C# wrapper… Unlikely, though I will probably add blocking, bitstream-compatible implementations in C and Python to make porting easier. The majority of heatshrink’s complexity comes from being able to suspend and resume on IO, between any bytes.

                                              I will probably be back to working on theft again in a week or two – I have a massive restructuring underway to add multicore support. Since it interacts with nearly everything, there has been a ton of integration work, but it’s starting to come together.

                                              Work: Wrapping up loose ends so that we can start something really exciting soon. Also, working on a couple internal presentations about recent successes.

                                              1. 2

                                                The majority of heatshrink’s complexity comes from being able to suspend and resume on IO, between any bytes.

Been a while since I heard someone mention that feature. Might be worth a write-up or detailed comment on what makes it complex or how it’s done.

                                                1. 9

                                                  It implements LZSS, one of the LZ77 family of data compression algorithms. Everything is based on a state machine, with an internal buffer for the data being processed. I chose LZSS because it does all of its processing using that buffer (the “sliding window”), so its CPU time and memory usage is very predictable.

As a friend described the API: “you put in data, then turn the crank until nothing more comes out, repeat.” When you’re done, you call a function to notify it that the end of the stream has been reached, and keep turning the crank until processing is finished. It isn’t that surprising an interface, since the whole point of compressing / decompressing is to change the size of the data, but the assumption that the caller will keep calling to sink more data from a stream means that it has to track its current state / where it left off, whether there’s enough data buffered to move on to the next step, and how much room is left in the output buffer.

                                                  I’m not saying it’s incredibly complicated, but a blocking, single-pass LZSS implementation is like a page of code; the suspend/resume logic is well over half of heatshrink, line-wise. There are also optimizations for doing decompression or compression on memory-constrained embedded systems – for example, the indexing in the encoder is simple, but either significantly faster or uses an order of magnitude less memory than other approaches I’ve seen. If you’re unpacking data out of ROM on a system with only a few hundred bytes of RAM to spare, approaches like gzip’s big in-memory hash table are a non-starter.
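
For the curious, the calling side looks roughly like this – written from memory of heatshrink’s README, so double-check the names against the real header before borrowing it:

#include <stdint.h>
#include <stdio.h>

#include "heatshrink_encoder.h"

static void compress_to_file(uint8_t *in, size_t in_len, FILE *out) {
    /* 2^8-byte window, 2^4-byte lookahead -- tune for the data / RAM budget */
    heatshrink_encoder *hse = heatshrink_encoder_alloc(8, 4);
    uint8_t out_buf[64];
    size_t sunk = 0;

    while (sunk < in_len) {
        size_t n = 0;
        heatshrink_encoder_sink(hse, &in[sunk], in_len - sunk, &n);
        sunk += n;                       /* it may not accept everything */
        HSE_poll_res res;
        do {                             /* turn the crank */
            size_t polled = 0;
            res = heatshrink_encoder_poll(hse, out_buf, sizeof(out_buf), &polled);
            fwrite(out_buf, 1, polled, out);
        } while (res == HSER_POLL_MORE);
    }
    /* end of stream: finish, and keep cranking until it reports done */
    while (heatshrink_encoder_finish(hse) == HSER_FINISH_MORE) {
        size_t polled = 0;
        heatshrink_encoder_poll(hse, out_buf, sizeof(out_buf), &polled);
        fwrite(out_buf, 1, polled, out);
    }
    heatshrink_encoder_free(hse);
}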

                                                  1. 1

                                                    Appreciate the detailed description. That’s very interesting. :)

                                              1. 3

                                                The damn cookie notice spam won’t go away, and it’s over the contents.

                                                1. 1

                                                  Haven’t tried this one, but I usually delete the Instagram etc. overlays in developer mode.

                                                  1. 1

                                                    Try clicking then dismissing the settings thing (cog icon), I had the same issue.

                                                    1. 1

                                                      I found it helpful to open it in Incognito, then export it as a PDF.

                                                    1. 14

                                                      Suggestions that aren’t already in other comments:

                                                      • Have check-in / check-out habits, so you know by habit when you’ve started and are done for the day.
                                                      • While I use the same desk and one of my external monitors for my desktop and docked work laptop, I switch keyboards when I’m working. It’s a very continuous, tactile reminder, and unplugging and switching it is part of my check in / check out habits. (I’ve never seen anyone else suggest this.)
                                                      • Learn the preferred communication styles of your coworkers and try to meet them halfway – Some people are much easier to work with once you know whether they prefer a quick voice call to hash things out, via text chat and issues/PRs, etc.

                                                      In general, I’ve found setting boundaries, having a regular routine, and over-communicating helpful, but the specifics will vary between people, and between teams. It makes a big difference whether a team / company has a wide time-zone spread, for example.

                                                      1. 4

                                                        That keyboard suggestion is really interesting. Makes me think about what other good priming strategies there are…

                                                      1. 6

                                                        k/q don’t have (f g h) and have fewer “words” so it’s easy to take a look and see if it’s more discoverable.

                                                        k)mode:*>#:'=:
                                                        q)mode:{first idesc count each group x}
                                                        

                                                        but having 200 primitives shouldn’t be a problem, it’s just a lot to (re) learn. 眾數!

                                                        Anyway.

If I need the mode, why wouldn’t I just write:

                                                        y{~{.\:+/y=/y
                                                        

                                                        That seems perfectly clear to me, and doesn’t require fiddling with “expert level” at all.

                                                        I think stuff like this:

                                                        (i. >./) & (#/.~) { ~.
                                                        

is one of those things where, if you needed to compute a trillion modes a day, you might think about how to reduce the time and space of the operation, and so it’s worth spending forty minutes on it.

                                                        btw, this is faster: ~. {~ [: (i. >./) #/.~

                                                        1. 6

                                                          As a data point: While reading the blog post, I tried writing it in k, and came up with exactly the same answer you did (albeit with slight syntactic changes for kona). It only took a few minutes.

                                                          1. 2

                                                            It doesn’t really make sense to compare APLs to logographic languages. Natural languages are optimized for single-pass parsing.

                                                            A more apt comparison (and the most popular one, I believe) might be between the notation of APL and mathematical notation. But even then most mathematics can be read without the insane backtracking and potential for error that APL presents.

                                                            1. 2

                                                              I don’t agree that “natural” languages are optimised for anything: “L’invité qu’il a dit des folies”, or “The horse raced past the barn fell“ or even “为了环保运动必须深化” all demonstrate how sometimes a lack of clarity early in the sentence, accidentally or not, can force the reader to backtrack in a “natural” language.

                                                              I understand APL this way: from left to right, top to bottom. If you understand it a different way it may be helpful to explain to people how you are (or came to be) successful in APL instead of trying to mix up my metaphors.

                                                              1. 2

Certainly you can construct natural language sentences that require backtracking. But my statements about the features of natural languages are uncontroversial (ask any linguist) and backed up by significant evidence. In fact, you can easily observe this yourself through the following experiment: take a program in an APL and a sentence in Chinese or even English, and see how much someone can understand when you black out a few characters/words.

                                                                So I’m a bit surprised as to why you seem to be arguing whether natural languages are like that instead of whether APL is like that.

                                                                Comparing APLs to Chinese or other logographic languages is disingenuous because it suggests that Chinese, a language used by billions for thousands of years, shares the flaws of the APLs, a niche language used by at most hundreds for a few decades; it attempts to link a dislike of the design of the APLs with a dislike of logographic languages in general. As I’ve said, this comparison is superficial, and it is much easier to understand the flaws and benefits of APLs through a less synthetic comparison, e.g. with mathematical notation or other programming languages. It’s folly to “mix up the metaphors” of natural language and computer language.

                                                              2. 1

                                                                based off of this, would it make sense to make an APL variant where you still type the syntax like in ASCII variants, but you apply a rich formatting environment to it? Do you think there would be possibly better layout techniques for reading the code?

                                                            1. 10

I’ve been building a C testing framework for work and heard about Snow on Lobsters, so I’m planning to peruse its features for inspiration. The one I’m building isn’t as macro-heavy/macro-driven, but I think there are a number of advantages to leveraging macros, so I want to see what I can add.

                                                              1. 5

                                                                You should have a look at greatest, which has worked out great for me in the past. I don’t do a lot of C, but dropped my own little testing setup for greatest, and haven’t looked back.

                                                                1. 2

                                                                  I’ll check it out, thanks for the link. At a glance, my framework does look similar.

                                                                  Probably worth mentioning, I am sort of targeting this at folks that develop software using the traditional full cycle SDLC and have to live through that cycle many many times. As a result, I also have a goal to formally support requirements engineering. Basically what that means is that as a test engineer writes a test for a developer to implement against, they can optionally map it to either a requirement (by ID), a type of requirement (functional, non-functional, performance, etc), or a set of requirements (multiple IDs). On a very large project with many moving parts, support for this in a test tool can be invaluable.

                                                                  The nice side benefit of this is that if you’re using a tool like Git, you can scroll through major points in the Git history and clearly see the progression of development not just by what tests are passing, but also by how many of the requirements solicited from the customer/stakeholder are satisfied. Eventually, I’ll support generating metrics from the tests in a common business/professional format (such as an Excel spreadsheet, so managers can create visualizations and whatnot).

                                                                  I think it’ll be useful for developers because they don’t just have to point at a results table and say “I’m passing all the tests”, they can point at a results table and say “I’m passing all the tests and here’s also proof that the tests passing fully cover all the initial requirements laid out, therefore the product you asked for is built” (and of course if they don’t like what they got, they can go talk to the requirements engineer :P )

                                                                  1. 6

                                                                    Hi, greatest author here. :)

                                                                    Something that might be useful in your situation: with greatest, if you have formal requirements IDs, you could use them as tags in the test function names, and then run tests based on those – you can use the -t command line switch in the default test runner to run tests based on test name / substring match. (Similarly, -x skips matching tests, so, for example, you could skip tests with __slow in the name.) If you name tests, say, TEST handle_EEPROM_calibration_read_failure__ENG_471(void) { /* ... */ }, then ./tests -t ENG_471 would run that. (Running tests by name also helps with quick feedback loops during development.)

                                                                    I did some automotive embedded work several years ago. We had a whole requirement traceability system that involved scraping requirement IDs out of comment headers for tests, which eventually fed into coverage reports.

                                                                    1. 1

                                                                      Oh wow, that’s pretty cool. That tagging system can certainly be useful for more than just the requirement IDs but ya, that would work. Being able to filter tests by the tags is also really neat and I hadn’t thought of that as a feature.

                                                                2. 1

                                                                  I’d be curious to see what someone could come up with if the test framework didn’t use the C preprocessor and used something else instead. Might be a fun exercise. But then again, maybe I’m just not liking the preprocessor lately.

                                                                  1. 1

                                                                    What would it look like to drive tests, for C programs, in say Lua? It seems like a wonderful idea, but I’m not sure if the boilerplate stuff can be automated in such a way to make it feasible…

                                                                    1. 1

                                                                      I’m not sure either, but it still might be an interesting exercise (or mini research project). Maybe I should be the one to look into it since I’m the one that spoke up. ;)

                                                                      1. 1

                                                                        Actually, this sounds like something @silentbicycle has probably already tried. Might be worth checking in with him first. :)

                                                                1. 6

                                                                  Working on implementing an Earley parser using this amazing tutorial. I’ve read that site about a dozen times, and last night it finally clicked. Money quote from the author:

                                                                  Most compiler texts are obsessed with slaying the compiler dragons of the 60s with 70s technology.

                                                                  If you have any sort of appreciation for algorithms or parsing, you should study it.

                                                                  1. 4

                                                                    The coverage in Dick Grune’s Parsing Techniques: A Practical Guide is great, and his page for the first edition includes PDFs. If you’re interested in parsers in general, it’s definitely worth picking up.

I’ve also been working on implementing Earley parsers, and the Earley/intersection parsing approach seems particularly compelling. It’s described in chapter 13 of the second edition, but doesn’t seem to appear in the first.

                                                                    1. 2

                                                                      So Earley will recognize all context-free grammars, including ambiguous ones? I have two problems with that:

                                                                      1. Doesn’t it have to return a “forest” of all possible derivations of ambiguous grammars? What do you do with that? That doesn’t seem to be solving the problem; it’s more like pushing it to a later stage.
                                                                      2. Context free grammars aren’t powerful enough to represent most languages we use (C, C++, shell, Ruby, Perl 5 and 6, etc.). So even if Earley magically solved problem #1, it still wouldn’t be an ideal algorithm.

I’ve written about this at length in various comments, but I think CFGs are the wrong abstraction for human programming languages. They’re simultaneously not powerful enough and too inefficient.

My argument is analogous to the one here regarding CFGs and network protocols. This is admittedly a fringe claim, whereas “CFGs should be used to parse programming languages” is the claim you will read in nearly every textbook:

                                                                      https://lobste.rs/s/uyjzjc/science_insecurity_meredith_l_patterson#c_pzjzxh

                                                                      One of the langsec people responded on HN: https://news.ycombinator.com/item?id=16126234

I read that paper about “calc-regular languages”, but it was immature – which the authors admitted – and it still can’t handle some common constructs.

                                                                      Anyway, my overall point is that we should use formalism, but formalism should match practice, not the other way around. Practice shouldn’t be put in the straitjacket of a theory (especially when that theory doesn’t provide you with good efficiency or correctness guarantees, as CFGs don’t).

(FWIW I have a very messy wiki page about the difficulties with parsing here: https://github.com/oilshell/oil/wiki/Parsing-is-Difficult)

                                                                      I don’t believe any of them are addressed by Earley parsing. The main bugaboo I have now is that every language has at least TWO parsers. One for the compiler/interpreter, and a totally separate one for linting/formatting. The former ignores whitespace; the latter must recognize whitespace. Even a brand new language like Julia has two parsers!!!

Any system based on CFGs doesn’t solve that problem. I do NOT want parse trees (which include the full derivation of the string from the grammar); I want ASTs or “lossless syntax trees”.

                                                                      1. 2

                                                                        Well, I’m not happy about how Earley detects ambiguity TBH. But I am looking for something that is simple enough to take the question off the table so I can continue moving forward for now.

                                                                        You ended up using Pratt parsing for Oil, correct? The choice of a parser for my project is still very much up in the air; the most important consideration is whether I can build the grammar immediately before parsing a module, which isn’t a great use case for a lot of existing tools. I don’t need the ability to mutate the parser while I’m parsing, as I don’t mind cheating by pre-processing to locate all the grammars in use.

                                                                        From my research, Pratt parsing looks really underutilized, but I wasn’t able to discern whether it was painful to build the rules programmatically after parsing patterns and such, and whether it’d be substantially different from PEGs. For example, if I’m building a for x in iterable statement, would matching the in keyword be painful, especially if the metagrammar specification was declarative in nature? I won’t be writing the led() and nud() methods by hand here, they’ll be generated from specifications provided by the user. Also, I was initially put off by all the precedences on everything, then later realized those are implicit in most grammar specifications anyway, so that is pretty unfounded.
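
For reference, here’s roughly the smallest Pratt core I can write in C – the names and token set are made up, and a generated parser would build the nud/led rules from the user’s specification instead of hard-coding them:

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

static char *src;   /* cursor into the input */

static int peek(void) { while (*src == ' ') { src++; } return *src; }

static int binding_power(int op) {   /* higher binds tighter */
    switch (op) {
    case '+': return 10;
    case '*': return 20;
    default:  return 0;              /* not an infix operator: stop */
    }
}

static long expr(int min_bp);

/* nud ("null denotation"): how a token begins an expression */
static long nud(void) {
    if (isdigit((unsigned char)peek())) { return strtol(src, &src, 10); }
    if (peek() == '(') {
        src++;                       /* consume '(' */
        long v = expr(0);
        peek(); src++;               /* skip spaces, consume ')' */
        return v;
    }
    fprintf(stderr, "unexpected '%c'\n", peek());
    exit(1);
}

/* led ("left denotation"): how an infix token extends a left operand */
static long led(int op, long left) {
    long right = expr(binding_power(op));
    return op == '+' ? left + right : left * right;
}

static long expr(int min_bp) {
    long left = nud();
    while (binding_power(peek()) > min_bp) {
        int op = *src++;             /* peek() already skipped spaces */
        left = led(op, left);
    }
    return left;
}

int main(void) {
    src = "2 + 3 * (4 + 1)";
    printf("%ld\n", expr(0));        /* prints 17 */
    return 0;
}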

                                                                        1. 2

Also remember you can always use more than one parser. sklogic used to point that out on HN when people were choosing between one that succeeds/fails fast and one that’s easy for error messages. Something like that.

His tool on the “combinatorylogic” GitHub account is a LISP that embeds a bunch of DSLs he mainly uses for program analysis. So, his double-parser strategy might be well-informed. ;)

                                                                          1. 1

                                                                            Oil is best described as 3 interleaved recursive descent parsers + one Pratt parser for expressions, with 15 or so lexer modes. [1]

                                                                            Pratt parsing is really best suited for expression languages, not a whole programming language, as Crockford does in his widely read article (which I reviewed here [2]).

                                                                            Although Pratt parsing isn’t a grammar-based approach, there is something called an “operator precedence grammar” which is a subset of CFGs, and what it handles roughly corresponds to that set of languages:

                                                                            https://en.wikipedia.org/wiki/Operator-precedence_grammar

                                                                            Bob Nystrom has a good way of putting it:

                                                                            http://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-expression-parsing-made-easy/

                                                                            If recursive descent is peanut butter, Pratt parsing is jelly. When you mix the two together, you get a simple, terse, readable parser that can handle any grammar you throw at it.

                                                                            I would add the caveat that this technique is really good for implementing existing languages. If you are designing a language from scratch, it might be helpful to use some kind of parser generator to guide the design.

                                                                            In summary, I would say the parsing method you use depends very heavily on the language. And many languages are compositions of multiple languages. I don’t have experience with Earley but I’m sure the same thing is true: it’s good for some things and not good for other things.

                                                                            [1] http://www.oilshell.org/blog/2017/12/17.html

                                                                            [2] http://www.oilshell.org/blog/2016/11/02.html

                                                                          2. 2

                                                                            Have you heard about parsing with derivatives?

                                                                            I recently explored the possibility of doing this with Clojure’s spec. Here’s an example:

                                                                            (s/conform ::s-expression "(abc  def)")
                                                                            ⇒
                                                                            [[:sub {:left  \(
                                                                                    :nest  [[:atom [\a \b \c]]
                                                                                            [:whitespace [\space \space]]
                                                                                            [:atom [\d \e \f]]]
                                                                                    :right \)}]]
                                                                            

                                                                            Notice that in the third line, the whitespace has been captured. A linter could examine whitespace, and a compiler/interpreter could throw it out. Specs can be bidirectional too, so a formatting tool could take a data structure like the above, adjust the whitespace, and unform it, producing formatted source code.

                                                                            Regarding returning a parse forest, the author of parsing with derivatives goes into that here. On the next slide, the author alludes to the possibility of modifying the grammar during parsing (!) which may be of particular interest to you.
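
                                                                            For intuition, here’s the core idea in miniature: Brzozowski derivatives for plain regular expressions, sketched in C. The parsing-with-derivatives work generalizes this to context-free grammars, and leans on memoization and compaction to keep it fast (this sketch does neither, and leaks memory):

                                                                              #include <stdio.h>
                                                                              #include <stdlib.h>

                                                                              typedef enum { EMPTY, EPS, CHR, ALT, CAT, STAR } Tag;
                                                                              typedef struct Re { Tag tag; char c; struct Re *a, *b; } Re;

                                                                              static Re *mk(Tag t, char c, Re *a, Re *b) {
                                                                                  Re *r = malloc(sizeof *r);
                                                                                  r->tag = t; r->c = c; r->a = a; r->b = b;
                                                                                  return r;
                                                                              }

                                                                              /* Does the language of r contain the empty string? */
                                                                              static int nullable(const Re *r) {
                                                                                  switch (r->tag) {
                                                                                  case EPS: case STAR: return 1;
                                                                                  case ALT: return nullable(r->a) || nullable(r->b);
                                                                                  case CAT: return nullable(r->a) && nullable(r->b);
                                                                                  default: return 0;  /* EMPTY, CHR */
                                                                                  }
                                                                              }

                                                                              /* Derivative of r with respect to x: the language of everything
                                                                               * that may follow an initial x in a string matched by r. */
                                                                              static Re *deriv(Re *r, char x) {
                                                                                  switch (r->tag) {
                                                                                  case CHR:
                                                                                      return mk(r->c == x ? EPS : EMPTY, 0, NULL, NULL);
                                                                                  case ALT:
                                                                                      return mk(ALT, 0, deriv(r->a, x), deriv(r->b, x));
                                                                                  case CAT: {
                                                                                      Re *d = mk(CAT, 0, deriv(r->a, x), r->b);
                                                                                      return nullable(r->a) ? mk(ALT, 0, d, deriv(r->b, x)) : d;
                                                                                  }
                                                                                  case STAR:
                                                                                      return mk(CAT, 0, deriv(r->a, x), r);
                                                                                  default:
                                                                                      return mk(EMPTY, 0, NULL, NULL);  /* EMPTY, EPS */
                                                                                  }
                                                                              }

                                                                              /* Matching: take the derivative per character, then ask nullable. */
                                                                              static int match(Re *r, const char *s) {
                                                                                  for (; *s; s++) r = deriv(r, *s);
                                                                                  return nullable(r);
                                                                              }

                                                                              int main(void) {
                                                                                  /* (a|b)* */
                                                                                  Re *re = mk(STAR, 0, mk(ALT, 0, mk(CHR, 'a', NULL, NULL),
                                                                                                                  mk(CHR, 'b', NULL, NULL)), NULL);
                                                                                  printf("%d\n", match(re, "abba"));  /* 1 */
                                                                                  printf("%d\n", match(re, "abc"));   /* 0 */
                                                                                  return 0;
                                                                              }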

                                                                        1. 8

                                                                          High line coverage doesn’t necessarily imply covering enough of the state space. (Branch coverage is only slightly better in this regard.) As the post notes, testing with groups of input that correspond to different equivalence classes is more meaningful, because it exercises the code under test in several different ways.

                                                                          There’s a problem hiding in plain sight, though: how often do we know all the edge cases?

                                                                          Property-based testing, fuzzing, model checking, static analysis, and symbolic interpretation help here: property-based testing and fuzzing do guided random exploration of the state space and report the edge cases they find (known and unknown), while model checking, static analysis, and symbolic interpretation reason about possible behavior within the state space as a whole (possibly up to some size bound).
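
                                                                          As a toy version of the random-exploration half (a bare-bones loop, not any particular library’s API), here’s a property test in C that trips over the classic midpoint-overflow edge case:

                                                                            /* Property under test: midpoint(lo, hi) stays within [lo, hi].
                                                                             * Random inputs, fixed seed, no shrinking (real libraries add that). */
                                                                            #include <stdio.h>
                                                                            #include <stdlib.h>

                                                                            static int midpoint(int lo, int hi) {
                                                                                return (lo + hi) / 2;  /* classic bug: lo + hi can overflow (UB; wraps on typical targets) */
                                                                            }

                                                                            static int random_int(void) {
                                                                                /* crude full-range generator; real libraries also bias toward edge cases */
                                                                                unsigned u = ((unsigned)rand() << 16) ^ (unsigned)rand();
                                                                                return (int)u;
                                                                            }

                                                                            int main(void) {
                                                                                srand(12345);  /* fixed seed, so any failure is reproducible */
                                                                                for (int trial = 0; trial < 100000; trial++) {
                                                                                    int lo = random_int(), hi = random_int();
                                                                                    if (lo > hi) { int t = lo; lo = hi; hi = t; }
                                                                                    int mid = midpoint(lo, hi);
                                                                                    if (mid < lo || mid > hi) {
                                                                                        printf("counterexample: lo=%d hi=%d mid=%d\n", lo, hi, mid);
                                                                                        return 1;
                                                                                    }
                                                                                }
                                                                                printf("no counterexample found\n");
                                                                                return 0;
                                                                            }

                                                                          A single trial gives midpoint() 100% line coverage; it takes inputs from the right equivalence class (two large values of the same sign) to actually expose the bug.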

                                                                          1. 1

                                                                            If you know a line is run during a test suite, it tells you nothing about whether it’s well tested.

                                                                            However, if you know the line is not executed during a test suite, that does indeed tell you something about how well tested it is.

                                                                            I don’t work towards 100% coverage - I work away from having code I know isn’t tested. It looks like the same thing from a distance, but it’s an important distinction to me.

                                                                            1. 2

                                                                              Absence of coverage can definitely be a useful insight. Line coverage can be an absence of evidence, but a lack of line coverage is evidence of absence. :) It just seems to lose something in translation when that coverage becomes a metric that triggers failures in CI, though.

                                                                              (Also, if the test inputs generated by property-based tests still aren’t covering parts of a program, it’s a sign the generator needs more variety.)

                                                                            2. 1

                                                                              I like to move this back to the root: eliminate edge cases as much as possible.

                                                                            1. 6

                                                                              I’m actively working on multi-core support for theft again. While I’m still integrating the code for shrinking failures, running individual tests parallelizes nicely: in some early/casual benchmarks, running 10,000 trials of a CPU-bound test on a 4-core system indeed runs ~4x as fast as before. Shrinking should also be well-suited to parallel exploration. Once that’s working, I’ll do more substantial benchmarking, on a system with 40 cores.
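
                                                                              The shape of it is roughly this, as a hypothetical pthreads sketch (not theft’s actual API): each worker gets its own PRNG state and an equal share of the trials, so there’s no contention until the final tally.

                                                                                /* Embarrassingly parallel trial running (build with: cc -pthread).
                                                                                 * Hypothetical sketch only -- not theft's API. */
                                                                                #include <pthread.h>
                                                                                #include <stdio.h>
                                                                                #include <stdlib.h>

                                                                                #define WORKERS 4
                                                                                #define TRIALS  10000

                                                                                struct worker {
                                                                                    pthread_t tid;
                                                                                    unsigned seed;   /* per-worker PRNG state: no shared mutable state */
                                                                                    int trials;
                                                                                    int failures;
                                                                                };

                                                                                /* Stand-in for a real property: generate an input, check an invariant. */
                                                                                static int property_holds(unsigned *seed) {
                                                                                    int x = (int)(rand_r(seed) % 1000);
                                                                                    return x * 2 / 2 == x;
                                                                                }

                                                                                static void *run_trials(void *arg) {
                                                                                    struct worker *w = arg;
                                                                                    for (int i = 0; i < w->trials; i++)
                                                                                        if (!property_holds(&w->seed)) w->failures++;
                                                                                    return NULL;
                                                                                }

                                                                                int main(void) {
                                                                                    struct worker ws[WORKERS];
                                                                                    for (int i = 0; i < WORKERS; i++) {
                                                                                        ws[i] = (struct worker){ .seed = 0x9e3779b9u * (i + 1),
                                                                                                                 .trials = TRIALS / WORKERS };
                                                                                        pthread_create(&ws[i].tid, NULL, run_trials, &ws[i]);
                                                                                    }
                                                                                    int failures = 0;
                                                                                    for (int i = 0; i < WORKERS; i++) {
                                                                                        pthread_join(ws[i].tid, NULL);
                                                                                        failures += ws[i].failures;
                                                                                    }
                                                                                    printf("%d trials, %d failures\n", TRIALS, failures);
                                                                                    return failures != 0;
                                                                                }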

                                                                              It may be a couple more weeks before the next release, but in the time I’ve found for it, I’m making steady progress.

                                                                              Incidentally, taking an existing project and restructuring it so that individual units of work can be relocated to different processes, moved around a network, etc. is an interesting exercise. A lot of implicit decisions become explicit. Thinking about what actually needs to be passed along in messages to the next phases (rather than everything staying in scope by default, due to sequential computational inertia) draws attention to implementation details hiding in plain sight. This restructuring is also a bit tedious in C, because, well, C isn’t Erlang. Pattern matching and tagged tuples would help a lot, something like the sketch below…
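
                                                                              For illustration, a tagged union is about as close as C gets; every name here is made up:

                                                                                /* A C approximation of Erlang-style tagged tuples for work messages.
                                                                                 * All names are hypothetical, for illustration only. */
                                                                                #include <stdio.h>

                                                                                enum msg_tag { MSG_RUN_TRIAL, MSG_TRIAL_RESULT, MSG_SHUTDOWN };

                                                                                struct msg {
                                                                                    enum msg_tag tag;
                                                                                    union {
                                                                                        struct { unsigned seed; unsigned trial_id; } run_trial;
                                                                                        struct { unsigned trial_id; int passed; } trial_result;
                                                                                    } u;
                                                                                };

                                                                                /* The switch is C's clunky stand-in for pattern matching. */
                                                                                static void handle(const struct msg *m) {
                                                                                    switch (m->tag) {
                                                                                    case MSG_RUN_TRIAL:
                                                                                        printf("run trial %u (seed %u)\n",
                                                                                               m->u.run_trial.trial_id, m->u.run_trial.seed);
                                                                                        break;
                                                                                    case MSG_TRIAL_RESULT:
                                                                                        printf("trial %u %s\n", m->u.trial_result.trial_id,
                                                                                               m->u.trial_result.passed ? "passed" : "failed");
                                                                                        break;
                                                                                    case MSG_SHUTDOWN:
                                                                                        printf("shutting down\n");
                                                                                        break;
                                                                                    }
                                                                                }

                                                                                int main(void) {
                                                                                    struct msg m = { .tag = MSG_RUN_TRIAL,
                                                                                                     .u.run_trial = { .seed = 42, .trial_id = 1 } };
                                                                                    handle(&m);
                                                                                    return 0;
                                                                                }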

                                                                              I started learning x86-64 assembly a couple of weeks ago, and have been doing Project Euler problems for practice. I’m still only a couple of problems in, because I’ve focused on better understanding details like the costs of different approaches. The early problems have just the right level of difficulty for useful self-study. The exercise also gave me some context that was useful for Meltdown & Spectre-related investigations last week.

                                                                              Finally, I am working on a few submissions for the IOCCC. :D

                                                                              1. 2

                                                                                While both are probably a few years out of date, here’s a post about building a 20-core (40-thread) server with 64 GB of RAM on roughly the same budget. Building with nine Raspberry Pis doesn’t seem that efficient, unless your goal is also to “make[] parallel concepts explicit”.

                                                                                (I haven’t done this yet, but it’s tempting, especially as I’m working on multi-core support for theft…)

                                                                                1. 3

                                                                                  Between fighting a cold and jury duty, not so much reading lately. I just started re-reading Alexey Radul’s thesis, Propagation Networks: A Flexible and Expressive Substrate for Computation, though.

                                                                                  For networking, here are a couple of different books I’ve found helpful. Each has a pretty different angle; at least one will probably resonate with you.

                                                                                  If you’re working in C, I wrote socket99 to make it simple to set up several common cases of the BSD sockets interface, using C99 designated initializers. (For example, opening a socket for an async TCP server.) The BSD socket API is especially gnarly; see the sketch below.

                                                                                  (This is paraphrased from a comment I left on metafilter a while ago, FWIW.)
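
                                                                                  For a sense of what it wraps, here’s the raw ceremony for a non-blocking TCP listener in the plain BSD sockets API (a sketch that skips proper error reporting; socket99 collapses this into one config struct):

                                                                                    #include <sys/types.h>
                                                                                    #include <sys/socket.h>
                                                                                    #include <netdb.h>
                                                                                    #include <fcntl.h>
                                                                                    #include <string.h>
                                                                                    #include <unistd.h>
                                                                                    #include <stdio.h>

                                                                                    /* Open a non-blocking TCP listening socket on a port, or return -1. */
                                                                                    static int open_tcp_listener(const char *port) {
                                                                                        struct addrinfo hints, *res = NULL;
                                                                                        memset(&hints, 0, sizeof hints);
                                                                                        hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
                                                                                        hints.ai_socktype = SOCK_STREAM;  /* TCP */
                                                                                        hints.ai_flags = AI_PASSIVE;      /* suitable for bind() */

                                                                                        if (getaddrinfo(NULL, port, &hints, &res) != 0) return -1;

                                                                                        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
                                                                                        if (fd == -1) { freeaddrinfo(res); return -1; }

                                                                                        int yes = 1;
                                                                                        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

                                                                                        if (bind(fd, res->ai_addr, res->ai_addrlen) == -1 ||
                                                                                            listen(fd, SOMAXCONN) == -1 ||
                                                                                            fcntl(fd, F_SETFL, O_NONBLOCK) == -1) {
                                                                                            close(fd);
                                                                                            freeaddrinfo(res);
                                                                                            return -1;
                                                                                        }
                                                                                        freeaddrinfo(res);
                                                                                        return fd;
                                                                                    }

                                                                                    int main(void) {
                                                                                        int fd = open_tcp_listener("8080");
                                                                                        printf("listener fd: %d\n", fd);
                                                                                        if (fd != -1) close(fd);
                                                                                        return 0;
                                                                                    }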

                                                                                  Also, feoh recommended The Linux Programming Interface. While it isn’t primarily a networking book, it does explain the socket interfaces and such in depth, among many other things. Stevens’ Advanced Programming in the UNIX Environment covers a lot of the same material, in a less Linux-specific way (which can be a pro or con). While they mostly overlap, one or the other might seem clearer for particular topics.