1. 5

    Parsing Models Cheatsheet

    I recently got the Earley parsing religion, which is O(n³) worst-case just like the other general CFG algorithms, but can handle most sensible CFGs in linear time and space, even with unbounded lookahead. No shift/reduce errors like yacc, no accidental infinite recursions like PEG parsers, just reliable operation.

    1. 1

      By the way, I think GLL parsing can be as elegant and concise as Earley. I haven’t gotten around to translating it to Python, though.

      1. 1

        The religion has its appeal… but doesn’t get the job done on its own. Can you recommend some implementations?

        1. 2

          If you are looking for a simple, well-explained Python implementation that includes the related optimizations such as Aycock’s fix and Leo’s right-recursion optimization, do look at ours. We are writing it as part of a larger book on fuzzing with grammars.

          1. 1

            Thanks! That book looks quite useful. I will share it around.

          2. 1

            The industrial-strength implementation is Marpa, which is a C library. There are bindings for various languages, but the original author’s Perl binding is the most complete.

            Searching for “earley” on nearly any language package manager will probably give you a bunch of implementations of varying quality.

            Naturally, I’m writing my own: as a learning exercise, and because I’ve got Strong Opinions about documentation quality I’d like to demonstrate. I haven’t yet got it to the point of public presentability, but it’s coming along nicely.

            1. 1

              Would love to see your progress. Are you going the SPPF route or are you building the trees after the Earley algorithm has completed?

              Here is our attempt at explaining Earley parsing in Python.

              1. 1

                I currently have zero documentation, but if you’d like to take a look at the source, I put it on sr.ht.

                I confess I don’t know exactly what SPPF actually is, except that Loup Vaillant couldn’t understand Scott’s paper about it, and Jeffrey Kegler independently invented something equivalent which he doesn’t really explain. Basically, I build a parse-tree node for each completed item by stitching together parse-tree nodes for non-terminal symbols that appear in the right order and cover the required span. I happen to do it while the parse is in progress, but I think the same approach would work if done afterward.
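
                In rough Python, one stitching step might look like this (illustrative names only, not my actual code; `item.name`, `item.rhs`, `item.start`, `item.end` and the `trees` map are assumptions, and ambiguity is ignored by greedily taking the first candidate span):

                def stitch(item, tokens, trees, grammar):
                    # trees maps (nonterminal, start, end) -> an already-built node
                    children, pos = [], item.start
                    for sym in item.rhs:
                        if sym in grammar:   # nonterminal: reuse a finished subtree
                            end = next(e for (n, s, e) in trees if n == sym and s == pos)
                            children.append(trees[(sym, pos, end)])
                            pos = end
                        else:                # terminal: it matched one token here
                            children.append((tokens[pos], []))
                            pos += 1
                    assert pos == item.end   # the children exactly cover the item's span
                    return (item.name, children)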

                Thank you very much for that link! I just discovered the hard way that the Leo optimization breaks naïve parse-tree construction, and yours is the first document I’ve seen that seems to acknowledge that’s an issue. Unfortunately, it only mentions it in an exercise, so there’s no worked solution, but at least there’s a hint which is more than I’ve found anywhere else.

                1. 1

                  Unfortunately, it only mentions it in an exercise,

                  The solution is there! Look at the LeoParser that uses the Tagged class. (Most exercises do come with a worked solution.)

                  Thank you for your link.

                  1. 1

                    Ah, right! It was late at night when I first looked, now that I’ve had some rest I can see the solution.

                    Although I’m sure I’d fare better if I’d used your tutorial from the beginning, I’ve had some trouble following the code examples:

                    • Exercise 5 has a big block of text describing the concept of a deterministic reduction path, and then it has a block of code that adds a tag field to the State class. Nothing in the text motivates this addition, and it’s not actually used in the following LeoParser class.
                    • The tag field is eventually used in the solution to that first exercise, but very subtly: inside the new leo_complete() method, a single call is changed to take an extra parameter. That is the only change from the previous leo_complete() function, and there is no comment pointing out the change, nor even a keyword argument tag=state.name to highlight that something is being done with tags.
                    • It turns out that tags are not even necessary to make a right-recursion-optimised recogniser, only a parser, so this is all a distraction from the idea (“deterministic right reduction”) that the text is trying to convey, and it would be very easy for a reader to ignore this scaffolding and assume they’ve understood the material.
                    • By the time we get to the hint that says “any time you see a tagged state, look at its end point”, there hasn’t been any discussion of how, when or why a state gets tagged, or what a state gets tagged with. It turns out that the code does provide answers to most of those things, but (as I said) it’s pretty subtle.
                      • EDIT: Now that I look more closely, the state is tagged with the LHS of the production that was originally produced, but the code never examines the tag content, so effectively the tag is a boolean “produced by Leo reduction” flag. Is that right?
                    • Another thing that confused me: inside the parse() method of the final LeoParser, it starts by calling parse_prefix(), checking the result, and then calling chart_parse(). If I understand correctly, that code will recognise the whole input string, then throw away the intermediate and final results, then start parsing again from scratch?
                    1. 1

                      Thank you for the detailed review. While the rest of the text has gone through multiple revisions, the exercises haven’t received as much care as the main body, beyond checking that the idea is correct and the implementation works. I will go through the exercise again, clarify the points you have noted, and update you.

        1. 1

          You can find me at @scholar.social/@rahul (well, trying to prefer that to Twitter anyway).

          1. 2

            Deleted my original rant response because, as @sgreben rightfully pointed out, my rant was exactly the kind of thing I ranted about.

            But, yes, I’d like to see a freeze on new users for a while, to keep lobste.rs from collapsing under its own weight.

            I’d also suggest:

            1. Culling users who have been new members for X months and never posted (but once you post/comment once, you’re in for forever?)
            2. Requiring more than one invitation for joining?
            3. Moving to a subscription model. I get enough enjoyment out of this site that I’d be happy to pay for it.

            But, that’s just my USD$0.02.

            1. 15

              Culling users who have been new members for X months and never posted (but once you post/comment once, you’re in for forever?)

              I will note that there are people who primarily lurk in any online community. This site has a private message system and possibly other features of value to members who never post publicly.

              Original research I was privy to in my first moderating position suggested that about 20 percent of users were active participants who posted regularly or semi-regularly, another 10 percent posted only once or very rarely, and the rest lurked. Anecdotal observation suggests that these figures are fairly representative of other communities I have engaged in.

              1. 8

                Lurkers are harmful to communities like this: they have influence in shaping the site, but they don’t bother to engage beyond being a silent majority that can be pandered to (purposefully or not), and so they amplify any democratic issues the site might have.

                Better to purge them and leave control of the site (what little there is) in the hands of the people who bother participating.

                Edit: lurkers here being those who have accounts but don’t post.

                1. 7

                  Hi friendlysock, I’m malxau and I’m a lurker.

                  The reason I ended up like this is that the technology landscape is very broad today (and getting broader), and I have firsthand knowledge or experience with only a tiny fraction of the topics that get discussed. So the best way I can see to keep the signal-to-noise ratio high is to read about things that I don’t know, including comments from people more familiar with them, and to avoid contributing to or moderating those posts.

                  Occasionally there will be something I know, but something I deeply know and have firsthand knowledge of is still rather rare. (In my case, I’ve spent the last 14 years working in Windows kernel mode; I’m an active practitioner, but looking at submissions you’ll see why I don’t feel like I know the breadth of topics being discussed, including things like the Palantir thread.)

                  Do you still think I’m a problem? Do you think the site would work better if I commented or moderated more?

                  1. 4

                    I can’t see your upvotes or flags, so I can’t comment on that front. That said, I think the site would definitely be improved by your participation and by submissions relating to your background in arcane Windows programming!

                    Thank you for giving your perspective here.

                    1. 1

                      Your site was refreshingly different, since it covered stuff I don’t usually see on Lobsters. Doing low-level kernel work, I bet you ran into content and lessons learned that Lobsters might have found interesting regardless of your writing on Windows. There are also Lobsters on Windows, and a lot of Lobsters who hate Windows.

                      I have no idea how well your stuff would’ve been received, but there’s a chance people might have found it interesting. If it’s Windows-like as someone said, an easy example is Minoca OS getting lots of appreciation; another thread on its documentation had 10 votes. So there’s potential.

                    2. 6

                      Hey there. That seems like a fairly strong opinion. Any research or data you can point me to? I’m not aware of evidence that lurkers are somehow harmful in most cases.

                      1. 5

                        Have you seen HN or Reddit? I’m serious. It’s called hivemind for a reason.

                        People that care enough about a site to post content, or even comment, are, by definition, more involved in the site than users who maintain accounts but don’t do anything but vote up and down.

                        Lurkers who just vote and flag look an awful lot like slacktivists. They’re freeloaders, contributing no content of their own and no discussion, but they can still screw up conversations by voting with a knee-jerk reaction.

                        One of the things that sets Lobsters apart is that it is made up largely of people who actually write code frequently (instead of, say, growth hackers, or bloggers, or marketers, or bankers, or whatever else), and that those people are given transparency and tools for interacting with the running of the community. Lurkers run counter to at least the latter of those key characteristics.

                        1. 11

                          Yes, I’ve seen both HN and Reddit.

                          I don’t think I’ve ever seen a forum that didn’t have a lot of lurkers. Do you know of any forums where “post or leave” is actual policy? Do you know of any research on this angle?

                          I’m not making any recommendations here. I’m just seeing people saying “I think we should do X!” and the things I’m seeing don’t fit with my understanding of best practices. But I certainly don’t know everything, so I’m trying to share what I know concerning actual (pertinent) data and asking if anyone knows of any supporting research for their positions.

                          To be clear, I’m absolutely not trying to tell anyone how lobsters should be run. I was given an invitation by a coder who wants to start a discussion board and he asked if I would consider taking on the role of lead moderator. I tentatively agreed.

                          So I’m not actually a programmer, though I have some technical training and so on. I’m genuinely interested in learning if there is good data and research supporting the various proposals in this discussion because I’m looking for, among other things, stuff pertinent to the project I’m trying to collaborate on.

                          I’m genuinely curious and open to seeing good information on such things. I’m aware these questions may be unwelcome here, both because I’m new and because people will tend to interpret my comments as intent to shape policy on lobsters the very day I joined.

                          A best case outcome is that my comments and questions serve to be helpful and thought provoking for people here who are trying to shape lobsters while I get useful resources to support my project. But a less nice and more likely outcome is that people decide my questions are somehow bad behavior and I get told to gtfo of the discussion or something.

                          1. 7

                            I’d never thought about how lurkers skew voting until this thread, but it seems commonsensical now. You end up with the posters performing for a silent audience, instead of interacting with each other.

                            Maybe a half-measure we could try is giving people a pool of votes that’s replenished when you post, and you spend from that pool when you up or down a story or comment; one post (submission or comment) could earn you 10 votes or something. That way votes come from the people who are actually engaging with the site, but we’re not kicking anyone off for not being chatty.
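
                            Mechanically, I’m imagining something like this (a toy sketch; the names and numbers are made up, purely to make the proposal concrete):

                            VOTES_PER_POST = 10   # made-up number, tune as needed

                            class Member:
                                def __init__(self):
                                    self.vote_pool = 0

                                def on_post(self):            # called for each submission or comment
                                    self.vote_pool += VOTES_PER_POST

                                def try_vote(self):           # called for each up/down vote
                                    if self.vote_pool <= 0:
                                        return False          # out of votes: go engage first
                                    self.vote_pool -= 1
                                    return True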

                            1. 10

                              Maybe a half-measure we could try is giving people a pool of votes that’s replenished when you post

                              No no no no no no no. That would result in users creating a large number of low-effort comments in order to refuel. It’s bad enough that internet users will do almost anything to make a number go up. It’s even worse when you attach actual incentives to that number.

                              1. 3

                                We could do something like requiring a comment/post to have at least +3 or something before it counts towards your vote pool; that might be enough to frustrate a lot of the system-gaming, no?

                                1. 1

                                  The low-effort posts on popular topics get lots of votes. Probably won’t work.

                              2. 2

                                I’d never thought about how lurkers skew voting until this thread, but it seems commonsensical now. You end up with the posters performing for a silent audience, instead of interacting with each other.

                                This is an empirical question worth empirically validating before believing. There is also a plausible just-so story that older users feel more confident voting strategically to enforce their political opinions, etc. Form your hypothesis, write a query, decide how to interpret possible results, and then send it to me to run.

                                1. 1

                                  That’s a neat idea and I’d be in favor of trying it. I don’t know to what extent that would affect the upvote/downvote dynamics of the site, but I’m interested in finding out, and I don’t think it’s an onerous requirement on people.

                                  1. 1

                                    a pool of votes that’s replenished when you post, and you spend from that pool when you up or down a story or comment; one post (submission or comment) could earn you 10 votes or something.

                                    I think this is a great idea. Personally I would go with 1-2 votes per submission, but whatever the number, I think we should try it.

                                    1. 1

                                      Yeah; I originally said 10 because voting serves a real purpose, and I’d worry that only getting one vote per comment could reduce the quality of the front page, because people would hoard their precious votes. I’m no expert on this stuff, though.

                                    2. 1

                                      This idea sounds great. I’m not sure what the dynamics would look like, but I’d be interested in trying it out.

                                    3. 6

                                      but they can still screw up conversations by voting with a knee-jerk reaction

                                      Yes, voting does screw up conversations. If I had my way, lobsters wouldn’t have votes on comments, exactly because I don’t think that meaningful conversations should be democratized like that. Lobsters isn’t a very good system for conversations in my very humble opinion (I keep linking to Discourse.org as the model to live up to for a reason). But I don’t think lurkers are necessarily any worse at knee-jerk voting than active commenters.

                                      Lobsters is, however, pretty much the gold standard for link aggregation, for surfacing content from elsewhere. Voting, flagging, and submitting articles without ever commenting are things I think we should be encouraging, because that’s what the Lobsters software is actually good at. Fewer conversations, more stories.

                                  2. 5

                                    Voting satisfies the “me too” impulse. Absent that, I suspect you’d see a lot more actual “me too” comments.

                                    1. 4

                                      If you change the rules to ‘post or get out!’, I suspect you will see:

                                      1. People who are slow to integrate into the community but will eventually post good stuff lose their connection to lobsters and go elsewhere instead of slowly ramping up from just looking to joining to voting to commenting/submitting.
                                      2. Lots of comments along the lines of “I have nothing to say right now, I’m just trying to say something so I don’t get purged”
                                      1. 4

                                        Voting lurkers could indeed be problematic. Perhaps adding a min-karma threshold for upvoting (similar to flagging) could be a useful experiment.

                                        1. 1

                                          Is the problem with vote lurking the up or down votes?

                                        2. 3

                                          Downvotes are inaccessible until a user reaches a certain karma threshold. Would it make sense to do the same thing for upvotes too, reducing the pool of users who can vote?

                                          I don’t think outright purging users is very helpful, since reading for a while before posting is a common practice (and probably not something that should be discouraged). I agree that a silent voting majority is potentially quite harmful to a forum.

                                          1. 1

                                            reading for a while before posting is a common practice (and probably not something that should be discouraged)

                                            You don’t need an account to read.

                                            1. 2

                                              You don’t, but there are features that are useful for people who are only reading (tag filtering, hiding stories).

                                            2. 1

                                              That inversion is worth thinking on more. The political folks currently do more upvoting of political stuff than submissions or comments. It isn’t limited to them. We see the same thing in the technical threads for some people or types of comments.

                                              1. 1

                                                I was under the impression votes were anonymous, is this not correct?

                                                1. 1

                                                  The site won’t tell other users what your votes are, but it needs to know, both to prevent multiple votes and to show you what you’ve voted on. Obviously the site administrators, who have direct database access, can query that information.

                                                  1. 1

                                                    This is accurate, and I’ve written elsewhere in this thread about that access and the practices around it.

                                                  2. 1

                                                    They usually vote and comment together. So, you know who some of the likely voters are.

                                              2. 2

                                                How about limiting the votes one has? Dota 2 does that for reports to keep the reporting system valuable. One of:

                                                • fixed number of votes per time-unit (easiest, but limited impact I think)
                                                • votes per time-unit limited by karma, e.g. votes * karma / maxKarma (could become a Lobsters in-game currency)
                                                • votes per time-unit limited by submission count (facilitates spamming)
                                                • votes per time-unit limited by combined submission count and karma (I don’t have an idea for a good function to do that ;)

                                                This should at least limit the lurker influence. I for one wouldn’t care if I had to manage my votes a bit.

                                                Edit: haldean had posted this idea before me, I should have read this thread more thoroughly :)

                                                1. 3

                                                  If the intent is to limit the effect of upvotes, and avoid knee-jerk voting, one could also make it mirror the current downvote choices and simply make a user think about why they are up-voting a comment. So an upvote arrow should offer choices such as [technical|meta|..].

                                                  1. 1

                                                    Or “MAS” for “mutual appreciation society” ;)

                                                2. 2

                                                  Wouldn’t that just cause stupid posts like “not lurker” or “first” to trigger account “lock-in”, possibly even on very old threads?

                                                  1. 1

                                                    My concern with casting a negative eye toward people like myself who don’t post much is that it suggests posting is mandatory regardless of quality or relevance. I am a lurker, but only because I don’t want to clutter up threads with poorly informed or non-technical content. I wish I had the depth of experience that some more frequent posters have; should I be excluded for being more of a generalist?

                                              1. 3

                                                The hope that 90% of mutants would be killed by a type-checker might be optimistic. We did a study on Python programs before and after MyPy type signatures were added (the study is rather tiny, but useful as an indicator); look at Table II for the comparison between types and tests in killing mutants.

                                                1. 4

                                                  I don’t know if the author is being sarcastic, but it fits COBOL to a T, including the fixed-point calculations, and COBOL is even listed. Can OP confirm?

                                                  1. 3

                                                    The buttons at the bottom are clickable.

                                                    1. 4

                                                      The post is an extreme case of burying the lede.

                                                      1. 1

                                                        Ah, missed that :(

                                                    1. 14

                                                      The author has a pretty explicit bias toward Earley parsing and toward using LL(k) over LR(k) methods (a bias I mostly share). There’s a wealth of information here, but he’s missing a couple of other significant strands of generalized parsing research outside of Marpa’s particular Earley variant (perhaps because they don’t fit within his narrative).

                                                      GLR (Generalized LR):

                                                      • Tomita, LR parsers for natural languages, 1984.
                                                      • Tomita, Efficient parsing for natural language, 1986.
                                                      • Scott and Johnstone, Generalised bottom up parsers with reduced stack activity, 2005. This segues into their later work with Earley and GLL.

                                                      GLL (Generalized LL):

                                                      • Scott and Johnstone, GLL Parsing, 2010.
                                                      • Afroozeh and Izmaylova, Faster, Practical GLL Parsing, 2015.

                                                      Outside of Earley, GLL seems like a very practical generalized parsing approach. Instaparse is one implementation.

                                                      Earley / Shared Packed Parse Forests:

                                                      • Scott, SPPF-style parsing from Earley recognisers, 2008 (and several related papers by Scott and Johnstone). Note: I’ve never been able to get the approach described in the paper implemented correctly, and not for lack of trying.

                                                      Earley Intersection Parsing (not sure if there’s a canonical name for this):

                                                      • Bar-Hillel, Perles, and Shamir, On formal properties of simple phrase structure grammars in Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 1961. Proves some important results about intersecting automata and context free grammars.
                                                      • Chapter 13 (“Parsing as Intersection”) of Grune and Jacobs (also cited elsewhere in his timeline), particularly 13.4 (pp. 437–439): this describes Bar-Hillel’s intersection parsing approach using contemporary CS and parsing terminology, and then suggests combining it with Earley parsing (a rough sketch follows this list). While the basic intersection parsing method produces an astonishing number of dead-end nodes to garbage-collect later, Earley parsers limit searching to productions reachable from the start symbol. If the completer is modified to produce non-terminals in the intersection parsing format (which is easy), intersection parsing nicely handles producing the parse forest (with structural sharing, when ambiguity produces multiple derivations).
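
                                                      For concreteness, here is a tiny Python sketch of the raw Bar-Hillel construction (my own illustrative code, not the Earley-guided variant): the input string is treated as a linear automaton with states 0..n, and nonterminals of the intersection become (start, symbol, end) triples.

                                                      from itertools import combinations_with_replacement

                                                      def intersect(grammar, tokens):
                                                          n = len(tokens)
                                                          prods = []
                                                          for lhs, alts in grammar.items():
                                                              for alt in alts:
                                                                  k = len(alt)
                                                                  # every nondecreasing choice of k+1 positions splits the span
                                                                  for qs in combinations_with_replacement(range(n + 1), k + 1):
                                                                      rhs = [(qs[i], alt[i], qs[i + 1]) for i in range(k)]
                                                                      # keep terminal triples only if they scan the right token
                                                                      if all(q == p + 1 and tokens[p] == t
                                                                             for (p, t, q) in rhs if t not in grammar):
                                                                          prods.append(((qs[0], lhs, qs[-1]), rhs))
                                                          return prods

                                                      Even on short inputs this emits piles of dead-end productions, which is exactly what guiding the construction with Earley items prunes away.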

                                                      I’ve been working on a C library for Earley Intersection parsing, and an awk-like, pattern-match-directed language based on it, but working on testing them thoroughly tends to lead me to working on theft instead.

                                                      1. 3

                                                        Thanks for details about alternatives.

                                                        I have played around with Bison’s GLR option and have almost managed to create a C parser that does not require special handling of typedefs in the lexer (ambiguities still appear at runtime for a few constructs).

                                                        Grune and Jacobs’ Parsing Techniques - A Practical Guide is sitting in a pile waiting to be read (the first edition was manageable, the second looks daunting).

                                                        1. 1

                                                          I have a print copy of the second, and it’s never struck me as daunting; reading it cover to cover might be, but it’s well suited to dipping into for specific techniques, with a well-curated bibliography for further details.

                                                        2. 2

                                                          packrattle is another example of a GLL parser.

                                                          The times I’ve read about Earley parsers, they seem like the same solution as GLL, but attacking the problem from a different angle. Hopefully someone in academia will eventually prove that.

                                                          1. 2

                                                            I’m particularly interested in Earley parsers because some use cases of mine assume ambiguity (reverse-engineering binaries, scraping damaged data), and the chart that Earley parsers build up supports several operations beyond just recognizing & building parse trees:

                                                            • what terminals/nonterminals would advance the current parse(s) in progress? (i.e., autocomplete; see the sketch after this list)
                                                            • if we don’t assume where the starting position is, what overlapping instruction encodings are there?
                                                            • if we inserted a fake nonterminal here, would enough of the parse have completed to match any instances of (some structure) between here and there?
                                                            • incremental re-parsing of changing input, since earlier columns in the chart are constant once the parser has moved on.
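
                                                            For the autocomplete case, a minimal sketch (assuming Earley items that expose finished() and symbol(), as in most implementations, e.g. the Python one elsewhere in this thread):

                                                            def expected_next(column, grammar):
                                                                # symbols that would advance at least one in-progress item
                                                                syms = {st.symbol() for st in column.states if not st.finished()}
                                                                terminals = {s for s in syms if s not in grammar}
                                                                return terminals, syms - terminals   # (completion chars, pending rules)
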
                                                          2. 2

                                                            Thank you for adding this. I understand why everybody is excited about Earley parsing, though I personally don’t feel sufficiently grounded in it to share that excitement, but at the very least other lines of research deserve to be mentioned.

                                                            1. 1

                                                              I apologize for reviving this thread, but do you know if Joop Leo’s improvements for right recursion also allow optimizing indirect right recursion?

                                                              1. 1

                                                                Not sure, sorry.

                                                            1. 3

                                                              sṛṣṭi (सृष्टि) (close enough in pronunciation, I think :)) means “creation” in Sanskrit.

                                                              1. 1

                                                                nice catch! (srshti)

                                                              1. 33

                                                                RSS is an ancient technology from Web 1.0 (“the naïve Web?”)

                                                                I’m sorry, but blogging, as an early social network based on RSS, was the hallmark of what people started to call “Web 2.0”, as opposed to “Web 1.0”, which consisted simply of human-readable web sites.

                                                                1. 6

                                                                  Yeah, that’s what I thought, too. RSS was the new stuff; the old stuff was hyperlinks and CSS menus.

                                                                  1. 5

                                                                    Wasn’t Web 2.0 about AJAX and XMLHttpRequest, i.e., making web pages interactive with asynchronous requests? I would consider Gmail to have started Web 2.0 in 2004, which was much later than RSS.

                                                                    1. 6

                                                                      I don’t think there was ever a definitive definition of Web 2.0. Part of it was the flashy client-side parts, but part was also APIs that made it possible for others to interact with services.

                                                                      For me, the quintessential Web 2.0 site was/is Flickr, where you have a content organizer, a rich API, tagging, and social features. And there’s RSS too; I still subscribe to some seldom-updated streams in my feed reader.

                                                                      We’ve moved past that now. The site/service that defeated Flickr is Instagram, which disables a lot of the stuff Flickr enabled, except tags. You can’t organize your images in folders, you can’t even decide in what order to show them, and you cannot upload other than through the app. The app is the service; the web view is basically read-only.

                                                                      1. 2

                                                                        That is true as well! Here’s what I vaguely remember.

                                                                        Even though RSS as a format was invented earlier, the “blogosphere” exploded in popularity around 2005–2006 (with popular services like WordPress and Technorati). At that point people loosely used the moniker “Web 2.0” for everything that elevated the Web platform above the old boring (D)HTML tag soup:

                                                                        • separation of (X)HTML and CSS
                                                                        • AJAX and first Web apps
                                                                        • Content aggregation through RSS/Atom
                                                                        • Microformats
                                                                        • “Semantic Web” (nobody knew what that was, and it never materialized)

                                                                        (Now we’re back to the tag soup, although it now includes all of HTML, CSS, and JS, plus a lot of ads.)

                                                                    1. 2

                                                                      I am interested in elegant algorithms.

                                                                      In an older thread on HN, a user commented that unlike Earley and GLR, GLL is extremely easy.

                                                                      But Earley is already easy enough. A Python implementation of the recognizer takes about 50 lines:

                                                                      class State(object):
                                                                          # an Earley item: a production name -> expr, a dot position,
                                                                          # and the column where the item started
                                                                          def __init__(self, name, expr, dot, s_col):
                                                                              self.name, self.expr, self.dot, self.s_col = name, expr, dot, s_col
                                                                          def finished(self): return self.dot >= len(self.expr)
                                                                          def shift(self, bp=None): return State(self.name, self.expr, self.dot+1, self.s_col)
                                                                          def symbol(self): return self.expr[self.dot] if len(self.expr) > self.dot else None
                                                                          def _t(self): return (self.name, self.expr, self.dot, self.s_col)
                                                                          def __hash__(self): return hash(self._t())
                                                                          def __eq__(self, other): return self._t() == other._t()

                                                                      class Column(object):
                                                                          # one chart column per input position; holds the items ending there
                                                                          def __init__(self, i, token):
                                                                              self.token, self.states, self._unique, self.i = token, [], {}, i

                                                                          def add(self, state, bp=None):
                                                                              if state in self._unique: return
                                                                              self._unique[state] = state
                                                                              state.e_col = self
                                                                              self.states.append(state)
                                                                      
                                                                      def predict(col, sym, grammar):
                                                                          # PREDICT: expect each alternative of sym to start at this column
                                                                          for alt in grammar[sym]:
                                                                              col.add(State(sym, tuple(alt), 0, col))

                                                                      def scan(col, state, token):
                                                                          # SCAN: move the dot over a terminal if it matches this column's token
                                                                          if token == col.token:
                                                                              col.add(state.shift())

                                                                      def complete(col, state, grammar):
                                                                          # COMPLETE: advance every parent item that was waiting on state.name
                                                                          parent_states = [st for st in state.s_col.states if st.symbol() == state.name]
                                                                          for st in parent_states:
                                                                              col.add(st.shift())
                                                                      
                                                                      def parse(words, grammar, start):
                                                                          #epsilon = nullable(grammar)
                                                                          alt = tuple(*grammar[start])   # assumes a single alternative for start
                                                                          chart = [Column(i,tok) for i,tok in enumerate([None, *words])]
                                                                          chart[0].add(State(start, alt, 0, chart[0]))

                                                                          for i, col in enumerate(chart):
                                                                              for state in col.states:   # col.states grows while we iterate; intended
                                                                                  if state.finished():
                                                                                      complete(col, state, grammar)
                                                                                  else:
                                                                                      sym = state.symbol()
                                                                                      if sym in grammar:
                                                                                          predict(col, sym, grammar)
                                                                                          #if sym in epsilon: # Aycock fix
                                                                                          #    col.add(state.shift())
                                                                                      else:
                                                                                          if i + 1 >= len(chart): continue
                                                                                          scan(chart[i+1], state, sym)
                                                                          #states = [st for st in col.states if st.name == start]
                                                                          return chart
                                                                      
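                                                                      The commented-out Aycock fix above needs the set of nullable nonterminals; a minimal fixed-point computation for it (my addition, not counted in the 50 lines) would be:

                                                                      def nullable(grammar):
                                                                          # a nonterminal is nullable if some alternative is all-nullable
                                                                          # (an empty alternative is trivially so); iterate to a fixed point
                                                                          nulls = set()
                                                                          changed = True
                                                                          while changed:
                                                                              changed = False
                                                                              for name, alts in grammar.items():
                                                                                  if name in nulls: continue
                                                                                  if any(all(sym in nulls for sym in alt) for alt in alts):
                                                                                      nulls.add(name)
                                                                                      changed = True
                                                                          return nulls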

                                                                      Converting it to a parser is not hard either (another 50 lines):

                                                                      def parse_forest(chart, state, grammar):
                                                                          if not state.expr: return (state.name, [])
                                                                          pathexprs = parse_paths(state.expr, chart, state.s_col.i, state.e_col.i, grammar)
                                                                          paths = []
                                                                          for pathexpr in pathexprs:
                                                                              paths.append([(parse_forest(chart, varexpr, grammar)
                                                                                  if isinstance(varexpr, State)
                                                                                  else (varexpr, [])) for varexpr in pathexpr])
                                                                          return (state.name, paths)
                                                                      
                                                                      def parse_paths(expr_, chart, frm, til, grammar):
                                                                          var, *expr = expr_
                                                                          starts = None
                                                                          if var not in grammar:
                                                                              # terminal: tokens here are single characters, so a match
                                                                              # advances the chart position by len(var) == 1
                                                                              starts = ([(var, frm + len(var))] if frm < til and chart[frm + 1].token == var else [])
                                                                          else:
                                                                              starts = [(s, s.e_col.i) for s in chart[frm].states if s.name == var]
                                                                      
                                                                          paths = []
                                                                          for state, start in starts:
                                                                              if not expr:
                                                                                  paths.extend([[state]] if start == til else [])
                                                                              else:
                                                                                  res = parse_paths(expr, chart, start, til, grammar)
                                                                                  paths.extend([[state] + r for r in res])
                                                                          return paths
                                                                      
                                                                      def extract_a_tree(forest_node):
                                                                          name, paths = forest_node
                                                                          if not paths:
                                                                              return (name, [])
                                                                          return (name, [extract_a_tree(p) for p in paths[0]])
                                                                      
                                                                      def reverse(table):
                                                                          f_table = [Column(c.i, c.token) for c in table]
                                                                          for col in table:
                                                                              finished = [s for s in col.states if s.finished()]
                                                                              for s in finished:
                                                                                  f_table[s.s_col.i].states.append(s)
                                                                          return f_table
                                                                      
                                                                      def get_tree(words, grammar, start):
                                                                          table = parse(words, grammar, start)
                                                                          f_table = reverse(table)
                                                                          # pick a completed item for the start symbol that spans the whole
                                                                          # input, not just any finished item in the last column
                                                                          start_state = next(s for s in table[-1].states
                                                                                             if s.finished() and s.name == start and s.s_col.i == 0)
                                                                          return extract_a_tree(parse_forest(f_table, start_state, grammar))
                                                                      
                                                                      START = '<start>'
                                                                      grammar = {'<start>': [['<expr>']],
                                                                              '<expr>': [
                                                                                  ['<expr>','+', '<expr>'],
                                                                                  ['<expr>','-', '<expr>'],
                                                                                  ['(','<expr>',')'],
                                                                                  ['<I>']],
                                                                              '<I>': [['<digit>','<I>'], ['<digit>']],
                                                                              '<digit>': [['0'], ['1'], ['2'], ['3'], ['4'], ['5'], ['6'], ['7'], ['8'], ['9']]}
                                                                      
                                                                      import sys
                                                                      print(get_tree(list(sys.argv[1]), grammar, START))
                                                                      

                                                                      And this algorithm (particularly the recognizer part) strikes me as rather elegant and concise, and the parts are really self-explanatory. So my question is: how does the GLL implementation compare? (I am not really a Haskell guy, so I am at a loss trying to decrypt that paper.)

                                                                      1. 1

                                                                        Very interesting question. I’m neither a compiler expert nor a Haskell user, so I had to research this a bit, focusing on GLL vs Earley. Here’s something worth considering:

                                                                        “The Earley parser executes in cubic time in the general case O(n^3), where n is the length of the parsed string, quadratic time for unambiguous grammars O(n^2), and linear time for all LR(k) grammars. It performs particularly well when the rules are written left-recursively.” - Wikipedia article

                                                                        “runs in worst case cubic time; runs in linear time on LL grammars and which also allows grammar rule factorisation, with consequential speed up.” - this paper

                                                                        Any differences might imply an advantage. For instance, one says cubic general case and one says worst case. There’s different wording on LL grammars worth looking into. One mentions LR(k) and one doesn’t, likely because it’s LL-based. This might matter depending on whether an LL or LR style is a better fit for the grammar itself. Just guessing around here. Your question did lead me to an improvement on GLL I’ll be submitting Wednesday.

                                                                        1. 1

                                                                          Your question did lead me to an improvement on GLL I’ll be submitting Wednesday.

                                                                          Are you submitting a paper at a conference, or just posting it on Lobsters?

                                                                          LL(k) is a subset of LR(k), right? So if Earley performs O(n) on LR(k), it is linear on LL(k) too.

                                                                          1. 1

                                                                            Submitted here. I can’t answer the other question because I’ve forgotten too much about parsing. I just used GOLD back in the day, with some GLR for harder experiments.

                                                                      1. 6

                                                                         For me, the single piece of code that had the largest impact was the ddmin algorithm from Andreas Zeller, which is used to isolate failure-causing input from much larger input.

                                                                        import random
                                                                        import string
                                                                        
                                                                        def complement(s, i, l): return s[:i] + s[i + l:]
                                                                        
                                                                        def some_complement_is_failing(s, npartitions, testfn):
                                                                            subset_length = len(s) // npartitions
                                                                            items = range(0, len(s), subset_length)
                                                                            complements = (complement(s, i, subset_length) for i in items)
                                                                            return next((i for i in complements if testfn(i)), None)
                                                                        
                                                                        def update_input(s, npartitions, fn):
                                                                            v = some_complement_is_failing(s, npartitions, fn)
                                                                            if v:  return v, max(npartitions - 1, 2)
                                                                            else:  return s, min(npartitions * 2, len(s))
                                                                        
                                                                         def ddmin(s, fn):
                                                                             npartitions = 2
                                                                             while s:
                                                                                 s, n1 = update_input(s, npartitions, fn)
                                                                                 if npartitions == len(s): break
                                                                                 npartitions = n1
                                                                             return s
                                                                        

                                                                         Given a failure-causing input, it is able to narrow the string down to the simplest sequence of characters necessary to cause the same failure. It is not limited to strings but applies to any set of input conditions, and it has been extended numerous times; the same idea lies behind git bisect. However, the core is still just this much.

                                                                         As an example, run the following code. The test succeeds if the input contains both “(” and “)”, so we progressively trim the input until we find the smallest input that still makes the test succeed.

                                                                        def test(s):
                                                                            print("%s %d" % (s, len(s)))
                                                                            return set('()') <= set(s)
                                                                        
                                                                        def fuzzer(): return ''.join(random.choices(string.digits +
                                                                                                                    string.ascii_letters +
                                                                                                                    string.punctuation, k=1024))
                                                                        
                                                                        mystr = fuzzer()
                                                                        minimized = ddmin(mystr, test)
                                                                        print("%s => %s" % (mystr, minimized))
                                                                        

                                                                        Edit: here is a better ddmin code than the one above. The above is kind of mangled by me.

                                                                        1. 1

                                                                          (late reply) Thanks a lot for the link to this code! I ended up playing with it here:

                                                                          https://github.com/oilshell/blog-code/tree/master/ddmin

                                                                          (I tried your version first, but couldn’t get it to run.)

                                                                          I put it in my blog-code repo to share with somebody… I won’t write a blog post until I’ve actually used it for something. But it was a fun exercise to learn about this interesting algorithm!

                                                                          It’s interesting how much Python has changed since 2005 or so, when that code was written! Or maybe it was 2002 or earlier, and the latest timestamp is just 2005.

                                                                          1. 1

                                                                            Ah, thank you for the reply!

                                                                            Any idea what was wrong with my version?

                                                                            1. 1

                                                                              I ran it but I don’t remember what the error was! Since you linked to the original I decided to work with that instead.

                                                                              I like rewriting code to understand it, and the original was obviously re-writable :)

                                                                              1. 1

                                                                                Ah, I see! As you can see from my example, I like rewriting code to understand it too :)

                                                                                You seem to have kept the original function mostly as-is. Is that easier to understand than the multiple-function structure I adopted?

                                                                        1. 1

                                                                          By the way, the next mutation testing workshop is in Xi’an, China. Watch this space for more information as it becomes available. We would love to have more industry participation, especially with regard to your experience using program mutation.

                                                                          1. 1

                                                                            Hmm. A nice theory, but I’m not convinced it adds much. I think each test should be an explicit statement about the intended behavior: set something up, execute something, check the result. The random “mutation” shouldn’t be considered in isolation – our code doesn’t randomly mutate > to >=; someone usually makes that change intentionally (or indirectly). If the test suite is well designed (nothing to do with code coverage), it should catch a case where logic is changed. This should be caught by a responsible code review when the initial code is written, by ensuring every branching behavior is documented in the test suite. Focus on design skills rather than relying on a code mangler to simulate how your code base evolves over time. Also, I hope the title is a joke :) Nothing ever dies in the software world, it just goes round and round.

                                                                            1. 7

                                                                              our code doesn’t randomly mutate > to >=; someone usually makes that change intentionally (or indirectly).

                                                                              The idea is that human beings are fallible, and it is possible that such a change could have slipped through in the code (even under reasonable reviews and static analysis). Say the code with >= was the correct one, and what you currently have is >; the question is whether your test suite is sufficient to distinguish the correct behavior from the incorrect one. Mutation testing has nothing to do with how code evolves over time.
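
                                                                              To make that concrete, here is a hypothetical sketch (the function and the tests are made up): a suite that never probes the boundary cannot tell > from >=:

                                                                              def can_vote(age):
                                                                                  return age >= 18  # a mutant flips this to `age > 18`

                                                                              def test_can_vote():
                                                                                  assert can_vote(21)      # passes under both >= and >
                                                                                  assert not can_vote(10)  # passes under both >= and >
                                                                                  assert can_vote(18)      # kills the mutant: only >= admits 18

                                                                              test_can_vote()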

                                                                              1. 3

                                                                                Just to add on to what vrtha said.

                                                                                If the test suite is well designed (nothing to do with code coverage), it should catch a case where logic is changed.

                                                                                Exactly. Mutation testing can be a tool for helping identify whether a test or suite is well designed. Mutation testing is testing your tests, if you will. A pathological example might be testcase() { test.assert(true) }: when you have a bunch of mutants (versions of your code where that test still passes), you’re inclined to take a look at the test and realize it’s not really testing anything.

                                                                                I have discovered “Real Bugs” by using mutation testing; in an environment where it is lightweight enough, I think it absolutely adds value.

                                                                                1. 1

                                                                                  Good points there – I don’t mean to imply it’s a bad idea. Trying to assess the quality of a test suite in an automated way is a very interesting concept, and technically sophisticated. My point is that writing code is a very human endeavour (for now) – the intent is everything. The code is just an expression of a set of instructions.

                                                                                  The idea is that human beings are fallible, and it is possible that such a change could have slipped through in the code

                                                                                  Because it’s humans writing it, of course there will be errors and, more relevant to this discussion, oversights in practically every program – for example, assumptions that prove invalid under certain conditions. But the combination of descriptive tests and the code itself should be the baseline for assessing correctness. Altering the code under test to explore gaps in the test suite seems wrong to me, as the aim should be a test suite that explicitly states the behaviour of the code (bear in mind I’m TDD-brainwashed). A human (the developer, the reviewer) should be able to assert whether the test suite meets this requirement.

                                                                                  when you have a bunch of mutants (versions of your code where that test still passes), you’re inclined to take a look at the test and realize it’s not really testing anything

                                                                                  I like this idea – identifying tests that provide false positives or don’t add value. I’d say it’s too rare an occurrence, though, to justify another level of testing. Sure, it happens, but most of these should be caught by regular reviews of the test suite – and those that aren’t caught may not be causing harm beyond hygiene.

                                                                                  I’m not against automation. I totally see the value of generating exploratory test data, and I particularly like the idea of property-based testing (QuickCheck, FsCheck, etc.). But changing the code under test is like moving the goalposts: it’s no longer what was expressed at design time, so the test suite is no longer valid with respect to what is being tested.

                                                                                  Ultimately I think it’s an intellectually appealing idea, but not that useful in a practical sense for general-purpose development. I could see it being useful if you were writing a math library or something very deterministic in nature – something where the range of possible inputs and variations is huge and not easily expressed in static tests.

                                                                                  This is all just my two cents; I totally appreciate it could be a very worthwhile approach for others. And to be fair, I haven’t tried it outside noddy examples. It just jars with my world view :)

                                                                              1. 18

                                                                                Use both, please. Mutation testing still has a number of problems. The biggest one is the so-called equivalent mutants (or immortals, in keeping with the whole mutant-killing theme :) ). These are mutants that are semantically equivalent to the original program, and hence not killable.

                                                                                The trouble is that the number of equivalent mutants in a project is highly variable, and not very predictable from other measures such as LOC, complexity, etc. So if you get a mutation score of, say, 70%, it is not at all clear whether the remaining 30% are killable. If they are not, you might be wasting everyone’s time by looking at those 30% of mutants. Now, the interesting thing is that, provided your test cases have sufficient assertions, your statement coverage is a pretty good proxy for the actual mutation score: most mutants get killed if they get covered. That by itself reduces the number of mutants a developer has to look at.
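
                                                                                To illustrate, here is a hypothetical sketch of an unkillable mutant:

                                                                                def absolute(x):
                                                                                    if x > 0:    # mutant: `x >= 0` -- semantically equivalent,
                                                                                        return x # because when x == 0 both branches return 0
                                                                                    return -x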

                                                                                I would recommend that people report both statement coverage and the mutation score of the covered statements, e.g. “this project has 70% statement coverage, and the tests kill 90% of all mutants they cover.” This gives a better sense of test-suite completeness and quality than just saying the test suite has (say) a 65% mutation score.
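
                                                                                As a back-of-the-envelope illustration (assuming, simplistically, that mutants are spread evenly over statements and that uncovered mutants are never killed):

                                                                                statement_coverage = 0.70  # fraction of statements the tests execute
                                                                                covered_kill_rate = 0.90   # fraction of covered mutants that get killed

                                                                                # Collapsing the two numbers into one hides the distinction:
                                                                                overall_mutation_score = statement_coverage * covered_kill_rate
                                                                                print(overall_mutation_score)  # ~0.63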

                                                                                1. 2

                                                                                  If one is satisfied with clicking a link to get to the comments, one could simply add a gist (the liquid tag is {% gist %}) corresponding to the post, and host the comments there. For example, here is one of my blog posts: you can scroll down to the bottom, where it says Comments, and click it to get to a gist where you can enter comments.

                                                                                  1. 3

                                                                                    Or you can create an actual link:

                                                                                    <a href="https://gist.github.com/rahulgopinath/76e328c07bc7518f8330969959befa67#file-2018-09-16-writing-a-mutation-engine-markdown">Comments</a>
                                                                                    

                                                                                    The way you’re doing it pulls in a JavaScript library for rendering gists when the gist literally only contains the word “Comments”. That’s kinda inefficient.

                                                                                    1. 1

                                                                                      Of course! You are right.

                                                                                  1. 2

                                                                                    I am curious about one thing. The image model of Smalltalk, which is considered one of its strengths, is similar to the notebook state in Jupyter notebooks, right? How does Smalltalk get away from all the associated disadvantages?

                                                                                    What am I missing when people talk about image based development?

                                                                                    1. 11

                                                                                      The main thing I think you’re missing is that the actual Smalltalk development experience looks nothing like the experience of using Jupyter.

                                                                                      First, Smalltalk, unlike every other language I’m aware of, and unlike Jupyter notebooks, is not written as files. Instead, you work entirely in a tool called the browser, where you can see a list of classes, those classes’ definitions, and a list of all methods within a class. The order in which these classes and methods were defined is irrelevant and invisible. This alone eliminates a huge swath of the notebook complaints.

                                                                                      Second, with few exceptions, the version of objects you’re seeing is always live; while you might have a Workspace (REPL) open with a test and output, that’s visibly very distinct from an Inspector, which shows you the current state of any given variable. The Browser likewise only shows you the current definitions of a class or method. (Previous versions are usually available through what you can think of as a very powerful undo tool.) You thus don’t end up with the current v. previous value skew issues with notebooks.

                                                                                      Notebooks are suffering because they’re trying to bring a kind of Smalltalky experience to a world where code is written linearly in an editor. I agree, that’s tough, and that’s really the underlying issue that the post you linked is hitting. Smalltalk (and Common Lisp environments) instead focus on saying that, if you’re gonna have an image-style situation, you need tooling that focuses on the dynamic nature of the preserved data.

                                                                                      [Edit: all that said, by the way, I think you can make a case that some specifics of how Smalltalk’s image works in relation to its dev environment are a bit off. One way to fix that is the direction Self went with Morphic, where truly everything is live, and direct object manipulation rules the day. The other direction you can go is what Common Lisp and Factor do, which is to say that the image contains code, but not data—in other words, making images more like Java .jars/.NET assemblies/Python .pyc files. The purist in me prefers Self’s approach, the practical part of me prefers Factor/Common Lisp’s.]

                                                                                      1. 2

                                                                                        Thank you for your response.

                                                                                        I am not sure I understand. Say I have an object A that contains, as a member, another object B constructed in a particular way. Now, if I edit the constructor of B so that the variables inside B are initialized slightly differently, would that be reflected in the already-constructed object inside A?

                                                                                        1. 5

                                                                                          Not in that specific case, no. For other things (e.g., changing variable names, class shape, changing a method definition, etc.), the answer is yes. The best way to think of it: the Smalltalk image is a database, and like most databases, if you change the schema or a stored procedure, that takes effect immediately, but existing data would need to be backfilled or changed. This is exactly the thing I was flagging on why Common Lisp and Factor do not do this: their images are code only, no data. (Or Self, for which the answer to your question is, at least in some cases, actually yes.)

                                                                                          This is different from the main thing highlighted in the article, though. There, the concern wasn’t so much about state existing at all as about the fact that the presentation made it look as if the output accurately reflected the input, which wasn’t guaranteed. Thus, in a notebook,

                                                                                          In[123]: x = 5
                                                                                          Out[123]: 5
                                                                                          

                                                                                          is potentially not accurately reflecting the real value of x, whereas in Smalltalk, if you went behind the scenes and changed x to, say, 10, then any Inspector on x would also change immediately to 10.

                                                                                          1. 2

                                                                                            Thanks again for the detailed response!

                                                                                            I understood the rest of the post, but I don’t get it when you say that in Common Lisp (and Factor) the image is code only, not data. How is that different from storing everything in a file?

                                                                                            1. 2

                                                                                              Eh, I probably confused things. The images for Factor and Lisp are effectively just compiled code, albeit stored in a high-level format that’s designed to make debugging a running app a lot easier. That’s why I said they’re equivalent to Java .class files or the like. I mentioned it mainly because the word “image” means something very different in Smalltalk/Self vs. Common Lisp/Factor-like systems.

                                                                                              1. 1

                                                                                                So what does it mean when Factor says it has an image-based model? Is it simply saying that it is compiled?

                                                                                                Going back to the original question: my experience is with R, where the image is frequently not a boon unless one is very careful not to reuse variable names during data cleaning. And the linearity of notebooks isn’t a factor when you work at the command line. R does let you see the current definitions of everything, but that is not as helpful as it first seemed to be. Hence my question.

                                                                                            2. 1

                                                                                            Interesting! So if I define a class, instantiate it, change a method definition, and then call that method on the instance, the new definition is called?

                                                                                              That would indeed be much more useful than a notebook-like approach.

                                                                                              1. 2

                                                                                                That’s completely correct, and also applies to certain other refactorings on the class (e.g., renaming instance variables or the like).
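
                                                                                              For what it’s worth, Python’s method lookup is similarly late-bound, so a Python analogy (not Smalltalk semantics) gives a rough feel for this:

                                                                                              class Greeter:
                                                                                                  def greet(self):
                                                                                                      return "hello"

                                                                                              g = Greeter()
                                                                                              print(g.greet())       # hello

                                                                                              def greet(self):
                                                                                                  return "bonjour"

                                                                                              Greeter.greet = greet  # redefine the method on the class
                                                                                              print(g.greet())       # bonjour: the existing instance picks up the
                                                                                                                     # new definition, since lookup happens at call time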

                                                                                          2. 1

                                                                                            One problem I could see with image files is servicing them: how do you patch several image files, or, worse, deal with divergences among them due to patched behaviour?

                                                                                        1. 3

                                                                                          Some people masquerade as Open Source maintainers but are, in fact, mob leaders. Be careful of these people, they are abusive and their goal is dominance. Privileged members of a toxic community feel it is their moral duty to create do-not-hire lists,

                                                                                          I have been out of the loop. Are there examples of this sort of victimization that I can read up on?

                                                                                          1. 2

                                                                                            8,000 characters in five hours is 1,600 characters per hour, or 27 characters per minute.

                                                                                            Do we specify the typing speed required by a secretary in words typed per day?

                                                                                            1. 1

                                                                                              I did not find this explicitly mentioned, but surely the name resolution in OOP is dynamic rather than lexical?

                                                                                              1. 2

                                                                                                  Depends on the language, doesn’t it?

                                                                                                1. 1

                                                                                                    I would argue that message passing, with the objects deciding at runtime what to do about a received message, captures the essence of dynamic binding.

                                                                                                  However, I agree with your point. OO is somewhat amorphous at this point, and one may very well implement a language without late binding and call it OO.

                                                                                                    Edit: To clarify what I meant: dynamic binding is where the execution path, rather than the lexical ordering, determines how a method or variable is resolved. When an object receives a message, the resolution of that message is determined by the execution path that led to that object. That is, one cannot predict in advance, using lexical knowledge alone, which method will be called.
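
                                                                                                    A small hypothetical Python sketch of that point – nothing at the call site below tells you lexically which area will run; it depends on each receiver’s runtime class:

                                                                                                    class Square:
                                                                                                        def __init__(self, side): self.side = side
                                                                                                        def area(self): return self.side ** 2

                                                                                                    class Circle:
                                                                                                        def __init__(self, r): self.r = r
                                                                                                        def area(self): return 3.14159 * self.r ** 2

                                                                                                    def total_area(shapes):
                                                                                                        # `shape.area()` is resolved per object at runtime (late binding)
                                                                                                        return sum(shape.area() for shape in shapes)

                                                                                                    print(total_area([Square(2), Circle(1)]))  # 4 + ~3.14159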

                                                                                                  1. 8

                                                                                                    The post talks both about name resolution (lexical and dynamic scope) and method resolution, and sometimes seems to conflate the two.

                                                                                                    All mainstream OO languages that I’m familiar with use lexical scope, while most of them use dynamic binding.

                                                                                                    1. 1

                                                                                                      Yes, agreed. Got muddled :/

                                                                                              1. 1

                                                                                                Do people have multiple accounts on different Mastodon servers? If not, what is a good home base for someone from Lobsters? Does Lobsters have a Mastodon presence?

                                                                                                1. 3

                                                                                                  Do people have multiple accounts on different Mastodon servers?

                                                                                                  Most people are active on only one account; I sometimes see people with two or three, where each account is devoted to a specific topic. People also sometimes “move” from one instance to another by pointing from their old account to their new one, in case they change their mind about an instance (eg. my old account, my current account).

                                                                                                  If not, what is a good home base for someone from Lobsters?

                                                                                                  My advice would be to check out a general-purpose instance first, unless you are very active in a specific community (eg. BSD users like https://bsd.network, …). From there, start following people you find a bit interesting, since they will lead you to people you find more interesting. Pinning certain hashtags has helped me too. It takes a while to get into it, since there is no thorough recommendation system, but once it clicks it’s comfortable – in my case, far more than Twitter ever was.

                                                                                                  Does lobsters have a mastodon presence?

                                                                                                  There was a thread a while back where people shared their accounts; that’s when I really started using Mastodon.

                                                                                                1. 5

                                                                                                    On a related note, there used to be a customized Inferno distribution called acme-sac that was just acme plus a few bits – a single binary that ran acme fullscreen on Inferno. I loved that I could run Inferno as my OS and use acme as my daily editor, with the ability to mount remote filesystems, see current processes in the editor, and so on (a sort of Emacs on steroids).

                                                                                                  Unfortunately, the project seems to have stalled.