Speaking as someone who wrote a Python 3 interpreter:
Be careful! Python’s simplicity is a bit of a siren song. It feels simple from a user perspective because it stays out of your way, but that means nothing when it comes to simplicity of implementation. Worse, if you’re expecting to replicate the semantics of existing Python code you might be faced with actually implementing all the weird stuff it does in order to use your existing code.
I had several moments on Hython where I seriously doubted if I could finish it to a satisfactory degree, and whether it was worth it. In retrospect, I have to say it was not worth it; I should have just worked on my own language instead.
Thanks for the feedback. Yes I’ve long realized that Python is misleading – it hides a very large language with some tricky semantics underneath that clean syntax!
I didn’t emphasize this enough in the blog post, for reasons of space, but I have already run my unit tests through the 4 components: they parse and compile the tests into a runnable format. So the remaining work is to port the Python interpreter loop to C++ and fill in all the metacircular stuff like dicts, lists, sets, etc.
There is some hidden work like probably implementing the marshal format for bytecode. I have an idea for punting on the garbage collector if necessary.
But I only have to make 12,000 lines of well-tested Python work. I think I can do that. The main question in my mind is if this OPy language will be useful at all to users, or if it will just be an implementation detail for me.
It’s possible it will fail, but I’ve only spent 10 days on it and gotten quite far.
Hm I just looked at your code, and it’s under 3,000 lines total??? Depending on what kinds of programs it can run, I would see that as an advertisement for Python’s simplicity. It looks pretty clean in Haskell.
I’m aiming for about 8,000 lines of Python and 3,000 - 5,000 lines of C++ for OPy, and I only have to write the C++ part.
Remember the shell doesn’t need much of a standard library. It only makes like 10 system calls – fork/exec/wait/open/read/write/close/pipe/dup2/fcntl is almost all of it.
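To make that concrete, here is a minimal, POSIX-only sketch (using Python’s os module, not Oil’s actual code) of how a shell wires up a pipeline like echo hi | wc -l out of exactly those calls:

```python
# Wire up `left | right` using only pipe/fork/dup2/exec/close/wait.
import os

def run_pipeline(left, right):
    r, w = os.pipe()                         # pipe()
    for argv, fd_in, fd_out in ((left, None, w), (right, r, None)):
        if os.fork() == 0:                   # fork() one child per stage
            if fd_out is not None:
                os.dup2(fd_out, 1)           # dup2(): stdout -> pipe write end
            if fd_in is not None:
                os.dup2(fd_in, 0)            # dup2(): stdin <- pipe read end
            os.close(r)                      # close() inherited pipe fds
            os.close(w)
            os.execvp(argv[0], argv)         # exec()
    os.close(r)                              # parent closes both ends
    os.close(w)
    return [os.wait()[1] for _ in range(2)]  # wait() for both children

statuses = run_pipeline(["echo", "hi"], ["wc", "-l"])
```

Everything else a shell does around this (word splitting, redirects, job control) is bookkeeping on top of the same handful of calls.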
This is a really impressive project! Did you implement your own dict and list or use an existing implementation from Haskell?
I’m also in the boat of writing a Python (variant) interpreter. I think it’s a pretty good idea to try if you’re not aiming for 100% compatibility. I did hit a few implementation complexities here and there, but later I realized I didn’t need them (yet).
Can you list some of the weird stuff you encountered? For me it was setting up the environment on a function call (with keyword arguments, *, **).
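For reference, here are a few of the binding cases that environment setup has to handle; these are standard CPython semantics, checked against a throwaway function:

```python
# Call-binding cases an interpreter's function-call code must replicate:
# defaults, *args, keyword-only arguments, and **kwargs.
def f(a, b=2, *args, c, **kwargs):
    return (a, b, args, c, kwargs)

# Keyword-only `c` must be supplied by name; `b` falls back to its default.
assert f(1, c=3) == (1, 2, (), 3, {})

# Extra positionals go to *args, unknown keywords to **kwargs.
assert f(1, 2, 3, 4, c=5, d=6) == (1, 2, (3, 4), 5, {'d': 6})

# Binding the same parameter twice is an error the interpreter must detect.
try:
    f(1, a=7, c=3)
    raise AssertionError('expected TypeError')
except TypeError:
    pass
```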
For what it’s worth, here are my current line counts.
Component                 files   blank   comment   code
Parser                        1      40       166    300
Interpreter & debugger        2      68        22    598
AST nodes semantic            2      33        15    181
Library                       2      83         6    361
I think writing the interpreter in Python itself made the process easier, because I could import missing pieces and pass them to my interpreter in a variable to visualize the final result, instead of trying to get everything working in one pass. In the end, I’m hoping to depend only on ctypes so I can port the parser and interpreter parts to other languages.
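A minimal sketch of that workflow; `Interp` here is a hypothetical stand-in whose run loop just reuses the host’s exec, not the real interpreter:

```python
# Seed the interpreter's global environment with host objects (here the
# host's json module), so interpreted code can use pieces the
# metacircular library doesn't provide yet.
import json

class Interp:
    def __init__(self, env):
        self.env = env

    def run(self, src):
        # Stand-in for a real eval loop; exec() borrows the host for brevity.
        exec(src, self.env)

interp = Interp({'json': json, 'result': None})
interp.run("result = json.loads('[1, 2, 3]')")
assert interp.env['result'] == [1, 2, 3]
```

As the metacircular library grows, you swap the seeded host objects out for your own implementations one at a time.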
(Right now it’s able to run the first chunk of its own parser, but not all variables use the list and dict types defined in my library.)
IIRC, dict/list are hybrids: implemented mostly in Python for convenience, but most operations simply invoke interpreter primitives, which lets them interface with Haskell.
As for weird stuff, let me think on that. Right off the bat I’ll say exception handling is a bitch. :)
I remember that the first time around, I had to make a lot of changes to add exceptions, to distinguish between returned and raised values, but I don’t remember them being particularly difficult.
Exceptions are actually pretty important for this project because I wanted to be able to continue execution after an uncaught exception is raised.
In the rewrite (with the line counts above), I just put a blatant hack: a .raised parameter in error objects to indicate if they’re raised or returned. (If someone returns an error with a manually set .raised = True, I’ll consider the semantics to mean that they actually want to raise an error.)
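A sketch of what that hack can look like; the class and function names are illustrative, not the commenter’s actual code:

```python
# One error-object type, with a .raised flag distinguishing
# "returned as a value" from "actually raised".
class InterpError(Exception):
    def __init__(self, msg, raised=False):
        super().__init__(msg)
        self.raised = raised

def finish_call(result):
    # At function-call boundaries the interpreter checks the flag:
    # a returned error with .raised set is treated as a raise.
    if isinstance(result, InterpError) and result.raised:
        raise result
    return result

ok = finish_call(InterpError("soft failure", raised=False))
assert isinstance(ok, InterpError)      # passed through as a plain value

try:
    finish_call(InterpError("hard failure", raised=True))
    raise AssertionError('should have raised')
except InterpError as e:
    assert str(e) == "hard failure"     # propagated as an exception
```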
Exceptions made me think of the break and continue statements, which were really difficult for me (the first time) and ended up being implemented as exceptions!
Do share if you remember more of this stuff so we’ll be better prepared!
I just found this post of yours:
If there are others (especially about the internals), I’d be interested in getting some links.
Okay, I remember where a lot of the weirdness came from: I used continuations in the implementation, which are great, but you have to be very careful when using them. I vaguely recall I had to write lots of tests to verify that exception handling’s interaction with while loops handled all cases correctly. Mostly this involves careful modification of state.
Also, I added modules in late, and they always felt a bit shoved in. Conceptually they aren’t that weird, but the myriad of import statements means you need to write a very flexible implementation of import and then tailor it based on what is used.
Be careful of loops where the interpreter calls Python code which calls the interpreter again, too. I’m not sure I even handle that properly. It’s kind of nasty.
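A toy illustration of that hazard (all names made up): a builtin implemented in the host language re-enters the interpreter, so any interpreter state has to survive the nesting:

```python
# Host code re-entering the interpreter: the call-depth counter must
# wind and unwind correctly across the nested calls.
class TinyInterp:
    def __init__(self):
        self.depth = 0
        self.builtins = {'apply_twice': self.apply_twice}

    def call(self, fn, arg):
        self.depth += 1
        try:
            return fn(arg)
        finally:
            self.depth -= 1

    def apply_twice(self, fn_and_arg):
        fn, arg = fn_and_arg
        # Host-language builtin calls back into the interpreter here.
        return self.call(fn, self.call(fn, arg))

interp = TinyInterp()
inc = lambda x: x + 1
assert interp.call(interp.builtins['apply_twice'], (inc, 5)) == 7
assert interp.depth == 0   # state unwound correctly after nesting
```

The try/finally makes this toy safe; with continuations or manually threaded state, the unwinding is exactly where the bugs hide.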
Thanks for coming back and sharing!
That’s funny, I even have a Python Exception subclass called “Continuation” (for return, break, continue). But from the sound of it, you also implemented regular loop iterations using continuations.
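For readers following along, the technique both of you are describing can be sketched like this (class names are illustrative):

```python
# Implement break/continue as exceptions: the interpreted loop body
# raises a signal, and the interpreter's while-loop handler catches it.
class BreakSignal(Exception): pass
class ContinueSignal(Exception): pass

def exec_while(cond, body):
    # Interpreter loop for a `while` statement.
    while cond():
        try:
            body()
        except BreakSignal:
            break
        except ContinueSignal:
            continue

out = []
i = [0]
def cond(): return i[0] < 10
def body():
    i[0] += 1
    if i[0] % 2: raise ContinueSignal()   # interpreted `continue`
    if i[0] > 6: raise BreakSignal()      # interpreted `break`
    out.append(i[0])

exec_while(cond, body)
assert out == [2, 4, 6]
```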
I remember a Python dev saying on a blog that they wish `from x import y` just meant `import x; y = x.y` or something like that. So that’s what I ended up implementing for import. Otherwise, modules were just a different scope. Even classes were just scopes with extra values preset.
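That desugaring is easy to check against the real import system; a sketch using the stdlib’s importlib (it skips the detail that `from x import y` doesn’t leave `x` bound):

```python
# `from x import y, z` as roughly `import x; y = x.y; z = x.z`.
import importlib

def from_import(module_name, *names):
    mod = importlib.import_module(module_name)
    return tuple(getattr(mod, n) for n in names)

sqrt, pi = from_import('math', 'sqrt', 'pi')
assert sqrt(4) == 2.0
assert 3.14 < pi < 3.15
```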
“Be careful of loops where the interpreter calls Python code which calls the interpreter again, too. I’m not sure I even handle that properly. It’s kind of nasty.”
Oh yes, this, definitely this. Actually, this is why I can’t explain how some of the things that shouldn’t work actually ran (correctly or apparently correctly) in the first version.
Have you considered using an alternate python implementation designed for small size and fast startup? Like micropython: https://micropython.org/
Yes MicroPython looks cool. I had it on my machine and just built it in a few seconds.
I guess the main issue is that I want to be able to make aggressive global changes, especially to fork the language. I don’t need stuff like complex numbers, multiple inheritance, and coroutines. But I do want to add ASDL because that has been so heavily used.
At 40K lines of C in the py/ directory, MicroPython is compact, but still a little more than “one-person” sized. I think that having a bootstrapped parser is a big benefit in this respect, and MicroPython doesn’t do that.
Still, I have MicroPython on my machine and will keep it in the back of my mind in case something fails :)
I would rather just rewrite the thousands of lines of code. You already have the architecture laid out so this is a well-defined and easy (albeit boring) task. Writing a Python interpreter, even a small one, will require some research. And when I read
“OPy will reflect this evolution. It could even end up a specialized language for writing languages — a meta-language — rather than a general-purpose language like Python.”
I wonder if you’re keeping your eyes on the prize, no offense but this sounds like the kind of dicking around I like to do when I’m bored of whatever I’m supposed to be working on. You’re trying to make a shell, not a meta-language, right?
Especially since multiple meta-languages already exist, such as SML, OCaml, and I’d include Haskell here. And for a Python-like language, the PyPy people made a simple meta-language (RPython).
The upside to these is that they come with optimizing compilers, debuggers, community, existing libraries, and semantics.
If the goal is small, surely something like Lua makes more sense than writing one’s own Python implementation.
On top of that, they (especially OCaml) come with libraries specifically for writing compilers faster.
After using ASDL a lot, I definitely wondered if I should have written it in OCaml. In 2013, I did another parsing project in Python, and I actually resolved to write future projects like that in OCaml.
But there are a couple problems with OCaml. I would say it’s a meta-language for semantics and not syntax. ANTLR/yacc are meta-languages for syntax. In particular:
The second problem is that OCaml is not a good meta-language for itself. I talked about that in “Type Checking vs. Metaprogramming”.
So yes I’m daydreaming about a meta-language that’s a cross between ML/Lisp/Yacc/ANTLR – some details here. But it is toward a somewhat practical end. I actually have to implement multiple languages (OSH, Oil, awk/make). If it can implement those languages, I’ll call it done.
Now, I’m not necessarily saying that OCaml would be worse than Python. I certainly worked around a lot of Python’s quirks, no doubt. But they both have flaws for this application, and I chose the one I was more familiar with and which had a bigger toolset.
I am fairly familiar with some lexing/parsing tools for OCaml and I wonder if some of the limitations you mention have not been lifted by some advanced users encountering similar needs.
I don’t know exactly what you mean by “novel parsing algorithm for bash” (would you have more precise pointers), but for example your blog mentions the idea of having several lexical states, and it happens that this question has been studied by people writing parsers for multi-languages (say HTML+CSS, etc.) using standard lex+yacc-inspired parsing tools. See Dario Texeira’s on-the-fly lexer switching with Menhir for example.
Menhir recently gained an incremental parsing mode where, instead of the parser taking a (lexbuf -> token) lexer as argument, there is an explicit representation of the parser state as a persistent value (a state may be accepting, an error, expect further tokens, etc.), and a function that takes a state and a token and returns the next parsing state. This is advanced and not as convenient as the usual approach, but it gives you a lot of flexibility (you can inspect the state in the middle, and for example decide to switch to a different lexer if you want) that would allow to do lexer-switching in a nicer way than described above.
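Menhir is OCaml-specific, but the underlying lexer-state idea is easy to sketch in Python: the tokenizer carries a mode, and the same input yields different tokens depending on it (a rough analogue of shell’s unquoted vs. double-quoted contexts; the rule table here is made up):

```python
# A mode-switching lexer: each rule is (pattern, token kind, next mode).
# kind=None discards the match; next_mode=None stays in the current mode.
import re

RULES = {
    'OUTER':  [(r'"', 'DQ_START', 'DQUOTE'),
               (r'[A-Za-z0-9_]+', 'WORD', None),
               (r'\s+', None, None)],
    'DQUOTE': [(r'"', 'DQ_END', 'OUTER'),
               (r'[^"$]+', 'LIT_CHARS', None)],
}

def tokenize(s):
    mode, pos, toks = 'OUTER', 0, []
    while pos < len(s):
        for pat, kind, next_mode in RULES[mode]:
            m = re.match(pat, s[pos:])
            if m:
                if kind:
                    toks.append((kind, m.group()))
                pos += m.end()
                if next_mode:
                    mode = next_mode
                break
        else:
            raise SyntaxError(f'bad char at {pos}')
    return toks

assert tokenize('echo "hi there"') == [
    ('WORD', 'echo'), ('DQ_START', '"'),
    ('LIT_CHARS', 'hi there'), ('DQ_END', '"')]
```

A real shell lexer needs a stack of modes rather than a single one (for nested `"$( ... )"`), but the control flow is the same.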
Finally, while Menhir would be my goto-choice of parser generator technology for OCaml – and I haven’t understood yet whether bash parsing is fundamentally non-LR(1) – you may want to experiment with Dypgen, a GLR parser that also explicitly supports scannerless parsing.
Off the top of my head, the difficulties with parsing shell are:
Both of these make bottom-up algorithms unsuitable IMO. You want control over how much input you are reading. And you want to know which production you are looking at earlier rather than later.
Some other issues:
for i in x y z
I should have really said that I don’t want to use a parser generator at all. It might be possible, but I don’t think it would save any work over a hand-written parser due to the work above. The blog post you linked seems to indicate that the approaches used are a little hacky.
I have some daydreams about writing a custom parser generator to express it.
But as mentioned, OCaml is close and I wondered if I should have used it. One of the things that tips it is that not only do I not want a runtime dependency on a Python interpreter, I don’t want any build-time dependencies besides a C compiler either. In the world of shell, configure && make && make install is still useful and, I think, an expected workflow.
EDIT: I also have the usual complaints about custom error messages and parser generators. I don’t have any specific experience with Menhir, but there are definitely places in shell where the location of the parse error is not the one you want to point out to the user.
OCaml is only one of the languages I mentioned; there are other options. In case you are not aware, though, there are other parser libraries, such as Menhir and stream combinator parsers. I’d also wager it’s easier to build a specialized parser DSL in OCaml than to build a Python implementation. The types alone mean you can get a well-typed AST out of it.
I’m also very surprised that you claim Python is better than something like OCaml for parsing; that has not been my experience at all. I still can’t really imagine that a home-brewed mini-Python is better in terms of all the other aspects that come with a language implementation, such as optimizations and debugging.
But it’s your project, so have fun!
Menhir is still an LALR(1) generator as far as I know, and that algorithm doesn’t work with shell at all. I always see people recommending parser combinators, but I can think of literally zero production-quality languages that use them. And I’ve looked at perhaps 30 “real” parsers.
As far as I can tell, OCaml, SML, and Haskell all have the same thing going for them – algebraic data types and pattern matching. I don’t know of any non-toy or non-academic programs written in SML. OCaml seems to be what you use if you need to write something production quality. Between OCaml and Haskell, OCaml actually has a Unix tradition and has better tools as far as I can tell. Supposedly Haskell is not efficient with strings.
I claimed that C is better than OCaml for writing hand-written lexers. I explicitly didn’t say that Python is better – I said the opposite. OCaml would probably be the #2 choice. The first post describes how I ended up with Python:
I started with 3000 lines of C++ and realized I would never finish at that rate. The prototype became production… but I think that will end up being a good thing.
“cross between ML/LISP/YACC/ANTLR”
Closest I know is sklogic’s tool at:
He mixes and matches about every technique in the book in one system for building arbitrary applications in this space. The commercial use is a product for static analysis. He’s always DSL'ing problems. His main development is done in a mix of LISP, Standard ML, and Prolog. He threw SML in for stuff he wants done safely with easy analysis. Prolog is obviously for anything done easy with logic programming. He has LISP for everything else with the full information of the compiled app available to assist him. So, that’s the sort of thing you can do if you start with the right foundation in a language that can embed the rest on a use-case by use-case basis.
Now, one like that outside .NET platform building on something like Racket Scheme might be quite interesting. Even better, something like PreScheme available at least selectively with added benefits of dynamic safety & concurrency of Rust. There’s a lot of options out there that build fundamentals on languages with strong ecosystems and prior work before one attempts to do fully-custom stuff.
That’s exactly what I linked to in my post :)
Yeah I don’t like the .NET platform and have some other reservations as you can see in the conversation there. I think doing something fully general purpose may be infeasible, but if you narrow the goals to expressing/reimplementing the DSLs in Unix, of which there are many, it can probably be done.
Oh I missed that. I thought it was a Reddit elaboration of Oil or something. Ok, you looked at the toolkit, I agree on .NET, and there were disputes about what it could handle. Most of these I’d have said try and see. I see many people argue with him about whether his method could handle a job he’s already used it for. I think he said on HN he uses it to build static analyzers for C and Verilog that his company sells… written in a smooth combo of something like 4-8 DSL’s. I usually say, “Seriously? You think he doesn’t know how to parse weird stuff?” Your case is an exception where it’s not clear cut because Bash might be the pain you describe from my own research.
I remember looking up a formal semantics. Plus, an off-hand comment by David Wheeler in our verifiable builds discussion got me thinking about using it for bootstrapping an untrusted compiler, since bash is always there & already trusted. So, I needed to see if people wrote compilers in it or even formalized it (aka precisely described it). I found a partial formalization that talked about how difficult it was since it was like several languages at once with static and dynamic aspects. That’s exactly what you said on Reddit. I thought, “How we going to cleanly parse or verify it if they can’t friggin describe its semantics?” The other thing I found were compilers and such that the people semi-regretted writing. Reluctantly shared with caveats haha. Still, even if not sklogic’s method, such a thing made me think you might get best results using two or more fundamental techniques that are each fast instead of a one-size-fits-all method. At least keep it in the back of your mind, as he does it regularly and high-assurance did too. They did sequential stuff in ASM’s and concurrency in SPIN. I could also definitely see Bash done in a rewriting logic with primitives or heuristics for corner cases, since they can handle about anything if it’s parsing and transforms. Maude & K framework come to mind.
So, I guess I can’t offer any specific help on this one, past this: remember that using more than one tool might be better on occasion for parsing. Not always, but sometimes. In any case, I’m glad you at least got to look into such hybrid tools. Bash is also hard enough to cleanly model that it might be worth some PhDs throwing effort at it to produce a tool like the GOLD parsing engine, Semantic Designs’, or sklogic’s that can handle it cleanly w/ a straightforward grammar + embedded functions + synthesis. Just can’t let ourselves be stopped by the challenge of something as crude and old as Bash. ;)
What would you want to accomplish with a formal semantics of shell?
I have looked at papers on formalizing Python and on the problems with formalizing ANSI C. The Python one gives examples of using it to express securities contracts and this pox networking project. However, AFAICT they didn’t say what you could actually prove about those types of programs.
I’m all for formalization but I would want it to be in service of something practical. Lamport’s TLA+ seems like a good example. People used it to find real bugs.
I suppose you could do some sort of taint analysis on programs to prove that data isn’t used as code, as in shellshock. However I think it is easier not to write such a silly bug in the first place :) That just feels too obvious.
Also, one of the reasons I want to keep it in Python is precisely because it’s easier to reason about the semantics of the interpreter than it would be in C or C++. For one, you get the memory safety guarantee for free. What other properties would you want to prove about shell?
Shell is definitely foundational, in that you generally need it to configure and build C programs. However I’m not sure the work on C foundations has borne that much fruit yet. I feel like dynamic analysis has been more practical and widely used (valgrind, ASAN).
“What would you want to accomplish with a formal semantics of shell?”
Old rule: how can one get a simple, elegant solution to something one can’t even formally express? Among other things, like verification. The formal-semantics problems were just evidence it might be hard to parse or reason about.
“I’m all for formalization but I would want it to be in service of something practical.”
I agree. I’m thinking on parser generators with efficient synthesis in this case. Gotta formally describe it somehow, though.
If I were just trying to make one shell, that would be true. But I’m writing two: an old shell, OSH, and a new shell, Oil. The new one is bigger because it will have the functionality of Awk and Make.
So yeah there is some yak shaving, but so far my yak shaving has paid off, like writing ASDL and the test framework.
Of course, OSH is open source. I will be happy if someone doubts my plan, and instead wants to rewrite it in C or C++ for me. :) It’s true it isn’t that much code – around 12K lines of Python. It will probably expand into 20K lines of C++ or 30K lines of C if you’re up for it.
Also, the meta-language thing is speculating about evolution AFTER getting OSH working on a normal Python interpreter.
“The new one is bigger because it will have the functionality of Awk and Make.”
Huh, this sounds a lot like mash: http://www.vitanuova.com/inferno/man/1/mash.html , http://www.vitanuova.com/inferno/man/1/mash-make.html
Woah are there THREE mash shells? That’s a record. I listed two here! https://github.com/oilshell/oil/wiki/ExternalResources
If there are any lessons I should take from that mash shell, please let me know.
It feels obvious that make and shell should be the same program. The point of both is to start processes, and 90% of the lines of a Makefile are literally shell, or they’re simple assignments that can be expressed in shell.
In case you missed it I wrote about that here: http://www.oilshell.org/blog/2016/11/13.html
This makes it obvious: http://www.oilshell.org/blog/2016/11/14.html
[Comment removed by author]
Yeah, someone brought that up. It does solve one or two of the six problems, but not all of them.
Nuitka compiles to a binary that uses the PythonXX.so as far as I understand. It might start faster because of this. But it will still have the problems with signals, unicode, and the Python-C API.
Have you considered Rust or another systems language?
I’m personally not interested in Rust because I don’t like anything that takes a long time to compile. I would never finish the project.
I mentioned an idea related to Rust here:
Oil already influenced the Ion shell in Redox OS (linked in that blog post). So hypothetically, a quicker way to get a memory safe shell would be to port my (nonexistent) 5,000 line interpreter loop to Rust, and then get the rest of Oil for free.
It would probably be 20K to 30K lines of a high-level memory-safe language for “free”. But yes this is all far in the future :) It’s possible they will just write it all in Rust, but personally I wouldn’t do that.