I think we should remember that the whole point is less about any particular language and more about homoiconicity.
In systems whose very basis is the means of abstraction, abstraction becomes a no-op.
My point is that there is no need to stop at a mere computer language when we have every opportunity to go deeper.
This seems like a hot topic; only the other day we were discussing AST editors, Intentional Programming and AOP.
I’d never heard the term “Design by Introspection” before. It may well be a term that Andrei coined, but he certainly didn’t invent the idea, and he certainly didn’t invent static if, although he is owed much gratitude for taking it from its inception to the beautiful implementation in DLang. Nice to see C++ finally catching up, btw, which is ironic because AFAIK Visual C++ 6.1 was the first compiler from a mainstream vendor to feature both static if and Design by Introspection (well, that and AspectJ).
As much as I love the meta-programming facilities in DLang, there is a difference between how Andrei approached it and a less tightly bound approach. In DLang, it’s hard for Design by Introspection to be done by anyone other than the programmer who wrote the regular code. That is to say, there is a cross-cutting concern.
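For anyone who hasn’t seen the style, here is a rough C++17 sketch of static-if-driven introspection (the trait and all names here are mine, just for illustration): the code path is chosen by what a type can do, not by what it is named.

```cpp
#include <cassert>
#include <list>
#include <type_traits>
#include <vector>

// Detect at compile time whether T provides reserve(size_t).
template <typename T, typename = void>
struct has_reserve : std::false_type {};

template <typename T>
struct has_reserve<T, std::void_t<decltype(std::declval<T&>().reserve(0u))>>
    : std::true_type {};

// The container's capabilities, not its name, drive the code path.
template <typename Container>
void fill(Container& c, int n) {
    if constexpr (has_reserve<Container>::value) {
        c.reserve(n);  // this branch is only compiled when the member exists
    }
    for (int i = 0; i < n; ++i) c.push_back(i);
}
```

With std::vector the reserve branch is compiled in; with std::list (which has push_back but no reserve) it is discarded entirely, which is the essence of static if.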
Another approach is that the allocator is chosen by external forces - enzymes that do deep introspection of the local or global AST.
What you do is open up the API to the compiler at the point just after the AST has been built and instead of hard-coding optimizer algorithms, you provide an open plug-in model where your “Intentional Programming” enzymes go and get busy on the code. These enzymes can be totally hidden and autonomous, or they can be exposed as optionally parameterized meta-tags that allow the application programmer to tweak the settings to guide the intentions.
In this way, things like optimizers are just a small class of things that can be written. You can just as easily have a rule that looks at your code in the middle of the compilation and sends a mail to your boss if you forgot to use Hungarian notation.
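As a toy sketch of the shape this plug-in model could take (every name here is hypothetical; no real compiler exposes this API): the compiler publishes the AST just after parsing, and enzymes are registered callbacks that rewrite it in place.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// A toy AST node: just enough structure to show the idea.
struct Node {
    std::string kind;       // e.g. "vector-decl"
    std::string meta_tag;   // e.g. "fix_my_allocator", or empty
    std::string allocator;  // filled in later by an enzyme
};

// An enzyme is any function that rewrites the AST in place.
using Enzyme = std::function<void(std::vector<Node>&)>;

struct Compiler {
    std::vector<Enzyme> enzymes;
    void register_enzyme(Enzyme e) { enzymes.push_back(std::move(e)); }

    // Called once, at the point just after the AST has been built.
    void run_enzymes(std::vector<Node>& ast) {
        for (auto& e : enzymes) e(ast);
    }
};
```

An allocator-selecting enzyme is then just one registered callback among many; the mail-your-boss-about-Hungarian-notation rule would be another.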
Once you have that, selecting allocators is left to people who know about selecting allocators, and you get to write
std::vector<SomeType> myBigFatVector;
or if you want to be explicit
[ fix_my_allocator ]
std::vector<SomeType> myBigFatVector;
or
[ meta::allocator::hint(meta::allocator::type::slab) ]
std::vector<SomeType> myBigFatVector;
… knowing that someone else, who knows a lot more about allocation strategies and the big picture, has built something smart that does the right thing. (These are examples, not real code.)
More importantly, the policy that goes with the enzyme is not embedded in the code that rides with the implementation of either vector or a specific allocator, and that policy can be changed at the drop of a hat. Bye-bye cross-cutting concerns.
Anyway, there is a lot more to it than this, and I’ve barely touched on it here, but yes, “Design by Introspection” is extremely powerful and opens many doors where you only need provide your intention.
We had it to the point where I could mark up a class like so:
[ Window ]
class MyWindow
{
onMouseDown(auto mouseEvent) { ... }
};
That thing would add your window creation code and message handler code, wire up the message handlers, tell the linker to add additional resources, and a whole lot more, working not only inside the language but across the entire tool-chain.
We even had Bjarne and a bunch of the committee bought in, but politics and time constraints led to a less than adequate solution. That’s not to say it still isn’t worth pushing for, and honestly this technique has nothing to do with C++ or any particular language, of course.
Now, back to the title “Abstracting is NOT about names”. Abstracting is about elevating the communication and implementation of concepts (I’m not talking C++ concepts here). The names of the abstractions are important, and so is the implementation of how the concepts get manifested; to me they go hand in hand.
That said, concepts are generally not their own implementors, but for implementations to come a-running, it’s necessary to fully convey the idea, and that means conveying the aspects of their identity, to which you can give a name.
Sorry for the long post - just my 2c
I like the article by the way, and I’m very happy to see this stuff being explored.
This is so wonderful.
It got me thinking though: Conway’s Game of Life (CGL) can be time-stepped with a pretty simple GPU program. But why stop at the rules of CGL? You could go the whole hog and represent actual transistors, gates and other higher-level things while you’re at it, using different texel colors and even multiple layers with through-hole interconnects. Maybe people do this already, I have no idea.
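For reference, here is the per-cell rule such a GPU program would implement, written as a plain CPU sketch (a real version would run the same rule in a fragment or compute shader, one cell per thread, reading the previous generation from a texture):

```cpp
#include <cassert>
#include <vector>

// One time-step of Conway's Game of Life on a toroidal grid.
using Grid = std::vector<std::vector<int>>;

Grid step(const Grid& g) {
    int h = g.size(), w = g[0].size();
    Grid next(h, std::vector<int>(w, 0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int n = 0;  // count the eight wrapped neighbours
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (dy || dx)
                        n += g[(y + dy + h) % h][(x + dx + w) % w];
            // B3/S23: a cell is born with 3 neighbours, survives with 2 or 3.
            next[y][x] = (n == 3 || (g[y][x] && n == 2)) ? 1 : 0;
        }
    return next;
}
```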
In any case, what a tour de force - wow!
As far as matching features to objectives goes, the ones I know were designed for sure are Ada, Wirth’s stuff (esp. Oberon’s language & system), and Noether. Ada was done by systematically analyzing the needs of programmers plus how they screwed up; the language’s features were there to solve problems from that analysis. The Wirth languages aimed at the minimal set of language features that could express a program and compile fast. Cardelli et al did a nice balancing job in Modula-3: a language easy to analyze, easier to compile than C++, that handles both small stuff and large stuff. Noether addresses design constraints more than any I’ve seen so far, by listing many of them and then balancing among them.
https://cr.yp.to/bib/1995/wirth.pdf
https://en.wikipedia.org/wiki/Modula-3
https://tahoe-lafs.org/~davidsarah/noether-friam4.pdf
I don’t know about Smalltalk. It had a nice design from what I’ve seen but I don’t know about the process that went into it. It could’ve been cleverly hacked together for all I know. Scheme’s and Standard ML’s languages seem to lean toward clean designs that try to balance underlying principles/foundations against practicality with some arbitrary stuff thrown in there. There’s also variants of imperative and functional languages designed specifically for easy verification in a theorem prover. They have to subset and/or superset them to achieve that.
Smalltalk was (and is!) very much a designed language, with strong vision and principles. Alan Kay has plenty to say about this, but the best source is Dan Ingalls: http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html
It also dates from an era before the language/system divide, so is unfortunately misunderstood by most contemporary “language” people. Richard Gabriel has a good essay about this: http://www.dreamsongs.com/Files/Incommensurability.pdf
I love this part:
A way to find out if a language is working well is to see if programs look like they are doing what they are doing. If they are sprinkled with statements that relate to the management of storage, then their internal model is not well matched to that of humans.
Couldn’t agree more.
More generally: Programming languages are supposed to translate human concepts to machine concepts, in the most efficient way possible and without hand-holding.
Of course all current programming languages completely fail in this regard at the moment, but I believe we should still keep our eyes on this as the ultimate goal.
It’s very hard for programming languages to do this at present because we don’t have a clean way to express human concepts to machines. Current language syntax and grammar is a very poor channel to communicate these things, since we’re using machine level formalisms, not human formalisms as the foundation for design.
I believe if we do more research into how to express and channel human concepts, then future programming languages will have a much better chance at succeeding in this endeavor.
Also it’s very interesting how Alan Kay and Dan Ingalls thought back then. As @minimax pointed out, the essays were written “before the language/system divide”.
People really did think in much higher level ways regarding Human Computer Interaction back then. Somewhere along the line we forgot about philosophy and the human component. It would be nice to get back to that at some point.
We’ve been making great strides with NLP over the decades but NLP still doesn’t help us with “the bit in the middle”.
Nothing against Rust, but for example, I really don’t give a damn about the borrow checker, and nor should anyone; we shouldn’t have to.
Also Go. See https://talks.golang.org/2012/splash.article
We are excited to continue experimenting with this new editing paradigm.
That’s fine, but this is not new.
Structured editors (also known as syntax-directed editors) have been around since at least the early 80s. I remember thinking in undergrad (nearly 20 years ago now) that structured editing would be awesome. When I got to grad school I started to poke around in the literature and there is a wealth of it. It didn’t catch on. So much so that by 1986 there were papers reviewing why they didn’t: On the Usefulness of Syntax Directed Editors (Lang, 1986).
By the 90s they were all but dead, except maybe in niche areas.
I have no problem with someone trying their hand at making such an editor. By all means, go ahead. Maybe it was a case of poor hardware or cultural issues. Who knows. But don’t tell me it’s new because it isn’t. And do yourself a favour and study why it failed before, lest you make the same mistakes.
Addendum: here’s something from 1971 describing such a system. User engineering principles for interactive systems (Hansen, 1971). I didn’t know about this one until today!
Our apologies, we were in no way claiming that syntax-directed editing is new. It obviously has a long and storied history. We only intended to describe as new our particular implementation of it. That article was intended for broad consumption. The vast majority of the users with whom we engage have no familiarity with the concepts of structured editing, so we wanted to lay them out plainly. We certainly have studied and drawn inspiration from many of the past and current attempts in this field, but thanks for those links. Looking forward to checking them out. We are heartened by the generally positive reception and feedback – the cloud era offers a lot of new avenues of exploration for syntax-directed editing.
This is an interesting relevant video: https://www.youtube.com/watch?v=tSnnfUj1XCQ
The major complaint about structured editing has always been a lack of flexibility in editing incomplete/invalid programs creating an uncomfortable point and click experience that is not as fluid and freestyle as text.
However that is not at all a case against structured editing. That is a case for making better structured editors.
That is not an insurmountable challenge and not a big enough problem to justify throwing away all the other benefits of structured editing.
Thanks for the link to the video. That’s stuff from Intentional Software, something spearheaded by Charles Simonyi(*). It’s been in development for years and was recently acquired by Microsoft. I don’t think they’ve ever released anything.
To be clear, I am not against structured editing. What I don’t like is calling it new, when it clearly isn’t. And the lack of acknowledgement of why things didn’t work before is also disheartening.
As for structured editing itself, I like it and I’ve tried it, and the only place I keep using it is with Lisp. I think it’s going to be one of those “worse is better” things: although it may be more “pure”, it won’t offer enough benefit over its cheaper – though more sloppy – counterpart.
(*) The video was made when he was still working on that stuff within Microsoft. It became a separate company shortly after, in 2002.
I mentioned this in the previous discussion about isomorf.
Here is what I consider an AST editor done about as right as it can be, in terms of “getting out of my way”:
A friend of mine, Rik Arends, demoing his real-time WebGL system MakePad at AmsterdamJS this year.
Right, so I’ve taken multiple stabs at research on this stuff in various forms over the years: everything from AST editors to visual programming systems and AOP. I had a bit of an exchange with @akent about it offline.
I worked with Charles a bit at Microsoft and later at Intentional. I became interested in it since there is a hope for it to increase programmer productivity and correctness without sacrificing performance.
You are totally right though, Geoff: the editor experience can be a bugger, and if you don’t get it right, your customers are going to feel frustrated and claustrophobic, and walk away. That’s the way the Intentional Programming system felt way back when: very tedious. Hopefully they improved it a lot.
I attacked it from a different direction than Charles, using markup in regular code. You would drop in meta-tags, which were your “intentions” (using Charles’ terminology). The meta-tags were parameterized functions that ran on the AST in place. They could reflect on the code around them, or even globally, taking into account the normal programmer-typed code, and then “insert magic here”.
It turned out I had basically reinvented a lot of the Aspect-Oriented Programming work that Gregor Kiczales had done a few years earlier, although I had no idea at the time. Interestingly, Gregor was the co-founder of Intentional Software along with Charles.
Charles was more into the “one-representation-to-rule-them-all” thing though and for that the editor was of supreme importance. He basically wanted to do “Object Linking and Embedding”… but for code. That’s cool too.
There were many demos of the fact that you could view the source in different ways, but to be honest, I think that although this demoed really well, it wasn’t as useful (at least at the time) as everyone had hoped.
My stuff had its own challenges too. The programs were ultra powerful, but they were a bit of a black box in the original system. They were capable of adding huge gobs of code that you literally couldn’t see in the editor. That made people feel queasy, because unless you knew what these enzymes did, it was a bit too much voodoo. We did solve the debugging story, if I remember correctly, but there were other problems with them, like how they composed (which had no formalism).
I’m still very much into a lot of these ideas, and things can be done better now, so I’m not giving up on the field just yet.
Oh yeah, take a look at the Wolfram Language as well - another inspirational and somewhat related thing.
But yes, it’s sage advice to see why a lot of the attempts have failed at least to know what not to do again. And also agree, that’s not a reason not to try.
From the first article, fourth page:
The case of Lisp is interesting though because though this language has a well defined syntax with parenthesis (ignoring the problem of macro-characters), this syntax is too trivial to be more useful than the structuring of a text as a string of characters, and it does not reflect the semantics of the language. Lisp does have a better structured syntax, but it is hidden under the parenthesis.
KILL THE INFIDEL!!!
Jetbrains’ MPS uses a projectional editor. I am not sure if this is only really used in academia or if it is also used in industry. The mbeddr project is built on top of it. I remember using it and being very frustrated by the learning curve of the projectional editor.
Ummm, I’m old and crotchety, and one of my cats is sick, so take this with pinch of salt.
“SRE, Go Programmer, Mathematician.”
I beg of you, Software Engineers, Computer Scientists or anyone in the field, pleeeease don’t self-identify with one language. It breaks my heart. There are vast tracts of land to explore!
As far as the post goes, it’s weird: although I enjoy programming in loosely typed or dynamically typed languages occasionally, I’ve generally been just as productive, if not more so, in statically typed environments. Especially in these days of editors that fully understand the AST and type flow.
So, as far as increasing compile times… I don’t even need to compile very often these days - well not until I can already see that the thing compiles - the tools are that good.
Even without the fancy editors, seeing really strong concrete types explicitly in the code is a wonderful form of documentation that is really beneficial to others who have to maintain things years down the line. Personally I only begrudgingly accepted autos and vars in the last five years, although I do have to concede they make refactoring more convenient and save key presses.
All in all, it seems like he is really torn up and like he’s trying to convince himself or his boss one way or the other. I kind of feel for the guy actually :/
I beg of you, Software Engineers, Computer Scientists or anyone in the field, pleeeease don’t self-identify with one language. It breaks my heart. There are vast tracts of land to explore!
I’m still trying to convince our hiring people that advertising for a “Ruby programmer” or “JavaScript programmer” is like advertising for a “hammer user” instead of a “carpenter”.
So, as far as increasing compile times… I don’t even need to compile very often these days - well not until I can already see that the thing compiles - the tools are that good.
I think this is a fascinating point. Strongly typed languages, especially those that separate side effects, don’t really need to be fully compiled to reason about, at least in byte-sized pieces.
This is an excellent writeup on the benefits you see from having the application language be the application.
I’d also like to add a little to this. For some time, I have been working on a multi-user version of LISP scalable to tens of thousands of concurrent sessions on a single machine.
In this system, given sufficient privilege, any user or agent in the system may inspect, reflect or inject definitions in one, any or all of the other environments, whether they are connected and running or not.
As you can imagine, being able to do this from a simple REPL provides amazing power to you the application developer, but certainly comes with huge responsibility. When you push a definition out, it really is live in that moment.
The nice thing is, as this author says, you can play in your sandbox and keep micro-testing until you feel comfortable, then try giving the new definition to one user agent to see how it works before finally committing it to everyone.
That is the beauty of the REPL.
It still scares the pants off me though!
Immutability. (…) This means you’ll tend to program with values, not side-effects. As such, programming languages which make it practical to program with immutable data structures are more REPL-friendly.
Top-level definitions. Working at the REPL consists of (re-)defining data and behaviour globally.
These two points are in furious contradiction, since redefining top-level definitions is pretty much the ultimate side effect. Every other top-level function can see this “action at a distance”.
Let me be perfectly clear: ML and Haskell allow you to program using Lisp’s “every definition is subject to revision” style. Just stuff all your top-level definitions into mutable cells. The reason why it’s not done as frequently as in Lisp-land is because, perhaps, perhaps, this is actually a bad idea.
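For what it’s worth, the “definitions in mutable cells” style looks roughly like this in C++ (a toy sketch, all names mine): the definition lives in a rebindable cell, and every caller reads through the cell, so revising it changes behaviour everywhere, as redefining at a Lisp REPL does.

```cpp
#include <cassert>
#include <functional>

// A "top-level definition" held in a mutable cell. Callers always go
// through the cell, so rebinding it is visible everywhere at once.
std::function<int(int)> area = [](int r) { return 3 * r * r; };  // v1: crude pi

// A caller, "compiled" once; it never needs to change.
int lamp_base(int r) { return area(r) + 10; }

// Later, "at the REPL", the definition is revised in place:
//   area = [](int r) { return 314 * r * r / 100; };
// and lamp_base immediately sees the new definition.
```

This is exactly the action-at-a-distance being debated: convenient for live revision, and by the same token a global side effect.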
redefining top-level definitions is pretty much the ultimate side effect.
Funny, I’d consider the alternative, restarting a process, to be “the ultimate side effect”.
The reason why it’s not done as frequently as in Lisp-land is because, […]
Because defaults matter. Wrapping every definition with unsafePerformIO/readMVar would be prohibitive ceremony.
perhaps, perhaps, this is actually a bad idea.
Why? My experience is that it’s a really great idea.
Funny, I’d consider the alternative, restarting a process, to be “the ultimate side effect”.
How so? There’s no local reasoning about the old process being defeated, precisely because you have discarded the old process.
Wrapping every definition with unsafePerformIO/readMVar would be prohibitive ceremony.
It pales in comparison to the ceremony of re-proving a large chunk of your program correct, because a local syntactic change had global consequences on the program’s meaning.
Why? My experience is that it’s a really great idea.
Because… how do you guarantee anything useful about something that doesn’t have a stable meaning?
There’s no local reasoning about the old process being defeated
Does your program exist in isolation? Useful programs need to deal with stateful services, such as a database. Why shouldn’t your compiler/language/runtime offer tools for dealing with the same set of problems?
It pales in comparison to the ceremony of re-proving a large chunk of your program correct
I don’t understand this point at all. If your “proof” is a type checker, then just run the type checker on the new code…
how do you guarantee anything useful about something that doesn’t have a stable meaning?
You don’t. You stop changing the meaning when you want to make guarantees about it. You don’t need all guarantees at all times, especially during development.
Again, consider stateful services. Do you run tests against the production database? Or do you use a fresh/empty database? If you need something to stand still, you can hold it still.
Useful programs need to deal with stateful services, such as a database.
When they’re running, not when I’m writing them.
Why shouldn’t your compiler/language/runtime offer tools for dealing with the same set of problems?
It isn’t immediately clear to me what kind of problems you’re thinking of, that are best solved by arbitrarily tinkering with the state of a running process.
If your “proof” is a type checker, then just run the type checker on the new code…
It is not. Some things I prefer to prove by hand, since it takes less time.
You don’t need all guarantees at all times, especially during development.
It’s during development that I need those guarantees the most, since, after a program has been deployed, it’s too late to fix anything.
When they’re running, not when I’m writing them.
Your production system is always running. Is it not?
what kind of problems you’re thinking of, that are best solved by arbitrarily tinkering with the state of a running process
Who says it is “arbitrary”? I very thoughtfully decide when to change the state of my running processes. When you change one service out of 100 in a production environment, is that not “rebinding” a definition? Why should programming in the small be so different than programming in the large?
Have you ever worked on a UI with hot-loading? It’s so nice to re-render the view without having to re-navigate to where I was, or to write custom code to snapshot and recover my state between program runs.
What about a game? What if I want to tweak a monster’s AI routine without having to fight through a dozen other monsters to get to the exact situation I want to test? I should be able to change the monster’s behavior at runtime. Great idea to me.
Proof is useful, but it’s not everything, and it’s not even clear that it’s meaningfully harmed by having added dynamism. Instead of proving X, you can prove X if static(Y).
This thread is a direct illustration of the incommensurability between the systems paradigm and the programming language paradigm, as described in Richard Gabriel’s “The Structure of a Programming Language Revolution” https://www.dreamsongs.com/Files/Incommensurability.pdf
I’d just like to chip in here with one word
“Facebook”
The Facebook “Process” is a great example of something that never needs restarting but rather features and services are changed or added to the application while it is running in front of our eyes - no page refresh required in the Web version at least.
I’d say that from a customer’s perspective this is a really good thing, and continuous end-user improvement is the long tail of continuous integration where nobody need do a reinstall ever again.
I realize that this is largely philosophical at this point, but we are starting to have the tools to make this possible in a more general setting.
For me as a user it’s a good thing I think, since I don’t need to lose context in huge point releases. Oh look, edit mesh just appeared on my toolbar. I wonder what that does?
So I guess I’m with Brandon on this one.
Immutability here refers to data and how state is managed in the application. Since Clojure uses immutable data structures, the majority of functions are pure and don’t rely on outside state. This makes it easy to reload any function without worrying about your app getting into a bad state.
It’s a very worthy goal, and my hat is off to you and really anyone who gets into this area of research, personally I do think it is the future.
I’ve run similar experiments using the AST as the primary editing medium and there is so much to like about it.
We tried it once back at Microsoft on the C++ AST. It was pretty neat: zero-time compiles, at least for debug builds, since we could simply execute the AST itself. Pretty fun.
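To illustrate what “executing the AST itself” means, here is a toy expression AST with a direct evaluator (nothing like the real C++ AST, of course): no code generation step at all, just a tree walk.

```cpp
#include <cassert>
#include <memory>

// A minimal expression AST that can be executed directly.
struct Expr {
    char op;            // '+', '*', or 'n' for a literal
    int value = 0;      // used when op == 'n'
    std::unique_ptr<Expr> lhs, rhs;
};

std::unique_ptr<Expr> num(int v) {
    auto e = std::make_unique<Expr>();
    e->op = 'n'; e->value = v;
    return e;
}

std::unique_ptr<Expr> bin(char op, std::unique_ptr<Expr> l,
                          std::unique_ptr<Expr> r) {
    auto e = std::make_unique<Expr>();
    e->op = op; e->lhs = std::move(l); e->rhs = std::move(r);
    return e;
}

// "Zero-time compile": evaluate the tree in place, paying an
// interpretation cost instead of a code-generation cost.
int eval(const Expr& e) {
    switch (e.op) {
        case 'n': return e.value;
        case '+': return eval(*e.lhs) + eval(*e.rhs);
        default:  return eval(*e.lhs) * eval(*e.rhs);  // '*'
    }
}
```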
Charles Simonyi even left Microsoft to work on just this and created Intentional Software. He had a nifty editor that could render in a number of languages, but it certainly wasn’t cloud-based or modern; then again, it goes back a long time.
Another friend of mine is making a project called MakePad. That is an absolutely gorgeous editor that works at the AST level, aimed at teaching kids WebGL. Definitely worth checking out.
Personally I’m doing a lot of research in the area of extensible meta-circular semantic protocols and object models. This work ties in very nicely as well with these kinds of ideas.
I’m very interested and impressed with your project and look forward to seeing your progress.
Very nice work!
Thanks for the encouragement!
We see so much promise in this concept of editing, and the separation of concerns between storage/view/edit. The more time you spend on it, the more you realize that so much of what is baked into a language can just be considered sugar. It would seem we should be rigorous about what is defining logic and what is making the logic easier to read or write.
We welcome the comparison to Simonyi — his work has been suggested to us before. We definitely consider it auspicious to be standing on the shoulders of such giants. I think you are right that since his efforts, cloud infrastructure has really opened up many possibilities on this front.
Where we think we can really add to the picture is through the incorporation of more functional/purity/stateless concepts. Once you start paring down the AST because you have relegated much to sugar, you end up with a structure that is very amenable to analysis. Adding functional purity and referential transparency to this equation enables not just structural analysis but also behavioral analysis. We are excited at the idea that in real-time we could alert a user that he has written a function that is identical to however many more across the world, either through structural analysis or through empirical behavioral analysis. We think this could dramatically increase reuse and reduce redundant work.
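A toy sketch of what that empirical behavioral analysis could look like (entirely hypothetical, not our implementation): fingerprint a pure function by hashing its outputs on a fixed probe set, and flag matching fingerprints as duplicate candidates.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>

// Fingerprint a pure int->int function by its outputs on fixed probes.
// Equal fingerprints mark *candidates* for behavioural duplication
// (equal behaviour on the probes, not a proof of equivalence).
uint64_t fingerprint(const std::function<int(int)>& f) {
    uint64_t h = 1469598103934665603ull;            // FNV-1a offset basis
    for (int x : {-7, -1, 0, 1, 2, 13, 101}) {      // fixed probe inputs
        h ^= static_cast<uint64_t>(static_cast<uint32_t>(f(x)));
        h *= 1099511628211ull;                      // FNV-1a prime
    }
    return h;
}
```

Two structurally different functions with the same behaviour hash alike, which is exactly the signal a structural comparison would miss.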
The teaching angle is an interesting one. There definitely seems to be a lot of momentum toward visual coding aimed at teaching kids early. We’ve considered the possibility that our project could be applied as an instructional environment, letting students learn the concepts of coding without struggling with the command line and other integration/configuration headaches.
We are definitely looking for users and advisors at this stage, so we’d love to chat further and get more of your thoughts/feedback! We’d also be interested to learn more about your research, which sounds very applicable.
I like parts of the transaction model. It’s how I would/do do it - the commit collection, batching and asynchronous reply model.
That said, why are we still using a decades-old domain-specific language from 1979 (SQL) to interact with databases?
A language that has an extremely poor impedance match with “data”, when we have a perfectly good language from two decades earlier, 1958 (LISP), that does a better job and doesn’t require an ad-hoc query planner that tries (and fails) to outsmart the person planning the query.
Not only that but clearly someone didn’t get the memo that relational database models are so “1995”.
I applaud the efforts, and cynicism aside it really looks like they are doing their best here, and appreciate there is still some time to go before SQL falls away.
Unfortunately I can’t really like the fact that the authors are working this hard on what really is a dead paradigm.
Very well made paper however.
Obviously this comment comes off as authoritative, snarky and denigrating to the people who did the work.
That’s really not my intention however; it’s more like
“just sayin…”
Probably because it’s the least dead paradigm in the data world. Tools are useful to the extent that they can be employed to solve problems, and the first part of that is minimizing the number of new concepts the user must learn before they can solve problems.
Just to go further in agreement in relation to your comment.
Select * from Customers where Balance < 0
I just made that up so it probably isn’t valid SQL. It’s years since I wrote a lot of SQL.
Sure, that is easy and as you said, it helps people get going and solve problems.
But look what they just did: they learned a DSL that wasn’t needed. Transducers like map/filter/reduce (or better) are much clearer, both for them and for the machine.
Furthermore those translate to many other languages and compose much more easily.
I’m not convinced it is easier to learn SQL than just learning operators based on basic set theory.
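To make the comparison concrete, here is the query from above written with a plain filter operator in C++ (the Customer type and field names are made up for the illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <string>
#include <vector>

// The same query as "Select * from Customers where Balance < 0",
// expressed as a filter over a set of records.
struct Customer {
    std::string name;
    int balance;  // negative means overdrawn
};

std::vector<Customer> overdrawn(const std::vector<Customer>& customers) {
    std::vector<Customer> out;
    std::copy_if(customers.begin(), customers.end(), std::back_inserter(out),
                 [](const Customer& c) { return c.balance < 0; });
    return out;
}
```

The same copy_if/transform/accumulate building blocks compose with one another and carry over to other containers, other data models and other languages, which is the composability point being made here.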
Not only that, but consider extending SQL to graph databases, geospatial data, etc.
Sure, it can be done, and has been, but only at the cost of vendor-specific lock-in or extremely obtuse syntax.
I used SQL a great deal back in the day, before I “saw the light”. It’s just that there are things that are equally expressive but compose much better, and honestly they’re not that much more difficult.
I think that the problem is a matter of how principles are introduced.
Oh you need to do data? You need to use SQL.
It doesn’t work like that.
Yes I get that and you are right of course.
When SQL started it really seemed quite good and worked awesomely.
Then it got extended - a lot.
It got extended to such a great extent that when I look at SQL now, I feel like the thousands of people who have spent so much time learning it properly have been painted into a corner, and I really feel sorry for them.
I have good friends who are top-notch SQL DBAs, but they can’t transfer their skills easily. They are upset and have been going through the five stages of grief for some time.
Data is not flat relational tables anymore (you could argue it never was), and I really feel bad that they did “Computer Science” to a high level on a very obscure technology that is destined for the same fate as COBOL.
Obviously they get paid a lot. So there is that.