The case of Lisp is interesting though because though this language has a well defined syntax with parenthesis (ignoring the problem of macro-characters), this syntax is too trivial to be more useful than the structuring of a text as a string of characters, and it does not reflect the semantics of the language. Lisp does have a better structured syntax, but it is hidden under the parenthesis.
Thank you very much for your visit. Yesterday, my hosting server denied access from foreign countries for six hours because of too many requests. This is how my rental server avoids crashing under DoS, so there is nothing I can do about it. If there is a lot of traffic, access may be denied again over the weekend. If you cannot reach this site, please visit again after next week.
Looks like most of this should be taken with a grain of salt (after reading the author's edits). Nonetheless, the comments about the different working styles of enterprise teams versus open-source collaborators are interesting. I have experienced similar tendencies at several companies: teams are often very reluctant to accept contributions from outside their own team.

Runtime type checking is not necessary in the example the author provides. A better approach is to generate TypeScript definitions from the API contract you have with the backend, either from a GraphQL schema or from a Swagger definition. This way there is no runtime overhead, but you still get the benefit of static types for server domain models, which makes it easy to refactor frontend code when those models change.
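A sketch of what that looks like in practice. The `User` shape and the helper names below are invented for illustration; the interface stands in for the output of a generator such as openapi-typescript or graphql-codegen:

```typescript
// Hypothetical generator output: this interface mirrors the backend
// contract and carries zero runtime cost.
interface User {
  id: string;
  displayName: string;
  createdAt: string; // ISO-8601 timestamp as serialized on the wire
}

// A thin typed wrapper: the compiler trusts the contract instead of
// validating it at runtime. `fetchJson` stands in for a fetch call.
async function getUser(
  id: string,
  fetchJson: (url: string) => Promise<unknown>
): Promise<User> {
  return (await fetchJson(`/api/users/${id}`)) as User;
}

// Frontend code written against the generated type: if the backend
// renames `displayName`, regenerating the definitions turns every use
// into a compile error, making the refactor mechanical.
function userLabel(user: User): string {
  return `${user.displayName} (${user.id})`;
}
```

The trade-off is that the types are only as trustworthy as the contract: if the backend drifts from its own Swagger definition, the compiler cannot catch it.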
Often, people want to do simple things, such as renaming a function, adding an argument to a method or splitting a module up. None of these is hard to do, but with the size of our codebase it becomes impossible for a human to find every line that needs modifying.

This sounds like quite a trivial refactoring for an IDE to perform on a typed language. Lately it has even become possible to perform such refactorings on JavaScript/TypeScript codebases using IntelliJ. I haven't written production Python code in a while, but it seems Python is lagging behind in this space, even though it supports type annotations by now.

Reading the homepage, I cannot really distill the benefits of Moon over React's rendering model. It would be beneficial for our UI libraries to be smaller in size, but we shouldn't underestimate the complexity of state-of-the-art virtual DOM rendering. I wonder to what extent Moon can compete with React on performance-oriented features (e.g. React Fiber or async rendering).

At its core, Moon handles every single side effect with a driver. Moon uses a pretty fast virtual DOM diffing algorithm that performs better than React in benchmarks, but a topic I haven't covered much is that you can use React with Moon. I haven't tried it much, but it is pretty simple to make a driver that outputs to React directly and gets you features like Fiber or async rendering.
Can you provide some more details about the benchmarks?
The benchmarks were run locally on my computer, but benchmarks for previous versions of Moon are available in js-framework-benchmark, and there is a PR for the latest version. I'll update the graphs when it gets merged.

It was merged and the benchmarks were updated! Check out the current results here. Moon is only non-keyed at the moment because you manage local DOM state yourself instead of Moon preserving it by reordering nodes.

Stateful serverless seems to be an active area of research. I wonder how common it will become to leverage stateful serverless in enterprise software. Speaking from experience, it would be really cool if you could do persistent, actor-based event sourcing without having to operate a cluster yourself.

This elegantly puts into words my problem with an inheritance-heavy codebase I currently have to work on.
You should also realize that classes without an explicit interface still have an implicit interface, and thus the same statement still applies.
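TypeScript makes this point directly visible, since its type system is structural. The names below are made up for illustration:

```typescript
// A concrete class with no interface declared anywhere.
class FileLogger {
  log(message: string): string {
    return `file: ${message}`;
  }
}

// This function nominally depends on the concrete class...
function audit(logger: FileLogger): string {
  return logger.log("user signed in");
}

// ...yet any object matching the implicit interface
// { log(message: string): string } type-checks, because the class's
// shape *is* its interface.
const memoryLogger = {
  log: (message: string) => `memory: ${message}`,
};

audit(new FileLogger()); // "file: user signed in"
audit(memoryLogger);     // "memory: user signed in"
```

In a nominally typed language the second call would be rejected, but the implicit contract still exists: any subclass or substitute must honour the same shape and behaviour.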
This is what I hate about well-known Scala codebases like Apache Spark (I was digging into the Delta Lake code last night). They use helper traits aggressively, which makes it difficult to track down exactly where a method is implemented without an IDE.

One of the speaker's antipatterns was the monorepo.

I think monorepos provide a lot of benefits. However, none of the available open-source infrastructure works well with them, especially GitHub. In fact, if anyone wants to compete with GitHub, providing tools for working with monorepos might be one way to differentiate.
We’re working on it! https://vfsforgit.com/
Yeah, both git and hg have been doing a lot of great work to make monorepos workable. Like I mentioned elsewhere, the missing piece isn't really the VCS; it's all the rest of the tooling around it.
What is the issue with GitHub monorepos? We have our entire company in one repo at work and I’m not seeing many pain points (ok, we recently made marketing go and use a CDN instead of checking in hundreds of megabytes of images, but nothing other than that).
Interesting! Could you share how many languages and how many distinct projects you have in your codebase? What do you use for your build pipeline? I'm asking because I had pretty negative experiences with GitHub: a large monorepo with around 30 different, independent projects, tested via Jenkins. We were constantly running over the GitHub API quota, and that was only the tip of the iceberg. You mentioned another pain point, storing large artifacts: we all wanted to do vendoring, but with git it was not easy, especially with binary artifacts involved.

I don't have real experience with a monorepo, but some tooling already falls flat when you want it to build more than one project per repo; one really broken tool didn't even work when pointed at a subdirectory at all. Sorry for the lack of specifics, this was at my last job in 2017. I just don't think this will be universally supported all the time by everything.

So one point definitely is the nice integration into GitHub with webhooks and so on. If you rely on a third-party tool, it must already support this very basic step.

GitHub issues don't scale really well with a GitHub monorepo. Most of the GitHub CI/CD systems also don't scale really well with a monorepo.

It's not GitHub-specific, but many people running a monorepo on GitHub also aren't really using a build tool that works well with monorepos. You really want a tool like Bazel, Buck, or Pants with a build cache to get the best results.

There are plenty of competitors.
I’m not aware of any that really support the monorepo model though.
If you are talking about a VCS capable of handling huge monorepos, I can mention https://www.plasticscm.com/ for one.

I feel you need to integrate with a build system to support monorepos better. An inherent part of building a monorepo efficiently is understanding which parts need to be rebuilt as a result of a change. Google and Facebook have build systems capable of this (Bazel is one, I think), but I have never tried any of them yet.

AFAIK Bazel is just one part of the complete build/source-control system, which I believe is called Piper. But I can tell you that Plastic has enough functionality to hold repos as big as Google's and, as you say, to handle complex build processes with modules and so forth.

The VCS isn't the missing piece here. I can have a monorepo in Hg, Git, and a few other tools if I want to. The missing piece is all of the other tooling that needs to work with a monorepo.

Actually no, I'm talking about issue trackers, build systems, continuous integration products: all of the other tooling that a monorepo needs.
Kubernetes is just not the right abstraction for many applications. If you are building a stateless HTTP application, either serverless technology or Knative/Cloud Run is more suitable, since they provide a higher-level abstraction (i.e. no nodes). If you are building software that does require a more advanced operational model (e.g. scheduling tasks), Kubernetes is potentially still too low-level. Consider whether it is more suitable for your platform team to leverage Kubernetes to build a platform runtime tailored to your domain.

Good to see someone implementing something more than a PoC application on top of Scuttlebutt. I personally never got around to understanding the choice of pub servers and their specific role in the P2P network. The pub nodes are a major hassle for onboarding new users, especially those who are not as tech-savvy.

Amazed at how remarkably well Shadertoy (WebGL) runs on mobile devices.

Aside from the control-flow aspect, I think promises are poisoned by exceptions, which are very problematic in a dynamically typed language. Here's my blog post on the problem with try/catch, which gets carried along into promises, which get carried along into async/await. The only solution I've seen proposed for this is bounce.
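A small sketch of the conflation being described; the functions are invented for illustration. An operational failure and an unexpected exception from a defect both arrive at the same catch:

```typescript
// Operational error: the lookup can legitimately fail.
function findUserJson(id: number): Promise<string> {
  return id === 42
    ? Promise.resolve('{"name": "Ada"}')
    : Promise.reject(new Error("user not found"));
}

async function greet(id: number): Promise<string> {
  try {
    const json = await findUserJson(id);
    // JSON.parse can throw a SyntaxError on a malformed payload --
    // a defect, not an expected outcome.
    const user = JSON.parse(json) as { name: string };
    return `hello ${user.name}`;
  } catch (err) {
    // Both kinds of failure land here; telling them apart requires
    // fragile instanceof checks, which is the poisoning in question.
    return `error: ${(err as Error).message}`;
  }
}
```

The rejection that should be handled and the exception that should crash loudly are indistinguishable at the catch site, and async/await inherits this wholesale from promises.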
You should submit your blog post.
Interesting read. Another solution could be to yield a promise from a generator and re-enter the generator with a two-element list: the first element representing a potential error (or undefined) and the second the resolved value. This way we could handle errors à la Go and avoid mixing exceptions with operational errors.
I have been using Dokku for a while on my personal server. It gives you a setup similar to the one in your post (swapping out Caddy for nginx) but spares you the proxy configuration and gives you the option to deploy using git + Heroku buildpacks alongside Dockerfile deploys.

I think none of these approaches cancels/closes the underlying resource, except for using observables? Is there any timeline for when fetch is going to support aborting, now that the cancellation proposal for promises has been rejected?
AbortSignal and AbortController are in the WHATWG DOM spec now, and Firefox and Edge already support it. So I suppose the timeline is “whenever the other major browsers implement it”.
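The plumbing looks roughly like this. `cancellableDelay` is a stand-in for a signal-aware fetch, so the cancellation path is visible without a network; `AbortController` and `AbortSignal` themselves are the real WHATWG API:

```typescript
// A fetch-like operation that honours an AbortSignal: the timer is the
// stand-in for the underlying resource that actually gets released.
function cancellableDelay(ms: number, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    if (signal.aborted) {
      reject(new Error("AbortError"));
      return;
    }
    const timer = setTimeout(() => resolve("done"), ms);
    signal.addEventListener("abort", () => {
      clearTimeout(timer); // release the resource, not just ignore the result
      reject(new Error("AbortError"));
    });
  });
}

const controller = new AbortController();
const request = cancellableDelay(10_000, controller.signal)
  .catch(err => `cancelled: ${(err as Error).message}`);

controller.abort(); // rejects the pending promise and frees the timer
```

With the real fetch, `fetch(url, { signal: controller.signal })` rejects with a DOMException named "AbortError"; here a plain Error with that message stands in for it.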
It is not clear to me why Chrome will start enforcing this HTTPS redirect on .dev domains. Neither the article nor the referenced commit explains it.
Actually, it does: .dev is a legit domain ending (owned by Google, for what it's worth), and so they're just adding it to the preloaded HSTS list via the normal mechanism. This honestly seems entirely rational to me.

We are excited to continue experimenting with this new editing paradigm.

That's fine, but this is not new.
Structured editors (also known as syntax-directed editors) have been around since at least the early 80s. I remember thinking in undergrad (nearly 20 years ago now) that structured editing would be awesome. When I got to grad school I started to poke around in the literature and there is a wealth of it. It didn’t catch on. So much so that by 1986 there were papers reviewing why they didn’t: On the Usefulness of Syntax Directed Editors (Lang, 1986).
By the 90s they were all but dead, except maybe in niche areas.
I have no problem with someone trying their hand at making such an editor. By all means, go ahead. Maybe it was a case of poor hardware or cultural issues. Who knows. But don’t tell me it’s new because it isn’t. And do yourself a favour and study why it failed before, lest you make the same mistakes.
Addendum: here’s something from 1971 describing such a system. User engineering principles for interactive systems (Hansen, 1971). I didn’t know about this one until today!
Our apologies, we were in no way claiming that syntax-directed editing is new. It obviously has a long and storied history. We only intended to describe our particular implementation of it as new. That article was intended for broad consumption: the vast majority of the users with whom we engage have no familiarity with the concepts of structured editing, so we wanted to lay them out plainly. We certainly have studied and drawn inspiration from many of the past and current attempts in this field, but thanks for those links. Looking forward to checking them out. We are heartened by the generally positive reception and feedback – the cloud era offers a lot of new avenues of exploration for syntax-directed editing.
Looks like you’ve been working hard on it. Encouraging!
This is an interesting relevant video: https://www.youtube.com/watch?v=tSnnfUj1XCQ
The major complaint about structured editing has always been a lack of flexibility when editing incomplete or invalid programs, creating an uncomfortable point-and-click experience that is not as fluid and freestyle as text.
However that is not at all a case against structured editing. That is a case for making better structured editors.
That is not an insurmountable challenge and not a big enough problem to justify throwing away all the other benefits of structured editing.
Thanks for the link to the video. That’s stuff from Intentional Software, something spearheaded by Charles Simonyi(*). It’s been in development for years and was recently acquired by Microsoft. I don’t think they’ve ever released anything.
To be clear, I am not against structured editing. What I don’t like is calling it new, when it clearly isn’t. And the lack of acknowledgement of why things didn’t work before is also disheartening.
As for structured editing itself, I like it and I’ve tried it, and the only place I keep using it is with Lisp. I think it’s going to be one of those “worse is better” things: although it may be more “pure”, it won’t offer enough benefit over its cheaper – though more sloppy – counterpart.
(*) The video was made when he was still working on that stuff within Microsoft. It became a separate company shortly after, in 2002.
I mentioned this in the previous discussion about isomorf.
Here is what I consider an AST editor done about as right as can be done, in terms of “getting out of my way”: a friend of mine, Rik Arends, demoing his real-time WebGL system MakePad at AmsterdamJS this year.
Right, so I’ve taken multiple stabs at research on this stuff in various forms over the years, everything from AST editors, to visual programming systems and AOP. I had a bit of an exchange with @akent about it offline.
I worked with Charles a bit at Microsoft and later at Intentional. I became interested in it since there is a hope for it to increase programmer productivity and correctness without sacrificing performance.
You are totally right though Geoff, the editor experience can be a bugger, and if you don’t get it right, your customers are going to feel frustrated, claustrophobic and walk away. That’s the way the Intentional Programming system felt way back when - very tedious. Hopefully they improved it a lot.
I attacked it from a different direction to Charles, using markup in regular code. You would drop in meta-tags, which were your “intentions” (using Charles’ terminology). The meta-tags were parameterized functions that ran on the AST in place. They could reflect on the code around them, or even globally, taking into account the code the programmer typed normally, and then “insert magic here”.

It turned out I basically reinvented a lot of the aspect-oriented programming work that Gregor Kiczales had done a few years earlier, although I had no idea of it at the time. Interestingly, Gregor was the co-founder of Intentional Software along with Charles.
Charles was more into the “one-representation-to-rule-them-all” thing though and for that the editor was of supreme importance. He basically wanted to do “Object Linking and Embedding”… but for code. That’s cool too.
There were many demos of the fact that you could view the source in different ways, but to be honest, I think that although this demoed really well, it wasn’t as useful (at least at the time) as everyone had hoped.
My stuff had its own challenges too. The programs were ultra powerful, but they were a bit of a black-box in the original system. They were capable of adding huge gobs of code that you literally couldn’t see in the editor. That made people feel queasy because unless you knew what these enzymes did, it was a bit too much voodoo. We did solve the debugging story if I remember correctly, but there were other problems with them - like the compositional aspects of them (which had no formalism).
I’m still very much into a lot of these ideas, and things can be done better now, so I’m not giving up on the field just yet.
Oh yeah, take a look at the Wolfram Language as well - another inspirational and somewhat related thing.
But yes, it’s sage advice to study why a lot of the earlier attempts failed, at least to know what not to do again. And I also agree that’s not a reason not to try.
From the first article, fourth page:
KILL THE INFIDEL!!!
JetBrains’ MPS uses a projectional editor. I am not sure whether it is only really used in academia or also in industry. The mbeddr project is built on top of it. I remember using it and being very frustrated by the learning curve of the projectional editor.
On that last point, the Chrome debugger just became a whole lot better with v60: https://developers.google.com/web/updates/2017/05/devtools-release-notes#step-into-async. This is going to reduce the effort of debugging async JS code by so much.
I get a 403 error.
Me too, probably couldn’t handle the traffic. The link is correct.
Maybe they should switch to using Postgres as a backend. :)
According to the website:
What was the reason for switching from Python to Go? I don't think I read that in this particular article.