I’m skeptical of the value of piggy-backing on something as complex as natural language. In my experience, programming languages designed to look like natural languages are more frustrating to learn rather than less, because there’s so much that seems possible or equivalent that isn’t.
(SQL is a good example of a still-heavily-used language designed to look like English. Even though it’s much less powerful than non-query languages, it’s extremely strict about clause ordering, which is hard to remember precisely because, to an English speaker, clause ordering doesn’t matter.)
Until we solve the hard problem, programming will involve a certain amount of learning. It makes more sense to me to minimize the minimum amount of learning necessary to do all important things, rather than replacing it with un-learning / ignoring important parts of a known system.
For instance, the basic set of redirection operators in the Unix shell is small and simple, and one can pick up competence slowly in an interactive system with a shell. Alternatively, it’s fast to learn all of the operations necessary to do real computation in FORTH.
Minimizing the number of ideas necessary to do useful things & then making it easy to gradually pick up new ideas (engineering the difficulty curve to be flat) is my preferred method of getting rid of the early-programming difficulty spike, since it opens up the grey area between programmer and non-programmer to the point where it extends almost all the way to professional.
I don’t think using a formal language with syntax closer to formal written English, rather than one with syntax like that of currently-extant programming languages, does anything to help reduce the complexity of programming computers.
Here’s a random function (in Scala) from one of the codebases I work on. I picked this because it was relatively short and in a file I happened to have open anyway:
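(The original snippet is not reproduced here. The following is a hypothetical sketch of what such a function might look like, inferred entirely from the English description that follows; every name, type, and signature in it is an assumption, not the author’s actual code.)

```scala
// Hypothetical reconstruction -- all names and types here are guesses.
def sarkLogin(session: Session, authCode: String): Future[Done] =
  for {
    // Exchanges the auth code for tokens; a failure here fails the Future.
    sarkAuthResponse <- exchangeSarkCodeForTokens(authCode)
    // Plain value binding; a failure here throws an exception instead
    // (the "old way from before they came up with Futures").
    data = SarkAccount.Data(sarkAuthResponse.access_token,
                            sarkAuthResponse.refresh_token)
    // set(...) is assumed to return a function that is immediately
    // applied to `data`; the successful result is discarded.
    _ <- userService.set(session.userId, SarkAccount)(data)
  } yield Done
```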
And let’s imagine that, instead of a programming language like Scala, I wanted to write this function in a language with syntax similar to formal written English:
SarkLogin is a function. It has two arguments, a Session called ‘session’ and a String called ‘authCode’. In the context of a Future, it does the following things in order:
It takes the authCode and exchanges that sark code for tokens, calling the result ‘sarkAuthResponse’. This might fail in the way that Futures know how to handle.
It makes a SarkAccount.Data out of the sarkAuthResponse’s access_token and refresh_token, calling it ‘data’. This might fail, though not in a way that Futures know how to handle; rather, in the old way from before they came up with Futures.
It sets the user service with the session’s userId and the SarkAccount, then takes the function that returns and immediately calls it with the data from earlier. If that didn’t fail in the way that Futures know how to handle, it throws the returned result away.
If that all worked, yield a Done wrapped in a Future.
Does the latter seem any easier to understand than the former? I don’t think so - because most of the cognitive work a programmer needs to do to understand this isn’t in grokking the syntax, but rather knowing what all the specific technical terms mean.
To understand this code you need to know what a Scala Future is (and how it is a monadic type with its own idiosyncratic failure behavior, hence the “fail in a way that Futures know how to handle” verbiage). You need to know what a Sark is (it’s a custom service that someone else wrote, and I don’t really know what it does other than that the clients of this software need to talk to it sometimes). You need to know what a couple of different data structures involving Sarks do, and you need to know that there’s a distinction between an “authCode” and a “token”, and that’s why that exchangeSarkCodeForTokens function exists (I learned about this from reading documentation on Sark months ago when I wrote this originally and have since forgotten why it mattered).
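For readers unfamiliar with the behavior being described, here is a minimal, self-contained Scala sketch of what it means for a computation to “fail in a way that Futures know how to handle”: a failed step short-circuits the rest of a for-comprehension and the failure is carried in the Future’s value rather than thrown.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.{Failure, Success}

object FutureFailureDemo extends App {
  // A failed Future short-circuits the remainder of the comprehension:
  val result: Future[Int] = for {
    a <- Future.successful(1)
    b <- Future.failed[Int](new RuntimeException("boom")) // fails "in a way Futures know how to handle"
    c <- Future.successful(a + b)                         // never evaluated
  } yield c

  Await.ready(result, 2.seconds)
  result.value.get match {
    case Failure(e) => println(s"handled failure: ${e.getMessage}")
    case Success(v) => println(v)
  }
}
```

Contrast this with the “old way”: an exception thrown outside a Future propagates up the call stack immediately instead of being captured as a value.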
In short, there’s nothing about writing out this code as a precise technical description of what this function is doing (which even this isn’t, not unless I told the compiler exactly what all the natural English phrases I used like “This might fail in the way that Futures know how to handle” mean) that makes the functionality easier to understand for anyone, and it definitely took longer to type because I didn’t use the convenient math-like notation that is a programming language.
I think the author would benefit from looking at Quorum which was designed with syntax and semantics chosen based on research into what made languages easier or harder for people to learn (see: https://quorumlanguage.com/evidence.html).
There haven’t been many attempts at reproducing the results that were used to drive the design of the language, but it still seems like a more worthwhile starting place than COBOL and English grammar.
I’m afraid an inherent issue is being overlooked here.
How can you talk with someone who does not know any of the languages you speak?
And what if you want to talk with two people, each speaking a language the others don’t know?
You need to build a new shared language, the hard way.
This is what happens with programming.
On one end you have a machine, on the other end another human.
You write a letter to both.
Just like with people of different languages, developing a good programming language that can communicate well with both humans and machines is an iterative process that can only be done by trial and error.
And yes, we are still far, far from a solution.
But note that both participants evolve in the meantime.
People evolve to hackers, computers evolve to… cheaper computers. :-)
I have this criticism of the other major use of pseudo-natural-language in UX – conversational interfaces.
Over time, just as a person naturally develops a jargon with their coworkers, all interactive user interfaces based on the exchange of words naturally become more and more like a unix shell or another time-worn REPL system – provided the facilities exist for the machine to evolve along with the user! When the machine can’t support a convenient shorthand, it is abandoned in favor of something that can.
(You don’t need to be a programmer for this to happen. Consider the people, initially enamored of Siri’s conversational nature, who after six months have switched to shouting grammatically-unlinked sequences of keywords at it. They have discovered how to game the natural language processing to get their desired result with less conversation. My father, who enlists my aid in changing his desktop wallpaper despite using Windows for 20 years, figured out the keyword trick naturally.)
The thing is, if you use the system enough, it’ll become yours – and incomprehensible to outsiders. This is normal. (You have in-jokes with friends, too.) It runs counter to Big Computing since in-jokes don’t scale, but I don’t think Big Computing is compatible with user-empowerment anyhow.
Although I’m sure many people have had these same thoughts and have even gone down this route, I generally like this idea and would love to see it developed further for educational purposes. I’m a big fan of Inform and I think it’s pretty amazing what you can do with it given that 99% of the time you are writing in plain English (that actually makes sense to read). The flip side of this is that Inform can be just as frustrating to use as conventional programming languages, because at the end of the day, it’s still a language, and that language has certain rules you must follow and certain ways you have to say what you want to say to achieve some sort of result. With any system, it is natural to build a mental model of how that system works, and so when there is a dissonance between what you have in your head and what the language actually is, there is opportunity for frustration. I am not sure if you can avoid this, because that is going to happen no matter how nice you make the language, but you can build tools around the language to help people as much as possible. Indeed, Inform does have very comprehensive documentation (complete with working examples!) and helpful, readable errors, but sometimes I do wish it were more forgiving in terms of the syntax, or at least better about offering suggestions for how I might phrase something. On the whole, though, I think this is a pretty fascinating exercise at least.
Leaving aside the difficulties of actually reading code in “natural language” programming models, even if one were to master such, at some point you’re going to need to do something that hasn’t been defined in your nice cosy universe, and you’ll have to learn the implementation language underneath. But you’ll have almost no transferable skills; it’ll be like having to start all over again.
I didn’t know about this, chur!