There is a technical definition of syntactic sugar proposed over 30 years ago by Matthias Felleisen. Syntactic sugar can be equated with his notion of “macro-definability,” which means that a feature can be implemented using a single local rewrite rule. So, “+=” is syntactic sugar because it can be implemented by the rewrite rule +=(A,B) -> =(A, +(A,B)).
Under this definition, async/await is not syntactic sugar, because it is a global program transformation, albeit a very regular one.
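As a concrete illustration, a single local rewrite rule can be sketched over a toy AST — the tuple encoding here is hypothetical, not from any real compiler:

```python
# Toy AST nodes as tuples, e.g. ("+=", "x", 1).
# desugar applies the single local rewrite rule +=(A,B) -> =(A, +(A,B)).
def desugar(node):
    if isinstance(node, tuple) and node and node[0] == "+=":
        _, a, b = node
        return ("=", a, ("+", a, b))
    return node  # every other node is left untouched

print(desugar(("+=", "x", 1)))  # ("=", "x", ("+", "x", 1))
```

The rule is "local" in exactly Felleisen's sense: it looks at one node, never at the rest of the program.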
A += B is not equivalent to A = A + B in general. x[y] += z is a bit of a special case, because the LHS is an expression, whereas assignments generally expect variables on the LHS. For example, randrange(4) += 1 is not a valid expression in Python.
Typically you deal with array assignments specially, with a rule such as

+=(A[B], C) -> x = A; y = B; z = C; =(x[y], +(x[y], z))

where x, y and z are fresh variables.
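Python's actual behavior shows why fresh variables matter: the target's subexpressions are evaluated only once, so a naive textual expansion to x[y] = x[y] + z (which would evaluate them twice) is not faithful. A small check — the side-effecting idx helper is just for illustration:

```python
calls = []

def idx():
    # Side-effecting index expression, so we can count evaluations.
    calls.append(1)
    return 0

x = [10]
x[idx()] += 5
assert x == [15]
assert len(calls) == 1  # the index expression ran once, not twice
```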
The thing is, isn’t every language feature beyond the bare minimum of what makes a language Turing-complete syntax sugar?
I generally think of syntactic sugar as language features that could be implemented using macros. This definition by @Loup-Vaillant adds some more precision. I do think it is a useful thing to bear in mind as language designers, as a way of reducing the complexity of programming languages.
Almost any language feature can be implemented via macros, though, so that doesn’t seem like a great definition.
For example, Lisp has the concept of “special forms”, which are the intrinsic operations provided by the compiler or runtime, and (in theory) everything else in the language is implemented via those primitive constructs. Common Lisp has 25 special forms, but simpler Lisps can get away with as few as 5 or 6. Common Lisp implementations are given leeway to implement things more efficiently, but for the most part you can treat Lisp code as if it will eventually be deconstructed into special forms.
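In the same spirit, a derived form can be defined by local expansion into a primitive one. Here is a rough Python sketch of how a Lisp-style (when test body) macro expands into the if special form — the tuple encoding is hypothetical:

```python
def expand_when(node):
    # (when test body) -> (if test body nil)
    if isinstance(node, tuple) and node and node[0] == "when":
        _, test, body = node
        return ("if", test, body, "nil")
    return node

print(expand_when(("when", "ready", "go")))  # ("if", "ready", "go", "nil")
```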
I should have been more precise (was trying to avoid jargon, ack!). I was intending to allude to “macro-definability” (see this sibling comment), and less to the more advanced forms of macros found in some languages.
I always understood the word “just” as meaning that this thing is not a separate new concept, but can be understood in terms of an already familiar syntactic construct. Never thought of it as “less important”.
Same here. To think improved syntax is somehow unimportant seems silly to me.
However, there’s the old saying “syntactic sugar causes cancer of the semicolon” in that layering too much syntactic sugar on top of underlying “real” syntax may cause problems with understanding what’s really happening. Or too much variation in ways of expressing the same thing can cause confusion in learners or amongst teams.
Syntactic sugar is opinionated. When you make something easier or more elegantly expressed in your language, you are actively encouraging people to do that thing.
For example, the pipe (|>) operator in Elixir is a very elegant way to express “take the output of this function, and feed it into the next”:
foo()
|> bar()
|> baz(a)
replacing
x = foo()
y = bar(x)
z = baz(y,a)
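In a language without |>, the same left-to-right threading can be approximated with a small helper — a hypothetical pipe function, not part of any standard library:

```python
from functools import reduce

def pipe(value, *funcs):
    # Thread value through each function in turn, like Elixir's |>.
    return reduce(lambda acc, f: f(acc), funcs, value)

# pipe(x, foo, bar) applies foo then bar, reading left to right.
assert pipe(3, lambda v: v + 1, lambda v: v * 2) == 8
```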
Including this functionality encourages users to express their functions as compositions of other functions, and encourages a convention of making the most important input or the thing that’s being ‘changed’ as the first argument. These are desirable traits in the language, so it’s a good language feature. On the other hand, if you add syntactic sugar to make something undesirable in the language easier, you’re encouraging your users to do that undesirable thing, and thus are undermining your own language.
For example, Elixir doesn’t let you index into lists like
x = [1,2,3]
a = x[1]
Because it is not behavior that should be encouraged. On the other hand, if you want the head, that’s easy

[head | _tail] = x

because that is encouraged behavior.
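For comparison, Python’s unpacking gives a head/tail split in the same spirit (on plain lists here, not linked lists):

```python
x = [1, 2, 3]
head, *tail = x  # analogous to Elixir's [head | tail] = x
assert head == 1
assert tail == [2, 3]
```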