As a mathematician, I object to the general notion of this article and want to state that pretty much all of the given examples are either just examples of peculiar notation (e.g. the Euler sum notation) or not a problem at all. A polynomial f(x) (from R[x]) is different from its evaluation function. Because all of this is treated the same by non-mathematicians, and by many mathematicians who are not well versed in algebra, it leads to these misunderstandings.
You think summation notation is very peculiar or outlandish?
As I understand it, Jeremy isn’t saying that these are real problems. He’s just trying to tell programmers that they have the wrong concept of how mathematicians work. We totally say let x = 5, no, wait, let’s make x = 6 instead, and it’s ok. We’re not some kind of Haskellian logical machine where every symbol has a fixed meaning. We often use the same symbol to mean different things. We’re humans. We rely on context and habit. When programmers hold mathematicians up to some idealised perfection, they’re doing us a disservice. We don’t work like that.
Postpostscript: embarrassingly, I completely forgot about Big-O notation and friends (despite mentioning it in the article!) as a case where = does not mean equality! f(n) = O(log n) is a statement about upper bounds, not equality! Thanks to @lreyzin for keeping me honest.
Interestingly enough, we were taught that it is absolutely wrong to use equality in this case, and that one should instead use the “element of” symbol. The only reason equality is used is that “people understand what is meant”.
That’s what I was taught also, but that’s a later attempt at notation cleanup, and Landau’s original notation (still widely used) used the equals instead. Here’s how Tourlakis’s Theory of Computation book explains the notation (p. 339):
Given g : ℕ→ℕ, O(g(n)) is the set of all f : ℕ→ℕ such that, for some constant C, we have f(n) ≤ Cg(n) a.e. The notation f(n) = O(g(n)), introduced by the number-theorist E. Landau, means f(n) ∈ O(g(n)) and is called big-O notation. Expressions in big-O notation are read from left to right. In particular, O(h(n)) = O(g(n)) is abuse of notation for O(h(n)) ⊆ O(g(n)).
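This isn’t from the quoted text, but a standard way to see why this = can’t be genuine equality is that it isn’t symmetric. Spelling out the definition above:

```latex
% f(n) = O(g(n)) really means f(n) \in O(g(n)):
f(n) = O(g(n)) \iff \exists C\ \exists n_0\ \forall n \ge n_0:\; f(n) \le C\,g(n)
% The "=" is not symmetric: n = O(n^2) holds (take C = 1),
% but n^2 = O(n) fails, since no constant C gives n^2 \le C n almost everywhere.
```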
As far as I can tell, the standard textbooks primarily use the equals notation rather than the set notation, e.g. Sipser’s Introduction to the Theory of Computation (I believe the most popular textbook of its kind) does so exclusively.
In my book, = in programming is first and foremost an assignment operator, not equality. So either the author’s argument goes right over my head or I’m otherwise confused :)
Don’t know what the problem is, obviously f = 2 + 3/x
Mocking JavaScript is a long & hallowed tradition, but 30 seconds of research would have revealed to the author that “^” means xor, not pow.
That’s a good point, but it still does let you xor lists with numbers…
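For anyone curious, here is what JavaScript actually does in both cases, easy to verify in any console (the list behavior follows from its number-coercion rules):

```javascript
// "^" is bitwise xor, not exponentiation:
console.log(2 ^ 10);  // 8 (binary 0010 xor 1010), not 1024
console.log(2 ** 10); // 1024 (the actual exponentiation operator)

// And yes, it happily "xors" a list with a number:
// [1, 2] coerces to "1,2" -> NaN -> 0 under ToInt32, so this is 0 ^ 3
console.log([1, 2] ^ 3); // 3
console.log([4] ^ 1);    // 5 ([4] -> "4" -> 4, and 4 xor 1 is 5)
```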