I never found this unintuitive, even long before I encountered category theory.

From what we know about sums, it’s clear that the only sane value of an empty sum is the additive identity, 0. The value of a sum is a property of the (unordered and possibly duplicated) summands, and Sum(A^B) = Sum(A) + Sum(B), where “^” is a concatenation operator. Where ε is an empty list, we note that Sum(A) = Sum(A^ε) = Sum(A) + Sum(ε), so Sum(ε) can only be 0.

Empty product equalling 1 is a little less intuitive insofar as we’re used to thinking of “nothing” as zero, while the inert multiplicand (i.e. multiplicative identity) is not zero but one. But the same arguments work for Product(ε) = 1, All(ε) = True, and Any(ε) = False.
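Python’s built-in folds happen to agree with all four of these conventions, so they can be sanity-checked directly (a quick sketch, nothing beyond the standard library):

```python
import math

# The empty sum is the additive identity...
assert sum([]) == 0
# ...and the empty product is the multiplicative identity.
assert math.prod([]) == 1

# Same pattern for the monoids (bool, and, True) and (bool, or, False):
assert all([]) is True   # vacuous truth: no element is false
assert any([]) is False  # no element is true

# Consistency with concatenation: Sum(A^B) = Sum(A) + Sum(B)
a, b = [1, 2], [3]
assert sum(a + b) == sum(a) + sum(b)
```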

The much more counterintuitive fact is that (∀x) 0 = 1 can be true, if the domain for x is the empty set. In other words, “for all x in {}, $something_false” is true because there are no x in {}. This special case annoys the crap out of model theorists everywhere.
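The vacuous-truth case can also be checked mechanically: over an empty domain, even an always-false predicate satisfies the universal quantifier, while its existential counterpart fails (a Python sketch):

```python
empty = set()

# "For all x in {}, P(x)" holds no matter what P is,
# even for a predicate that is false everywhere:
assert all(x != x for x in empty)

# ...while the corresponding existential claim is false:
assert not any(x == x for x in empty)
```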

More generally, you want to extend a semigroup homomorphism to a monoid homomorphism, which is a very reasonable thing to want to do. Sum is a homomorphism from lists under concatenation to numbers under addition, and a monoid homomorphism must send identity to identity: the empty list goes to 0.
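That extension can be sketched in Python (the `fold` helper is my own name, not a standard API): `reduce` without a seed is a semigroup fold and fails on the empty list; supplying the monoid’s identity as the starting value makes it total:

```python
from functools import reduce
from operator import add, mul

# reduce(add, []) would raise TypeError: a semigroup fold
# is only defined on non-empty lists.

# Supplying the monoid's identity extends it to a monoid fold.
def fold(op, identity, xs):
    return reduce(op, xs, identity)

assert fold(add, 0, []) == 0          # empty sum
assert fold(mul, 1, []) == 1          # empty product
assert fold(add, 0, [1, 2, 3]) == 6
```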

On the other hand, I don’t see why model theorists should find ∀ (x : ⊥). P for arbitrary P annoying. It’s the principle of explosion, which logicians are familiar with, having invented it themselves! The ones who should find it annoying are we programmers, since its computational significance is that you’re creating a control flow path in your program text, only to prove it unreachable later on. By all means believe in the principle of explosion, but please don’t rely on it when writing code!
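The vacuous ∀ and the principle of explosion are the same move viewed from two angles; in Lean (a sketch using the built-in `False` and `Empty` types) both are one-liners:

```lean
-- Principle of explosion: from a proof of False, anything follows.
example (P : Prop) (h : False) : P := False.elim h

-- A ∀ over an uninhabited type is vacuously provable, whatever P is:
example (P : Empty → Prop) : ∀ x : Empty, P x := fun x => nomatch x
```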

By “annoying”, I mainly mean that it’s a special case that has to be addressed sometimes. For example, you can’t say ~(∀x) ~(x = x) unless you assert that there’s at least one x. In fact, this statement is equivalent to ∃x (x = x), which is not true in an empty model.

Exactly. This reasoning extends to any group; addition and multiplication are just special cases. The same holds for matrices, functions, etc.

Rigorous proof for 1 + 1 = 2 in Principia Mathematica (and covers the title).
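The “matrices, functions, etc.” point can be made concrete with function composition: folding composition over an empty list must yield the identity function, for the same monoid reasons that the empty sum is 0 (a Python sketch; `compose_all` is a made-up name):

```python
from functools import reduce

def compose_all(fs):
    # Folding composition over an empty list yields the identity
    # function, the identity element of the composition monoid,
    # just as the empty sum yields 0 and the empty product yields 1.
    return reduce(lambda f, g: lambda x: f(g(x)), fs, lambda x: x)

assert compose_all([])(42) == 42

inc = lambda x: x + 1
dbl = lambda x: x * 2
assert compose_all([inc, dbl])(5) == 11  # inc(dbl(5))
```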