These seem to be recorded lectures from some kind of university course.
This course is also available on the author’s blog in a text format: https://bartoszmilewski.com/2014/10/28/category-theory-for-programmers-the-preface/
Having dabbled in Rust and read some blog posts about Haskell, I already knew some of this stuff (like the existence of sum and product types, or Void and unit), but having it all described, with all the connections between the pieces, is kind of blowing my mind. For example, I knew algebraic data types were “algebraic” because they’re a bit like “addition” and “multiplication”, but the video pulls out a whole bunch of high-school algebra, translates it into data-type notation, and shows that it not only works, but that it yields things a programmer would actually recognise as useful.
It’s a long series of videos, and I’m only about halfway through (I just finished watching 5.2 Algebraic Data Types), but I’m more than happy to have spent my Christmas holidays watching them.
You’re slowly stepping over to the dark side; you’ll soon realize that the precise answers to all your software-architecture questions lie somewhere in abstract math :)
Algebraic data types are called “algebraic” because they’re initial algebras for some endofunctor. The fact that, in a strict language, sum types and tuple types behave, respectively, like “addition” and “multiplication” (in that the latter distributes over the former) is unrelated, but also accounted for in category theory. This kind of structure is known as a distributive monoidal category.
Typeful languages without substructural types (e.g., Haskell, ML) invite you to think of their type structure as some sort of constructive moral analogue of the category of sets. The analogy isn’t perfect - it breaks down badly when you take general recursion and nontermination into account - but most of the time you only want to work with terminating procedures anyway, so it’s a useful starting point for designing abstractions with a good power/cost ratio.