1. 1

This article has no type. At least it is not sound. I think serious writing about type theory starts with studying the aptly named untyped lambda calculus. Terms in the untyped lambda calculus have no type.

1. 7

That’s one way to look at it. From another perspective, the terms of the “untyped” (some would say “uni-typed”) lambda calculus have only one type. That’s how TAPL presents it, for example. I consider that serious enough.

But I would agree that studying the differences between Church’s 1936 “untyped” and 1940 “simply typed” lambda calculi is quite illustrative.

1. 2

It seems utterly pointless to talk about a type system where everything has one and the same type. The only point of a type system is to distinguish between different types. Otherwise you just . . . don’t have a type system.

1. 6

Sure, as long as you’re only talking about that one “untyped” language and its properties, there’s no point. But when you want to compare languages and their features, it’s all of a sudden very helpful.

This happens all the time in the history of math, by the way: as a theory develops and gets elaborated, something previously thought of as distinct gets recognized as a degenerate special case of something else. For example, natural numbers are included in the integers when we recognize the (not necessarily intuitive!) concept of negation, which arises out of subtraction, which in turn is desirable as a way of undoing addition.

1. 1

It can be argued that there’s no point to a type system as such; it’s just a formal system, whether intended to be one or not. It’s useful to know what these formal systems represent.

1. 1

It’s actually a very useful perspective for understanding the semantics of such languages: Dana Scott’s reflexive objects are essentially just a codification of the fact that untyped can be seen as unityped.
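Scott’s observation can be made concrete in a few lines of Haskell: a single recursive type `D` plays the role of the reflexive object D ≅ (D → D). This is only an illustrative sketch (the names are mine, and the `Num` constructor is added purely so results can be observed, since a pure D ≅ D → D has nothing printable in it):

```haskell
-- "Untyped" as uni-typed: every term of the untyped lambda calculus can be
-- interpreted at the single recursive type D.
data D = Fun (D -> D) | Num Int

instance Show D where
  show (Fun _) = "<function>"
  show (Num n) = show n

-- Application inside the one type.
app :: D -> D -> D
app (Fun f) x = f x
app (Num n) _ = error ("cannot apply the number " ++ show n)

-- λx. x lives comfortably at type D ...
identityTerm :: D
identityTerm = Fun (\x -> x)

-- ... and so does self-application λx. x x, which has no simple type.
selfApp :: D
selfApp = Fun (\x -> app x x)

main :: IO ()
main = print (app (app selfApp identityTerm) (Num 42))  -- prints 42
```

Self-application, which the simply typed calculus rejects, type-checks without complaint here because everything collapses into the one type.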

2. 1

Reminds me of that Guy Steele talk where he notices that type theorists primarily use an ad-hoc, unspecified, dynamically typed language to describe the denotational semantics of their language.

1. -2

Untyped programs do exist. The author is just looking in the wrong direction: down toward the hardware, when he should be looking up toward abstract programming languages.

Here’s an example of an “untyped” program: the `id` function from Haskell, `(λx.x) : (∀a. a -> a)`.

It works for every value you pass in, and those values range over an infinitely large domain. The type says that for every type, the function accepts a value of that type and returns a value of the same type.

If you loosen the requirement that a program must represent a valid proof of some logical formula, then you can describe a large variety of programs. The validity of the program is determined when the domain is selected. That’s how you get runtime errors and dynamic typing.

The original post is parroting Robert Harper’s dumbest ideas. It’s equivalent to taking my 2014 “Dynamic typing is good” post and using it as the basis to refute the use of type systems.

It appears that the motive for writing the post came from a short discussion with somebody else. They didn’t reach agreement on the subject, so he decided to write a very assertive blog post to drive his point home.

1. 7

The Haskell `id` function is typed (it is polymorphic, but it has a universal type).

1. 4

I’m afraid I don’t understand what definition of “typed” you are using here; could you clarify a little?

I’m not sure what connection the proofs-as-programs idea has here, this merely states that some interesting type systems happen to mirror some interesting proof theories. You can trivially go from a type system to a logic, though the result is rarely worth studying. Going in the opposite direction is usually more interesting but seems irrelevant.

1. 2

Type systems not only mirror proof theories; they are isomorphic. And the isomorphism is very useful because theorems about programs can be stated in the same language as the programs themselves, as seen in languages such as Coq, Idris, Agda, Epigram… It gives verification powers to languages, and also gives implementation laboratories for logics.
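To make the correspondence concrete, here is a small sketch in plain Haskell (the proper setting is a total language like Coq or Agda, since Haskell’s general recursion makes its logic inconsistent as a proof system): types read as propositions, programs as proofs.

```haskell
-- Curry–Howard in miniature.

-- The identity function proves A -> A (reflexivity of implication):
refl :: a -> a
refl x = x

-- Composition proves transitivity: from A -> B and B -> C, conclude A -> C.
trans :: (a -> b) -> (b -> c) -> (a -> c)
trans f g = g . f

-- Function application is modus ponens: from A -> B and A, conclude B.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x
```

Each type signature is a theorem, and writing a total program of that type proves it.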

1. 1

I avoided the use of the word “isomorphic” because it’s not clear in what category such an isomorphism should live. But yes, it is a very important symmetry!

2. 4
```
GHCi, version 8.0.1
λ :t id
id :: a -> a
```

Looks like a type to me… but that’s just Haskell’s type system. Are you trying to make a more general claim?

1. 3

The more general claim is that it’s idiotic to continue the static vs. dynamic typing debate because it is a false dilemma. And it’s harmful to propagate it.

If you take the role of a dynamic typing zealot, it means you’ve discarded the study of logic and proof theory that could enhance your usual ways of reasoning about programming. You don’t need to stop using dynamic languages to reap the benefits, but once you’ve figured it out you want to use something like Prolog, Coq or Haskell more often.

If you become a static typing zealot, you commit a different sort of fallacy: putting some particular set of deduction rules on a pedestal and discarding all methods of reasoning outside whatever framework you’ve decided to use. Just like the author asserts that everything, including assembly language and C, has to have a well-defined type, even if that type is useless because it fails to convey what the program is trying to solve. There are tons of logical systems and frameworks, and nobody has decreed that we should stick to intuitionistic logic.

Effectively, both stances discard the study of logic in one form or another. That literally harms scientific progress and the movement from theory into practice.

1. 3

I think there are more than two other solutions to `turn`. Any permutation with all cycles of length two or four should work. So NWES satisfies the properties, as does NSWE and NE/WS.
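I haven’t seen the article’s exact definition of `turn`, but assuming it is just a permutation of the four compass directions, the cycle-length claim is easy to check mechanically. A Haskell sketch (all names are mine) verifying that both a 4-cycle and a product of two 2-cycles return to the identity after four applications:

```haskell
-- A permutation composed with itself four times is the identity exactly when
-- the lengths of all its cycles divide 4.
data Dir = N | E | S | W deriving (Eq, Show, Enum, Bounded)

-- A quarter turn: one 4-cycle.
quarter :: Dir -> Dir
quarter N = E
quarter E = S
quarter S = W
quarter W = N

-- A half turn: two 2-cycles (swap N<->S and E<->W).
halfTurn :: Dir -> Dir
halfTurn N = S
halfTurn S = N
halfTurn E = W
halfTurn W = E

-- Apply a turn n times.
times :: Int -> (Dir -> Dir) -> Dir -> Dir
times n f = foldr (.) id (replicate n f)

-- Check turn^4 = id on all four directions.
fourthPowerIsId :: (Dir -> Dir) -> Bool
fourthPowerIsId f = all (\d -> times 4 f d == d) [minBound .. maxBound]
```

Both `fourthPowerIsId quarter` and `fourthPowerIsId halfTurn` come out `True`.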

1. 1

Yes. Cute, isn’t it? If we apply turn N times, I guess any divisor of 4 will do.

1. 2

I am curious if a similar thing could be done using Racket. Either way this is a terrific find and I hope it catches on.

1. 1

There is the Pie toy language that accompanies the book The Little Typer. It is dependently typed. I don’t know how it compares to latte.

1. 9

Simon Peyton Jones is cool and I love Haskell. Also David Thrane Christiansen gives good talks and has amazing work in dependent types.

1. 19

Seems like it’s mostly just “NumPy happened”. And people started building things on top of NumPy, and then things on top of these things…

Also, machine learning doesn’t need types as much as something like compilers, web frameworks or GUI apps. The only type that matters for ML is matrices of floats; ML models don’t really have complex objects that need properties and relationships expressed.

1. 11

Types can be used for more than just stating that layers are matrices. Have a look at the Grenade Haskell library, which lets you fully specify the shape of the network in types: you get compile-time guarantees that the layers fit together, so you don’t reach the end of a few days of training only to find your network never made sense.
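This is not Grenade’s actual API, just a minimal sketch of the underlying idea: put the dimensions in the types, and mismatched layers become compile-time errors rather than crashes hours into training.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
import Data.List (transpose)
import GHC.TypeLits (Nat)

-- A phantom-typed matrix: the dimensions live in the type.
-- (A real library would also check the stored list against the phantom
-- dimensions; this sketch trusts the caller at construction time.)
newtype Matrix (rows :: Nat) (cols :: Nat) = Matrix [[Double]]

-- Multiplication only type-checks when the inner dimensions agree (both k).
-- Passing a Matrix 2 3 and a Matrix 4 1 is rejected by the compiler.
matMul :: Matrix r k -> Matrix k c -> Matrix r c
matMul (Matrix a) (Matrix b) =
  Matrix [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]
```

Chaining “layers” is then just composing `matMul`s whose dimension variables the compiler unifies for you.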

1. 2

I’ve always thought that Idris would be the best language ever for ML.

1. 1

Sadly, Idris is not great with respect to unboxed data types. Much of the dependently typed machinery it implements involves a lot of boxing and pointer chasing… not the greatest for high-performance computing. That’s not inherent to dependent types, but it’s something language designers need to tackle in the future if they want to meet the needs of high-performance computing.

1. 2

Ah well yes, I was thinking more about an API layer for stuff like Tensorflow or Torch, where the Idris type system validates a DAG of operations at compile time and then it’s all translated with the bindings.

2. 1

The exascale-project languages like Chapel were my guess, since they (a) make parallelism way easier for many hardware targets and (b) were advertised to researchers in HPC labs. Didn’t happen. Still potential there, as multicore gets more heterogeneous.

2. 3

> The only type that matters for ML is matrices of floats, they don’t really have complex objects that need properties and relationships expressed.

Is this fact inherent to the study of AI & ML, or is it just how we’ve decided to model things?

1. 3

I guess it’s inherent to the modern hardware. The reason for this deep learning hype explosion is that processors (fully programmable GPUs, SIMD extensions in CPUs, and now more specialized hardware too) have gotten very good at doing lots of matrix math in parallel, and someone rediscovered old neural network papers and realized that with these processors we can make the networks bigger, feed them “big data”, and get pretty good classifiers.

1. 1

On top of that, it’s cheaper. You can get what used to be an SGI Origin or Onyx2 worth of CPUs and RAM for new-car prices instead of mini-mansion prices. Moore’s law plus commodity clusters lowered the barrier to entry a lot.

2. 2

It is inherent to problems that can be represented in linear algebra. But many problems have different representations, like decision trees, for example. Regression and neural networks can mostly be written as matrix operations.

1. 1

I concede that matrices are the most fundamental and optimizable representation for ML. They are literal grids of values, after all; you can’t get much denser than that! However, is it still possible that they do not always lend themselves to higher-level modeling?

For instance, any useful general-purpose computation boils down to some equivalent of machine code—or a Turing machine, for the theoretically-minded. Despite this, we purposefully code in languages that abstract away from this fundamental, optimizable representation. We make some sacrifices in efficiency* in order to more effectively perform higher-order reasoning. Could (or should) the same be done for ML?

(*Note: sometimes, by letting in abstractions, we actually find new optimizations we hadn’t thought of before, as they require a higher-level environment to conceive of and implement conveniently & reasonably. See parallelism-by-default and lazy streams, as in Haskell. Parsing is yet another example of something that used to be done on a low-level, but that is now done more efficiently & productively due to the advent of higher-level tools.)

1. 2

ML is not limited to neural networks. Other ML models use different representations.

Matrices are an abstraction as well. The abstraction doesn’t say that they must be represented as dense arrays; in fact, many libraries can use sparse arrays as needed. And performance comes not only from a denser representation but from other effects like locality, and from less overhead compared to all the boxing/unboxing of higher-level type systems or the method dispatch of most OOP languages.

There is more abstraction at various levels. Some libraries allow the user to specify a neural network in terms of layers. Also matrices are algebraically manipulated as symbolic variables and that makes formulas look simpler.

I guess a few libraries support some kind of dataflow-ish programming by connecting boxes in a graph and having variables propagate as in a circuit. That is very close to the algebraic representation if you think of the formulas as abstract syntax trees for example.
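As an illustration of that last point (names are mine, not any particular library’s API), the dataflow-graph view and the algebraic view coincide once a formula is read as an abstract syntax tree:

```haskell
-- A tiny AST for matrix formulas: each node is a "box" in the dataflow
-- graph, and the whole tree is the algebraic expression.
data Expr
  = Var String        -- a symbolic matrix variable
  | Add Expr Expr     -- elementwise sum
  | MatMul Expr Expr  -- matrix product
  deriving (Eq, Show)

-- A "layer" is just a reusable subgraph: y = W x + b.
dense :: String -> String -> Expr -> Expr
dense w b x = Add (MatMul (Var w) x) (Var b)
```

A library can then interpret the same tree several ways: evaluate it numerically, differentiate it symbolically, or compile it to a device.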

Maybe more abstraction could be useful in defining not only the models but all the data ingestion, training policies, and production/operations as well.

1. 3

There is a lot to say about this, but I believe industrial-grade production systems run in typed languages, even though model development can be done in Python. For example, you can use Python to explore in TensorFlow, save the model, and then use it in a typed application.

1. 1

Where’s the hard evidence?

1. 5

Learning type theory.

1. 1

Which book?

1. 2

Benjamin Pierce’s Types and Programming Languages, and The Little Prover.

1. 3

This looks amazing!

I looked briefly at the source and saw that it compiles to Scheme; I couldn’t look more carefully because I am on my phone. Does it compile to Scheme only? Is this a permanent target or just to speed up development? I thought compiling to Scheme was very interesting, very cool! Did Idris compile to Scheme too?

1. 4

I’m guessing it’s probably to speed up dev. Idris was always easy to port to new platforms, and it seems the Chez Scheme backend was faster than the Idris 1.0 backend, so it makes sense to leverage it for the time being.

1. 1

What I thought was interesting is that if you have a functional language as a target or IR you get closures for free :)

1. 1

Strongly agree that knowing different languages is great! I work mainly with functional programming, and recently I learned about purely functional objects with recursive types and it blew my mind! I had forgotten about OOP after obsessing a bit over Haskell.

1. 3

Learning a bit of assembly. Since I mainly work with functional programming, I think this will provide an interesting counterpoint.

1. 2

Asus ZenBook 3; pretty happy with it, especially because it is light as a feather!

1. 5

Very hard to navigate the site and find what Orleans actually is!

1. 4

Typical Microsoft/enterprise mentality. There was news about Azure Sphere a week or so ago, and I could not determine what Azure Sphere is.

1. 2

Very nice. It would be very useful to have comments on how it all works. Also, something like this for type and effect systems with regions would be great.

1. 3
> Python/Ruby
> • performance 0 / 5
>
> […]
>
> These languages are slow…

The Cython project may give you some pause.

> Rust
>
> […]
>
> • fun 4 / 5

Oh, just you wait… You’ll have a love/hate relationship with the borrow checker in no time. My impression is that Rust is very much still coming of age.

> […]
>
> • fun ? / 5
>
> …my impression is it may not be well suited to imperative operations like read buffers from a stream and write buffers to a stateful database. I could be totally wrong or ignorant and would like to one day address my ignorance.

Same caveats as with Rust. But I think you’ll find Haskell’s state encapsulation isn’t all too bad once you get used to it. Plenty unfamiliar, for sure—but the modularity it provides is priceless. Especially for projects of the sort you’re embarking on.

1. 5

I played around with things like Cython and never found a way to make Python fast. The guys from Sentry did write some parts in Rust and then embedded them in Python; that probably works. https://blog.sentry.io/2016/10/19/fixing-python-performance-with-rust.html

1. 3

What shortcomings did you find in Cython?

2. 1

Agreed, Haskell code is super easy to rewrite.

1. 0

I am not pro or contra centralization per se, but the author misses the point that economics is largely a function of technology, especially if we view institutions as social technology.

1. 1

Interesting read. I wonder why people keep making a big deal of multiple return values (or so-called algebraic types) when it’s a kind of trivial programming-style issue:

```c
// Even decrepit C can use structures as return values.
if (!((x = f(y)).error)) z(); else return x.error;
```

1. 9

C allows you to have both or neither of them filled in. A Rust Result allows Ok or Err, but both is not allowed and neither is not allowed; it must be one or the other. Multiple return values in Go are also allowed to have both filled in, which is why Go’s error handling pattern is terrible.
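The same point can be made in Haskell terms, since Rust’s Result has the same shape as Haskell’s Either: a sum type is either a success or an error, never both and never neither, and the compiler makes you pattern-match before you can touch the value. A minimal sketch (the function names are mine):

```haskell
-- Either String Int holds exactly one of an error message or a result.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

-- There is no way to read the Int out without also saying what happens
-- in the error case; the "forgot to check" state is unrepresentable.
describe :: Either String Int -> String
describe (Left err) = "error: " ++ err
describe (Right n)  = "ok: " ++ show n
```

Contrast a C struct or a Go `(value, err)` pair, where all four combinations of filled/unfilled fields are representable and nothing forces the check.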

1. 5

More helpfully: in Rust you can’t use the result without handling any potential error.

In C or Go you can write code that doesn’t check whether an error occurred and it will seem to work most of the time.

2. 1

Do you mean multiple return types?

1. 1

Thanks

1. 1

Yeah, I was thinking of you there. I decided just to submit the recent finds this week. Keep an eye out on front or in recent for them.

1. 2

Working through some PLT texts, looking for an MSc thesis topic in lightweight formal methods.