1. 35
    1. 12

      Kit is a new programming language that compiles to C, with modern PL features such as a strong, static type system, traits, boxes, generics and polymorphism, and term rewriting, while still providing low-level features like raw pointers.

      Kit is written in Haskell. It is inspired by languages like Zig, Rust, Haxe, and Scala.

    2. 4

      Thanks for adding algebraic datatypes in!

    3. 3

      I think compiling to ‘readable’ C would be a good choice for a lot of projects. It would make using a less mainstream language waaaay less risky.

      1. 4

        I think interoperability with C is much more important than generating readable C. You’re generally not going to be interacting with the generated code, but it should be easy to link against it from C and vice versa. You get the same risk mitigation either way.

        I found generating “readable” C to be tricky, since it’s such a simple language and has no namespacing.

      2. 3

        For adoption, my default recommendation now is using C’s types, its calling conventions, automatically recognizing its libraries in FFI, and compiling to readable C. Better to be using the C ecosystem than competing with it entirely.

        1. 4

          For adoption, my default recommendation now is using C’s types, its calling conventions, automatically recognizing its libraries in FFI, and compiling to readable C.

          I’ve focused on that in my last two compilers; it’s pretty fun to ship code to clients who don’t even know you’re writing in something else.

          1. 2

            Yeah. I used to do it with an enhanced BASIC. They were talking about me using a “real” language. Lulz.

    4. 2

      What is the concurrency story? I didn’t find any mention of “thread” or “concurrency” on the main page or the examples page.

      1. 1

        There’s nothing announced yet; for the moment you have libpthread for threading, but no Kit-specific support. In the future I’d like to add coroutines, but that’s still in the design phase.

    5. 2

      Why would one use Kit over Rust?

      1. 8

        Creator here. Any of the reasons for using C over Rust would apply. A couple things stand out for me:

        • Seamless interop with C. Kit’s semantics map directly onto C’s, and bindings aren’t needed.
        • Sometimes you don’t need the borrow checker, and it becomes an unnecessary hurdle: to implement a doubly linked list without heap allocation, for example, you’ll likely end up using raw pointers and unsafe blocks anyway. Any shared mutable state requires extra cognitive overhead in Rust.
        • Implicit values and term rewriting result in more concise abstractions.

        With that said, I love Rust and a lot of Kit’s features are directly inspired by Rust.

        1. 5

          Ooh I like the generalized newtype deriving:

          Abstracts inherit some semantics from their underlying type by default, including trait implementations and methods. The abstract can declare its own methods or trait implementations which will take precedence over that of the underlying type.

          I can see how the term rewriting is similar to GHC’s “stream fusion” pragmas, which always felt very tacked-on to Haskell.
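
          For reference, here is what a GHC rewrite rule looks like; this is the classic map-fusion rule from GHC’s documentation, shown purely for comparison (it’s Haskell, not Kit code):

          module Fuse where

          -- Tell GHC to collapse two list traversals into one wherever
          -- this left-hand side appears during optimisation.
          {-# RULES
          "map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
            #-}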

    6. 1

      As a genuine question from someone who hasn’t used procedural programming productively before: what benefits of a procedural language would justify choosing it?

      1. 3

        I would say less conceptual/cognitive overhead, but I don’t know if that’s something that can be said of this language as a whole, as I have no experience with it.

        By that I mean something like: I have a rough idea of what code I want from the compiler, how much mental gymnastics is required to arrive at the source-level code that I need to write?

        I would imagine that’s an important consideration in a language designed for game development.

        1. 4

          Yeah, it makes perfect sense.

          To dumb down Kit’s value prop, it’s a “Better C, for people who need C (characteristics)”.

      2. 2

        On top of alva’s comment, they compile fast and are easy to optimize, too.

      3. 1

        I looked this up for some other article on lobste.rs. I found Wikipedia to have a nice summary:

        https://en.wikipedia.org/wiki/Procedural_programming

        Imperative programming

        Procedural programming languages are also imperative languages, because they make explicit references to the state of the execution environment. This could be anything from variables (which may correspond to processor registers) to something like the position of the “turtle” in the Logo programming language.

        Often, the terms “procedural programming” and “imperative programming” are used synonymously. However, procedural programming relies heavily on blocks and scope, whereas imperative programming as a whole may or may not have such features. As such, procedural languages generally use reserved words that act on blocks, such as if, while, and for, to implement control flow, whereas non-structured imperative languages use goto statements and branch tables for the same purpose.

        My understanding is that if you use, say, C, you are basically using procedural language paradigms.

        1. 2

          Interesting. So basically what was registering in my mind as imperative programming is actually procedural.

          Good to know. Thanks for looking it up!

          1. 2

            I take “imperative” to mean based on instructions/statements, e.g. “do this, then do that, …”. An “instruction” is something which changes the state of the world, i.e. there is a concept of “before” and “after”. Lots of paradigms can sit under this umbrella, e.g. machine code (which are lists of machine instructions), procedural programming like C (where a “procedure”/subroutine is a high-level instruction, made from other instructions), OOP (where method calls/message sends are the instructions).

            Examples of non-imperative languages include functional programming (where programs consist of definitions, which (unlike assignments) don’t impose a notion of “before” and “after”) and logic programming (similar to functional programming, but definitions are more flexible and can rely on non-deterministic search to satisfy, rather than explicit substitution).
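
            To make that contrast concrete in a single language, here’s a small Haskell sketch (my example, not from the thread): the first sum is a sequence of state-changing instructions, the second is a timeless definition.

            import Data.IORef

            -- Imperative: instructions with a "before" and an "after".
            sumImperative :: [Int] -> IO Int
            sumImperative xs = do
              ref <- newIORef 0                        -- state exists before the loop
              mapM_ (\x -> modifyIORef ref (+ x)) xs   -- each step changes that state
              readIORef ref                            -- observe the state afterwards

            -- Declarative: a definition; no notion of time, only of meaning.
            sumDeclarative :: [Int] -> Int
            sumDeclarative = foldr (+) 0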

            1. 1

              If functional programs don’t have a notion of before and after, how do you code an algorithm? Explain Newton’s method as a definition.

                1. 1

                  Both recursion and iteration say “do this, then do that, then do …”. And “let” appears to be assignment or naming, so that AFTER the let operation a symbol has a meaning it did not have before.

                  // open some namespaces
                  open System
                  open Drawing    
                  open Windows.Forms
                  open Math
                  open FlyingFrog
                  

                  changes program state so that certain operations become visible AFTER those lines are executed, etc.

                  1. 3

                    It is common for computation to not actually take place until the result is immediately needed. Your code may describe a complicated series of maps and filters and manipulations and only ever execute enough to get one result. Your code looks like it describes a strict order the code executes in, but the execution of it may take a drastically different path.

                    A pure functional programming language wouldn’t be changing program state, but passing new state along, probably recursively.
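
                    A small Haskell sketch of that point (my example): this pipeline reads like three whole-list passes executed in order, but under lazy evaluation only the three demanded elements are ever computed, even though the input is infinite.

                    -- Only three elements of the infinite list are ever examined.
                    firstThree :: [Int]
                    firstThree = take 3 (map (* 2) (filter even [1 ..]))
                    -- firstThree == [4, 8, 12]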

                    1. 1

                      but you don’t really have a contrast with “imperative” languages - you still specify an algorithm. In fact, algorithms are all over traditional pure mathematics too. Generally the “state” being changed is on a piece of paper or in the head of the reader, but …

                  2. 1

                    so that AFTER the let operation

                    If we assume that let is an operation, then there is certainly a before and an after.

                    That’s not the only way to think about let though. We might, for example, treat it as a form of linguistic shorthand, treating:

                    let x = somethingVeryLongWindedInvolving y in x * x
                    

                    as a shorthand for:

                    (somethingVeryLongWindedInvolving y) * (somethingVeryLongWindedInvolving y)
                    

                    There is no inherent notion of before/after in such an interpretation. Even if our language implements let by literally expanding/elaborating the first form into the second, that can take place at compile time, alongside a whole host of other transformations/optimisations; hence even if we treat the expansion as a change of state, it wouldn’t actually occur at run time, and thus does not affect the execution of any algorithm by our program.

                    Note that we might, naively, think that the parentheses are imposing a notion of time: that the above tells us to calculate somethingVeryLongWindedInvolving y first, and then do the multiplication on the results. Call-by-name evaluation shows that this doesn’t have to be the case! It’s perfectly alright to do the multiplication first, and only evaluate the arguments if/when they’re needed; this is actually preferable in some cases (like the K combinator).
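
                    A quick Haskell illustration of that last point (mine): const, the K combinator, returns its first argument without ever demanding its second, so under call-by-need even a diverging argument is harmless.

                    -- const x y = x never evaluates y, so the program below
                    -- terminates even though its "arguments" never could.
                    demo :: Int
                    demo = const 6 undefined * const 7 undefined   -- 42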

              1. 2

                If functional programs don’t have a notion of before and after, how do you code an algorithm?

                Roughly speaking, we define each “step” of an algorithm as a function, and the algorithm itself is defined as the result of (some appropriate combination of) those functions.

                As a really simple example, let’s say our algorithm is to reverse a singly-linked-list, represented as nested pairs [x0, [x1, [x2, ...]]] with an empty list [] representing the “end”. Our algorithm will start by creating a new empty list, then unwrap the outer pair of the input list, wrap that element on to its new list, and repeat until the input list is empty. Here’s an implementation in Javascript, where reverseAlgo is the algorithm I just described, and reverse just passes it the new empty list:

                var reverse = (function() {
                  function reverseAlgo(result, input) {
                    // an empty array marks the end of the input list; test its
                    // length, since `input === []` is always false in JS
                    return (input.length === 0)? result : reverseAlgo([input[0], result], input[1]);
                  }
                  return function(input) { return reverseAlgo([], input); };
                })();
                

                Whilst Javascript is an imperative language, the above is actually pure functional programming (I could have written the same thing in e.g. Haskell, but JS tends to be more familiar). In particular, we’re only ever defining things, in terms of other things. We never update/replace/overwrite/store/retrieve/etc. This style is known as single assignment.
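
                As a sketch of that Haskell equivalent (my transcription of the same algorithm, not the commenter’s code):

                -- Each recursive call defines a new "version" of the result
                -- rather than updating anything in place.
                reverse' :: [a] -> [a]
                reverse' = go []
                  where
                    go result []         = result                -- input empty: done
                    go result (x : rest) = go (x : result) rest  -- move head across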

                For your Newton-Raphson example, I decided to do it in Haskell. Since it uses Float for lots of different things (inputs, outputs, epsilon, etc.) I also defined a bunch of datatypes to avoid getting them mixed up:

                module Newton where

                newtype Function   = F (Float -> Float)
                newtype Derivative = D (Float -> Float)
                newtype Epsilon    = E Float
                newtype Initial    = I Float
                newtype Root       = R (Float, Function, Epsilon)

                newtonRaphson :: Function -> Derivative -> Epsilon -> Initial -> Root
                newtonRaphson (F f) (D f') (E e) (I x) =
                    if abs y < e
                       then R (x, F f, E e)
                       else recurse (I x')
                  where
                    y       = f x                -- how far off the current guess is
                    x'      = x - (y / f' x)     -- the Newton step
                    recurse = newtonRaphson (F f) (D f') (E e)
                

                Again, this is just defining things in terms of other things. OK, that’s the definition. So how do we explain it as a definition? Here’s my attempt:

                Newton’s method of a function f + guess g + epsilon e is defined as the “refinement” r of g, such that |f(r)| < e. The “refinement” of some number x depends on whether x satisfies our epsilon inequality: if so, its refinement is just x itself; otherwise it’s the refinement of x - (f(x) / f'(x)).

                This definition is “timeless”, since it doesn’t talk about doing one thing followed by another. There are causal relationships between the parts (e.g. we don’t know which way to “refine” a number until we’ve checked the inequality), but those are data dependencies; we don’t need to invoke any notion of time in our semantics or understanding.
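
                To make the definition above concrete, here is one way to apply it (my example inputs, not part of the original comment): approximating the square root of 2 as a root of f(x) = x^2 - 2.

                -- Approximate sqrt 2, starting from an initial guess of 1.
                sqrtTwo :: Root
                sqrtTwo = newtonRaphson (F (\x -> x * x - 2))  -- f
                                        (D (\x -> 2 * x))      -- f'
                                        (E 0.0001)             -- epsilon
                                        (I 1)                  -- initial guess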

                1. 2

                  Our algorithm will start by creating a new empty list, then unwrap the outer pair of the input list, wrap that element on to its new list, and repeat until the input list is empty.

                  Algorithms are essentially stateful. A strongly declarative programming language like Prolog can avoid or minimize explicit invocation of algorithms because it is based on a kind of universal algorithm that is applied to solve the constraints that are specified in a program. A “functional” language relies on a smaller set of control mechanisms to reduce, in theory, the complexity of algorithm specification, but “recursion” specifies what to do when just as much as a “goto” does. Single assignment may have nice properties, but it’s still assignment.

                  To me, you are making a strenuous effort to obfuscate the obvious.

                  1. 3

                    Algorithms are essentially stateful.

                    I generally agree. However, I would say programming languages don’t have to be.

                    When we implement a stateful algorithm in a stateless programming language, we need to represent that state somehow, and we get to choose how we want to do that. We could use successive “versions” of a datastructure (like accumulating parameter in my ‘reverse’ example), or we could use a call stack (very common if we’re not making tail calls), or we could even represent successive states as elements of a list (lazy lists in Haskell are good for this).
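
                    As a sketch of that last option (my example): the successive states of Newton’s method can literally be the elements of a lazy list, with no mutation anywhere.

                    -- Each element is one "state" of the iteration for sqrt 2;
                    -- only the states we actually demand are ever computed.
                    states :: [Double]
                    states = iterate (\x -> x - (x * x - 2) / (2 * x)) 1.0
                    -- take 4 states  ~>  [1.0, 1.5, 1.4166667, 1.4142157]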

                    A strongly declarative programming language like Prolog can avoid or minimize explicit invocation of algorithms because it is based on a kind of universal algorithm that is applied to solve the constraints that are specified in a program.

                    I don’t follow. I think it’s perfectly reasonable to say that Prolog code encodes algorithms. How does Prolog’s use of a “universal algorithm” (depth-first search) imply that Prolog code isn’t algorithmic? Every programming language is based on “a kind of universal algorithm”: Python uses a bytecode interpreter, Haskell uses beta-reduction, even machine code uses the stepping of the CPU. Heck, that’s the whole point of a Universal Turing Machine!

                    “recursion” specifies what to do when just as much as a “goto” does.

                    I agree that recursion can be seen as specifying what to do when; this is a different perspective of the same thing. It’s essentially the contrast between operational semantics and denotational semantics.

                    I would also say that “goto” can be seen as a purely definitional construct. However, I don’t think it’s particularly useful to think of “goto” in this way, since it generally makes our reasoning harder.

                    To me, you are making a strenuous effort to obfuscate the obvious.

                    There isn’t “one true way” to view these things. I don’t find it “strenuous” to frame things in this ‘timeless’ way; indeed I personally find it easier to think in this way when I’m programming, since I don’t have to think about ‘time’ at all, just relationships between data.

                    Different people think differently about these things, and it’s absolutely fine (and encouraged!) to come at things from different (even multiple) perspectives. That’s often the best way to increase understanding: by finding connections between seemingly unrelated things.

                    Single assignment may have nice properties, but it’s still assignment.

                    In name only; its semantics, linguistic role, formal properties, etc. are very different from those of memory-cell-replacement. Hence why I use the term “definition” instead.

                    The key property of single assignment is that it’s unobservable by the program. “After” the assignment, everything that looks will always see the same value; but crucially, “before” the assignment nothing is able to look (since looking creates a data dependency, which will cause that code to be run “after”).

                    Hence the behaviour of a program that uses single assignment is independent of when that assignment takes place. There’s no particular reason to assume that it will take place at one time or another. We might kid ourselves, for the sake of convenience, that such programs have a state that changes over time, maybe going so far as to pretend that these hypothetical state changes depend in some way on the way our definitions are arranged in a text file. Yet this is just a (sometimes useful) metaphor, which may be utterly disconnected from what’s actually going on when the program runs (or, perhaps, when a logically-equivalent one, spat out of several stages of compilation and optimisation, does!).
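
                    A tiny Haskell sketch of that unobservability (mine): the textual order of these bindings is irrelevant, because the only ordering the program can observe is the data dependency.

                    result :: Int
                    result = y + 1
                      where
                        y = x * 2   -- "looking" at x creates the only real ordering
                        x = 10      -- swapping these two lines changes nothing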

                    Note that the same is true of the ‘opposite’ behaviour: garbage collection. A program’s behaviour can’t depend on whether or not something has been garbage collected, since any reference held by such code will prevent it from being collected! Garbage collection is an implementation detail that’s up to the interpreter/runtime-system; we can count on it happening “eventually”, and in some languages we may even request it, but adding it to our semantic model (e.g. as specific state transitions) is usually an overcomplication that hinders our understanding.

                    1. 1

                      A lot of what you see as distinctive in functional languages is common to many non-functional languages. And look up Prolog - it is a very interesting alternative model.

                      1. 1

                        A lot of what you see as distinctive in functional languages is common to many non-functional languages.

                        You’re assuming “what I see”, and your assumption is wrong. I don’t know where you got this idea from, but it’s not from me.

                        I actually think of “functional programming” as a collection of styles/practices which have certain themes in common (e.g. immutability). I think of “functional programming languages” as simply those which make programming in a functional style easier (e.g. eliminating tail calls, having first-class functions, etc.) and “non-functional programming languages” as those which make those styles harder. Most functional programming practices are possible in most languages.

                        In other words, I agree that “A lot of [features of] functional languages is common to many non-functional languages”, but I have no idea why you would claim I didn’t.

                        Note that in this thread I’ve not tried to claim that, e.g. “functional programming languages are better”, or anything of that nature. I was simply stating the criteria I use for whether to call a style/language “imperative” or not; namely, if its semantics are most usefully understood as executing instructions to change the state of the (internal or external) world.

                        And look up Prolog - it is a very interesting alternative model.

                        I’m well aware of Prolog. The research group I was in for my PhD did some fascinating work on formalising and improving logic programming in co-inductive settings; although I wasn’t directly involved in that. For what it’s worth I’m currently writing a project in Mercury (a descendent of Prolog, with static types among other things).

                2. [Comment removed by moderator pushcx: Accidental double-post.]

        2. 1

          So procedural languages are similar to imperative languages, but with somewhat more abstraction?