1. 2

    100% cool project. I feel like a project like this would, for me, turn into a billion revisions. He must have been really well practiced to pull this off within a single week.

    1. 2

      This post only links to https://www.reddit.com/r/serverless/, not to an article.

      1. 2

        This looks like the intended article.

        Edit: OK, having now actually read the article, uh, what? Is this satirical? I have no idea.

        Edit 2: Wait, this is posted on IOPipe’s site, which is a technology used in the article. So this is an ad? Weird.

      1. 4

        hey guys, it’s April 2nd, turn it back now.

        1. 5

          It’s not a joke. It’s like this forever.

          1. 9

            It’s not a joke. It’s like this forever.

            If you want a picture of the future, imagine an HTML table stamping on a human face — forever.

            In seriousness, we’re down to the last couple minutes on this gag. I’m taking some screenshots and then cleaning up and resetting the server over the next hour or so.

        1. 4

          This is pretty one-sided. When it comes to building software (or anything, for that matter), there’s both how we build the thing and the thing itself. Often, how well we can build something, or how well it does its job, is directly impacted by how we build it, including the tools and techniques we use.

          It’s good for people out there to have this sort of attitude, because we need people on both sides of the equation. However, the way we build things, and what those things are capable of, advances because of the astronauts who are “wasting” their time. Those astronauts built the tools and patterns that let others just build the thing, and do so successfully.

          I can see his side of the argument here. People who spend a lot of time thinking about abstractions and patterns may struggle to see direct application of those abstractions, which is why it’s good to have people who can take those abstractions and create useful products out of them. But to say those people are unproductive and wasting time is pretty insulting.

          1. 2

            The idea that abstractions are useless always reminds me of the Haskell community (a popular community here on lobste.rs). Useless on purpose. Never trust something “useful”.

          1. 3

            Nice, simple explanation of Dependent Types: type checking where the types can depend on, and are determined by, values. Though I’m still a little confused. Does anybody else have any other good papers that explain Dependent Types?
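
            Not a paper, but to give the flavor in code: Haskell doesn’t have full dependent types, yet length-indexed vectors via GADTs are the standard toy approximation, where the value of the length shows up in the type (my own sketch, not from the linked article):

            {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

            data Nat = Z | S Nat

            -- the vector's length is tracked in its type
            data Vec (n :: Nat) a where
              VNil  :: Vec 'Z a
              VCons :: a -> Vec n a -> Vec ('S n) a

            -- a total head: only typechecks on provably non-empty vectors
            vhead :: Vec ('S n) a -> a
            vhead (VCons x _) = x

            -- vhead VNil   -- rejected at compile time, since 'Z can never be 'S n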

            1. 2

              Is this tech-related, though? I mean, great read, but…

              1. 7

                An interesting question. It’s tagged “science,” but not tech. Is a tech tag implicit on everything posted to lobste.rs?

                1. 2

                  The author is nerd-famous for writing well about programming and Silicon Valley phenomena.

                  1. 1

                    FWIW I would like to continue to see articles such as this, even if they do not include tech.

                  1. 1

                    So, could you do this from the other end with a scheme-like syntax for Haskell itself?

                    You’d have to implement the macro language from scratch I guess.

                    1. 12

                      You would lose a lot of expressive power if you just tried to re-use the GHC typechecker. I’ve spent a lot of time explaining why this is the case in the /r/haskell thread for this blog post, so I will not re-explain here. Instead, I’ll reproduce part of that discussion.

                      Okay, I’ve been sort of avoiding getting into this particular discussion because it’s really complicated, but it seems like a point of confusion, so let me try and clear a couple things up.

                      First of all, GHC is not a single thing, obviously. It has a lot of different pieces to it: it has a parser, a typechecker, a desugarer, an optimizer (itself composed of many different pieces), and a set of backends. When you say “reuse GHC”, you have to be specific about which pieces of GHC you are talking about. Obviously, if you just reuse all of it, then you just have Haskell, so presumably you mean one of two things: reusing the typechecker or reusing the optimizer and backends. It’s possible you also mean both of those things, in which case the compilation strategy would basically just be “compile to Haskell, then run the whole compilation pipeline”.

                      However, let me make this perfectly clear: based on the way Hackett works, Hackett cannot reuse the GHC typechecker. The typechecking algorithms are fundamentally incompatible. If you are advising reusing GHC’s typechecker implementation, then the answer is “no, it can’t be done, no buts, full stop”. Why? Well, again, it’s the thing I keep referencing and quoting; Hackett requires typechecking to be interleaved with macroexpansion, but GHC’s typechecking algorithm is a whole-program analysis. These are incompatible ideas.

                      GHC’s current typechecking algorithm is obviously wildly different from classic Hindley-Milner, but it keeps the general technique of generating a big bag of constraints and solving them at appropriate times (generally just before generalization). This technique has some really good properties, but it also has some bad ones. The good properties are that it provides fantastic type inference for basically all programs, and it does not impose any particular program order since it is such a global transformation. However, the downsides are that error messages can be frustratingly nonlocal and that it requires a full-program traversal to know the types of anything at all.

                      For Haskell, this isn’t so bad. But what does it mean for macros? Well, keep in mind that a macro system wants all sorts of useful things, like the ability to inspect the type of some binding in order to direct its expansion. You can see this yourself in a highly limited form with Template Haskell, which provides reify and reifyModule. Of course, Template Haskell is not designed to be nearly as expressive as a macro system, but it still imposes severe constraints on the typechecker! From the section of the GHC Users Guide on Template Haskell:

                      Top-level declaration splices break up a source file into declaration groups. A declaration group is the group of declarations created by a top-level declaration splice, plus those following it, down to but not including the next top-level declaration splice. N.B. only top-level splices delimit declaration groups, not expression splices. The first declaration group in a module includes all top-level definitions down to but not including the first top-level declaration splice.

                      Each declaration group is mutually recursive only within the group. Declaration groups can refer to definitions within previous groups, but not later ones.

                      Accordingly, the type environment seen by reify includes all the top-level declarations up to the end of the immediately preceding declaration group, but no more.

                      Unlike normal declaration splices, declaration quasiquoters do not cause a break. These quasiquoters are expanded before the rest of the declaration group is processed, and the declarations they generate are merged into the surrounding declaration group. Consequently, the type environment seen by reify from a declaration quasiquoter will not include anything from the quasiquoter’s declaration group.

                      These are serious restrictions, and they stem directly from the fact that GHC’s typechecking algorithm is this sort of whole-program transformation. In Hackett, every definition is a macro, and macro use is likely to be liberal. This restriction would be far too severe. To combat this, Hackett uses a fundamentally different, bidirectional typechecking algorithm, very similar to the one that PureScript uses, which allows the implementation of a Haskell-style type system without sacrificing modularity and incremental typechecking.

                      My implementation of this type system has been remarkably successful given the novelty of the implementation and the amount of time I have spent on it, in no small part due to the availability of the PureScript source code, which has already solved a number of these problems. I don’t think there’s reason to suggest that getting a large set of useful features will be difficult to achieve in a timely manner. The key point, though, is that the easy solution of “just call into GHC!” is a non-starter, and it is a dead end for just the reasons I mentioned above (and I haven’t even mentioned all the myriad problems with error reporting and inspection that sort of technique would create).

                      Okay, so using GHC’s typechecker is out. What about reusing the optimizer and compiler? Well, this is actually something I do want to do! As far as I know, this should be completely feasible. It’s a lot more work than just compiling to the Racket VM for now, though, since the Racket VM is designed to be easy to compile to. In general, I want to support multiple backends—probably at least Racket, GHC, and JavaScript—but that is a big increase in work and complexity. Building for the Racket ecosystem to start gives me a trivial implementation with acceptable speed, an easy ecosystem of existing code to leverage, a host of useful built-in abstractions for building language interoperation, a fully-featured IDE that automatically integrates with my programming language, and an extensible documentation tool that can be used to write beautiful docs to make my new programming language accessible. Building a new language on the Racket platform is an obvious choice from a runtime/tooling point of view, it’s only the typechecker that is a lot of work.

                      So, to summarize: reusing the typechecker is impossible, reusing the compiler optimizer/backend is feasible but extra work. If you have any additional suggestions for how I could take advantage of GHC, I’d love to hear them! But hopefully this explains why the simplest-looking route is not a viable choice for this project.

                      — me, on /r/haskell


                      Here’s some more context about what that additional expressive power actually is, from another part of the thread:

                      I’m particularly interested in the metaprogramming aspect. At which point are macros run? In particular, can I get access to type info in a macro? That would allow implementing things like idiom brackets as a macro, which would be pretty cool.

                      — cocreature, on /r/haskell

                      This is a great question, and it’s absolutely key to the goal of Hackett. Hackett macros are run at compile-time, obviously, but importantly, they are interleaved with typechecking. In fact, it would probably be more accurate to say that typechecking is subsumed by macroexpansion, since it’s the macros themselves that are actually doing the typechecking. This technique is described in more detail in the Type Systems as Macros paper that Hackett is based on.

                      This means that yes, Hackett macros have access to type information. However, the answer is really a little trickier than that: since the Haskell type system is relatively complex but does not require significant type annotation, sometimes types may not be known yet by the time a macro is run. For example, consider typechecking the following expression:

                      (f (some-macro (g x)))
                      

                      Imagine that f and g both have polymorphic types. In this case, we don’t actually know what type g should be instantiated to until some-macro is expanded, since it can arbitrarily change the expression it is provided—and it can even ignore it entirely. Therefore, the inferred type of (g x) is likely to include unsolved type variables.

                      In many cases, this is totally okay! If you know the general shape of the expected type, you can often just introduce some new type variables with the appropriate type equality relationships, and the typechecker will happily try to solve them when it becomes relevant. Additionally, many expressions have an “expected type” that can be deduced from user-provided type annotations. In some situations, this is obvious, like this:

                      (def x : Integer
                        (my-macro (f y)))
                      

                      In this case, my-macro has access to the expected type information, so it can make decisions based on the expectation that the result expression must be an Integer. However, this information can also be useful in other situations, too. For example, consider the following slightly more complicated program:

                      (def f : (forall [a] (SomeClass a) => {(Foo a) -> a}) ...)
                      
                      (def x : Integer
                        (f (my-macro (g y))))
                      

                      In this case, we don’t directly know what the expected type should be just by observing the type annotation on x, since there is a level of application in between. However, we can deduce that, since the result must be an Integer and f is a function from (Foo a) to a, then the expected type of the result of my-macro must be (Foo Integer). This is a deduction that the typechecker already performs, and while it doesn’t work for all situations, it works for many of them.

                      However, sometimes you really need to know exactly what the type is, and you don’t want to burden users with additional type annotations. Typeclass elaboration is a good example of this, since we need to know the fully solved type of some expression before we can pick an instance. In order to solve this problem, we make a promise to the typechecker that our macro’s expansion has a particular type (possibly in terms of existing unsolved type variables), and the typechecker continues with that information. Once it’s finished typechecking, it returns to expand the macro, providing it a fully solved type environment. This is not currently implemented in a general way, but I think it can be, and I think many macros probably fit this situation.

                      In general, this is not a perfectly solvable problem. If a macro can expand into totally arbitrary code, the typechecker cannot proceed without expanding the macro and typechecking its result. However, if we make some restrictions—for example, by weakening what information the macro can obtain or by restricting the type of a macro’s expansion—we can create macros that can implement many different things while still being quite seamless to the user. I think implementing idiom brackets should not only be possible, but it should probably be a good test at whether or not the implementation is really as powerful as I want it to be.

                      For a little bit more discussion along these lines, see this section of a previous blog post.

                      — me, on /r/haskell
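
                      For anyone who hasn’t seen the bidirectional style mentioned above, here is a toy sketch of the idea (my illustration, not Hackett’s or PureScript’s actual implementation): infer synthesizes a type from a term, check pushes an expected type inward, and every judgment is local:

                      data Ty = TInt | TFun Ty Ty deriving (Eq, Show)

                      data Tm
                        = Var String
                        | Lit Integer
                        | Lam String Tm
                        | App Tm Tm
                        | Ann Tm Ty               -- a user annotation switches check into infer

                      type Ctx = [(String, Ty)]

                      -- synthesize a type from the term alone
                      infer :: Ctx -> Tm -> Maybe Ty
                      infer ctx (Var x)    = lookup x ctx
                      infer _   (Lit _)    = Just TInt
                      infer ctx (Ann t ty) = check ctx t ty >> Just ty
                      infer ctx (App f a)  = case infer ctx f of
                        Just (TFun dom cod) -> check ctx a dom >> Just cod
                        _                   -> Nothing
                      infer _   (Lam _ _)  = Nothing    -- bare lambdas don't synthesize

                      -- push an expected type into the term
                      check :: Ctx -> Tm -> Ty -> Maybe ()
                      check ctx (Lam x body) (TFun dom cod) = check ((x, dom) : ctx) body cod
                      check ctx tm ty = case infer ctx tm of
                        Just ty' | ty' == ty -> Just ()
                        _                    -> Nothing

                      Nothing here needs a whole-program constraint pass, so an expander could call into infer or check at any point; that locality is the property the quoted discussion is about.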

                      1. 1

                        Right, so the short version is: yes, but a naive implementation would be impoverished by comparison, because in Hackett macro expansion is tightly integrated with type checking, which means that macros have access to type information at expansion time, which in turn enables richer macro definitions.

                        If you rewrote the Haskell frontend to do that, then you’d have to rewrite the type checker along the way, and you’d end up with something that looked a lot like the existing Hackett compiler.

                        I guess you’d also have to deal with all the questions about how the macro expansion would integrate with the endless extensions to the Haskell type system. Not a small task!

                        I’ll look forward to seeing more of Hackett in the future!

                      2. 2

                        Reading her prior posts on the subject, she talks about the tools the Racket ecosystem provides for creating programming languages. So I’m guessing that if she ever does implement it in Haskell (for instance, to make it easier to import Haskell libraries), she’ll have to wait until she’s gathered a few more helping hands.

                        1. 4

                          (The author is not a he – as per her Twitter, https://twitter.com/lexi_lambda, she identifies as she/her.)

                          1. 2

                            My bad, I’ll correct it!

                        2. 1

                            Haskell has Template Haskell, which is its version of macros/metaprogramming, so it might not necessarily need to be done entirely from scratch.

                          1. 1

                            Sure, but Template Haskell’s syntax is legendarily awkward compared to the equivalent macro in Lisp/Scheme. I dread to think what implementing a macro language in Template Haskell would look like.
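
                            Even a trivial macro shows the ceremony. A hypothetical pair-swapping splice (my own toy example, not anything from the post) already involves quotation brackets, splices, and the Q monad, where a Lisp would use a one-line defmacro:

                            {-# LANGUAGE TemplateHaskell #-}
                            module Swap where

                            import Language.Haskell.TH

                            -- swap the components of a pair at compile time:
                            -- quoting with [| ... |], splicing with $pair, all inside Q
                            swapE :: Q Exp -> Q Exp
                            swapE pair = [| case $pair of (x, y) -> (y, x) |]

                            -- used from another module (TH stage restriction):
                            --   $(swapE [| (1, "a") |])  ==>  ("a", 1)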

                            But maybe I’m overthinking it :)

                        1. 3

                          I don’t know if I could ever accept that vi descended from Emacs, which is what that graph looks like it’s saying.

                          1. 1

                            IDK, seems to me that as you proceed further down the tree towards the bottom of the page, the editors get better and better: towards the top is primitive stuff for ancient hardware that no one uses; towards the bottom is the pinnacle of editordom.

                            :)

                          1. 1

                            wooh I have enough Karma already!

                            1. 8

                              I find the lack of dynamic allocation especially interesting. In what sort of situations would this be necessary?

                              1. 5

                                Embedded systems?

                                1. 1

                                  As an example though, what sort of system requires SSL and does not allow dynamic allocation? Not questioning the purpose of the feature, just curious :)

                                  1. 5

                                    It’s not about “not allowing dynamic allocation”.

                                    Statically allocating everything is more predictable, since you don’t have to worry about e.g. heap fragmentation. It removes some classes of security issues, like failing to handle malloc errors, use-after-frees, and double-frees. It makes it impossible (ish) to have memory leaks, which can also be security-relevant. And it makes it easier to write thread-safe code.

                                    1. 1

                                      Ahhh, that makes a lot of sense. Thank you!

                                    2. 2

                                      You don’t know how annoying it can be when a library uses something you don’t have in some corner you’ve backed yourself into. It’s easier to just be careful from the start.

                                      I can imagine needing to use amazon IOT with client certificates on some bare metal platform for example.

                                      1. 1

                                        for when you need your refrigerator to make purchases with a credit card

                                        self-stocking fridge

                                    3. 1

                                      He wrote this book: http://www.prometheusbrother.com/, which is interesting. All SSL hackers seem to have very varied interests.

                                      1. 1

                                        On top of embedded uses, statically allocated designs can use fewer resources under a fixed-allocation scheme and are easier to analyze. In terms of analysis, you might be able to do timing analysis for covert-channel mitigation, the same analysis for real-time situations, static analysis with tools like Astrée Analyzer to show the absence of critical errors, or whole-app analysis showing it follows your policies in every successful or failed state, thanks to determinism. Always go for static designs when you can. If you can’t, then mix the two, so at least portions can be exhaustively shown to be correct, with extra eyeballs on the dynamic stuff. Similar to mixing pure and impure components in languages like OCaml.

                                        1. 1

                                          Dumb CS question: if you restrict yourself to static allocation, are you Turing-complete?

                                          1. 2

                                            Individual programs are generally not Turing-complete (what would that even mean?); the question only makes sense for language implementations. (Admittedly, any program that accepts input is in some sense a language implementation, but for something that just decrypts SSL, not really.)

                                            A Turing-complete language implementation necessarily needs access to unbounded storage. In a sense such storage has to be “dynamic”, but it could e.g. use the C stack and potentially recurse to arbitrary depth (which I believe would qualify as “static allocation only” in the casual sense; it would be limited by stack size, but you can set an arbitrarily large stack at runtime). Or it could use a fixed amount of memory and a file/database/etc. for storage.

                                            1. 2

                                              Unless you are statically allocating an infinite amount of memory, no

                                              1. 2

                                                …but this is true whether or not you restrict yourself to static allocation. There’s always an upper bound on memory on physical systems.

                                                1. 3

                                                  What do physical systems have to do with this? The same principle applies to Turing machines, you can’t simulate an arbitrary TM using a predetermined (finite) subset of the tape.

                                              2. 1

                                                Stack machines with just one stack and no other storage are not Turing-complete. Turing completeness is not a requirement for many algorithms, though, and is sometimes deliberately avoided, because non-Turing-completeness allows for more interesting proofs about the program at hand (e.g. liveness).

                                                Turing completeness is also interesting because it is not hard to reach, making some things accidentally Turing-complete, such as Magic: The Gathering.

                                            1. 4

                                              Here’s his book: http://zeromq.org/intro:read-the-manual, which I am happy to say I’ve read (up until Chapter 7, anyway).

                                              I feel like he’s leaving a lot behind, though I wonder why he chose “no last words” – or maybe he just means those words aren’t for us.

                                              1. 4

                                                I took “no last words” (for public consumption) to be a sign of humility.

                                                1. 3

                                                  I took it as a sign of humility and a bit of humor: ‘no last words’ are last words.

                                              1. 3

                                                I use Pocket Casts to listen on my phone - I usually listen to podcasts on the move anyway, and the UI is pretty slick.

                                                As far as podcasts go, non-tech-wise:

                                                • 99 Percent Invisible, a great show on architecture, design, urban planning, and much more. I love their focus on the wider (sociological, political) implications of the topic discussed.
                                                • Thinking Allowed, a BBC 4 show mostly on sociology.

                                                Tech-wise:

                                                • Cognicast, the Cognitect podcast on Clojure,
                                                • Functional Geekery, a cool all-around show interviewing interesting folks from the functional world.
                                                1. 2

                                                  Good link. If you like Functional Geekery, check out The Type Theory Podcast: http://typetheorypodcast.com/. Same idea: interviews with big names in functional programming.

                                                  1. 1

                                                    Thanks! I really liked the Type Theory Podcast, but they seem rather inactive – I’d love to hear more episodes.

                                                1. 5

                                                  Obligatory wat talk.

                                                    1. 1

                                                      I don’t think that chart is accurate. According to the chart, [] should work as NaN does for “reflective”, but if you try it, both in the game and in the JS console, it doesn’t. So I think it may be safe to say: don’t trust that table…

                                                      1. 1

                                                        I think the chart is pretty accurate: according to the chart, [] != [], and in fact that’s true (the left and right sides being different instances of Array). Only NaN works for “reflective” because it is the only value that is different from itself; i.e., if a = [] then a == a holds true, while if a = NaN it does not.
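
                                                        The NaN half of that isn’t even a JavaScript quirk; IEEE-754 defines NaN as unequal to itself, so the same thing is reproducible from GHCi, for example (assuming Double follows IEEE semantics):

                                                        ghci> let nan = 0/0 :: Double
                                                        ghci> nan == nan
                                                        False
                                                        ghci> [1] == [1]   -- lists compare structurally in Haskell, unlike JS arrays
                                                        True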

                                                  1. 2

                                                    The next big thing after HTTP/2.

                                                    1. 4

                                                      Looking forward to the blog posts now claiming http is faster than https.

                                                      1. 1

                                                        Surely you mean QUICker? Sorry, couldn’t resist…

                                                    1. 3

                                                      It has garbage collection so it is unsuitable for real-time systems.

                                                      This line is confusing for me. Wouldn’t garbage collection make it “suitable” for real-time systems, or does he mean something different from “real-time systems”?

                                                      1. 9

                                                        The common understanding of garbage collection implementations is that they use a stop-the-world approach, which is a no-no for a system that has to output control data constantly, without pauses.

                                                        I’m not sure about GHC’s implementation specifics, so I can’t tell you whether that’s a bogus concern or not.

                                                        1. 8

                                                          It’s a real concern. GHC’s GC is pretty smart, but it’s not hard realtime.

                                                        2. 2

                                                          He’s using the strict definition of real-time here, meaning “if this process doesn’t run a hundred times a second, the vehicle crashes.” Garbage collection pauses make this hard to guarantee.

                                                        1. 1

                                                          I think, though, that the comparison should be made between Template Haskell and C++ templates. C++ templates are also compile-time only, I believe.

                                                          1. 33

                                                            The wording of the patent grant and exclusion is right there in the post. No version of the word ‘compete’ is in there; it’s the typical mutually-assured-destruction clause to preserve Facebook’s right to respond to a patent claim using its patents. (I’m not a lawyer or your lawyer, but you can read where it says “The license granted hereunder will terminate…” after the link.)

                                                            A comment by someone who says they work on React says Google lawyers were unhappy with a previous version of the patent grant, but Facebook revised it based on their feedback and teams at Google and MS are using it now.

                                                            1. 12

                                                              This sort of thing is common with tech articles these days: a sensationalist title to bring in clicks but, more often than not, an overblown claim.

                                                              1. 6

                                                                It’s common in almost all news articles lately, not just tech.

                                                                1. 1

                                                                  Not just lately.

                                                              2. 1

                                                                …teams at Google and MS are using it now.

                                                                Oh. Huh. I thought all three companies were working on their own separate frameworks and trying to take over the world, rather than cooperating.

                                                              1. 4

                                                                “In the opinion of the team behind haskell-lang.org, the tooling story and general ecosystem infrastructure for the Haskell community has accumulated enough baggage that a clean break is the best use of everybody’s time. ”

                                                                LOL

                                                                1. 1

                                                                   The comments are pretty funny; don’t forget to read them.

                                                                  1. 7

                                                                    I just want to point out that using UUIDs takes almost no thought in any other language.

                                                                    1. 6

                                                                      This is the thing that kills Haskell for me. I enjoyed learning the language, and on some level want to like it, but every little thing is an ordeal.

                                                                      1. 3

                                                                         As is everything in Haskell. Yesod’s still awesome, though. I’m so happy to see a Yesod article for once!

                                                                        1. 1

                                                                          I’m curious, why is Yesod awesome? In particular, what does it do better than all other web frameworks?

                                                                          1. 1

                                                                             Fine, you got me there. Probably just because it’s written in Haskell.

                                                                             I suppose if there were something I could complain about, it’s that the use of Shakespearean Templates made it especially hard to get moving quickly.

                                                                             But how about this: you have more assurance that you are writing a web application in pure functional code than you would in any other framework, because of the language it’s written in.

                                                                        2. 1

                                                                           Yesod is notoriously opinionated (particularly its Template Haskell-heavy ORM). If the author had wanted to use them with other Haskell web frameworks (and less magical/opinionated database libraries), I doubt they would have encountered any trouble at all.

                                                                          1. 1

                                                                             “Any other language” is a large set of languages; I am pretty sure there are more where generating UUIDs requires some coding.

                                                                            That aside, are you talking about the libraries available, or about the language itself? Generating UUIDs in Haskell is not hard if you don’t need to bend them to the Yesod use case. On the other hand, generating random UUIDs would take some time if you don’t have libraries to deal with pseudorandom sequences and entropy generation, for example.
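
                                                                             For what it’s worth, with the uuid package from Hackage, a random UUID really is only a couple of lines (a minimal sketch, independent of any Yesod machinery):

                                                                             import Data.UUID (toString)
                                                                             import Data.UUID.V4 (nextRandom)

                                                                             main :: IO ()
                                                                             main = do
                                                                               uuid <- nextRandom          -- random (version 4) UUID from system entropy
                                                                               putStrLn (toString uuid)    -- e.g. "c2cc10e1-57d6-4b6f-9899-38d972112d8c"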