1. 9

    This is a bold statement. I do quite a bit of ssh -X work, even thousands of miles from the server. I do very much wish ssh -X could forward sound somehow, but I certainly couldn’t live without X’s network transparency.

    1. 6

      Curious, what do you use it for? Every time I tried it, the experience was painfully slow.

      1. 7

        I find it okay for running things that aren’t fully interactive applications. For example I mainly run the terminal version of R on a remote server, but it’s nice that X’s network transparency means I can still do plot() and have a plot pop up.

        1. 5

          Have you tried SSH compression? I normally use ssh -YC.

          1. 4

            Compression can’t do anything about latency, and latency impacts X11 a lot since it’s an extremely chatty protocol.

            1. 4

              There are some attempts to stick a caching proxy in the path to reduce the chattiness, since X11 is often chatty in pretty naive ways that ought to be fixable with a sufficiently protocol-aware caching server. I’ve heard good things about NX, but last time I tried to use it, the installation was messy.

              1. 1

                There’s a difference between latency (what you’re talking about) and speed (what I replied to). X11 mainly transfers an obscene amount of bitmap data.

                1. 1

                  Both latency and bandwidth impact perceived speed.

          2. 6

            Seconded. Decades later, it’s still the best “remote desktop” experience out there.

            1. 3

              I regularly use it when I am on a Mac and want to use some Linux-only software (primarily scientific software). Since the machines that I run it on are a few floors up or down, it works magnificently well. Of course, I could run a Linux desktop in a VM, but it is nicer having the applications directly on the Mac desktop.

              Unfortunately, Apple does not seem to care at all about XQuartz anymore (can’t sell it to the animoji crowd) and XQuartz on HiDPI is just a PITA. Moreover, there is a bug in Sierra/High Sierra where the location menu (you can’t make this up) steals the focus of XQuartz all the time:

              https://discussions.apple.com/thread/7964085

              So regretfully, X11 is out for me soon.

              1. 3

                Second. I have a fibre connection at home. I’ve found X11 forwarding works great for a lot of simple GTK applications (EasyTag), file managers, etc.

                Running my IntelliJ IDE or Firefox over X11/openvpn was pretty painfully slow, and IntelliJ became buggy, but that might have just been OpenVPN. Locally within the same building, X11 forwarding worked fine.

                I’ve given Wayland/Weston a shot on my home theater PC with the xwayland module for backward compatibility. It works… all right. Almost all my games work (Humble/Steam), thankfully, but I have very few native Wayland applications. Kodi is still glitchy, and I know Weston is meant to be just a reference implementation, but it’s still kinda garbage. There also don’t appear to be any Wayland display managers on Void Linux, so if I want to display a login screen, it has to start X, then switch to Wayland.

                I’ve seen the Wayland/X talk and I agree, X has a lot of garbage in it and we should move forward. At the same time, it’s still not ready for prime time. You can’t say, “Well you can implement RDP” or some other type of remote composition and then hand wave it away.

                I’ll probably give Wayland/Sway a try when I get my new laptop to see if it works better on Gentoo.

                1. 2

                  No hand waving necessary, Weston does implement RDP :)

              1. 7

                Laziness is neat. But it’s just not worth it: it makes debugging harder and makes reasoning about code harder. It was the one change in Python 2->3 that I truly hate. I wish there were an eager-evaluating Haskell. At least in Haskell, thanks to monadic IO, laziness is tolerable and doesn’t leave you with tricky bugs (like trying to consume an iterator twice in Python).

                1. 6

                  I had a much longer reply written out but my browser crashed towards the end (get your shit together, Apple) so here’s the abridged version:

                  • Lazy debugging is only harder if your debugging approach is “printfs everywhere”. Haskell does actually allow this, but strongly discourages it to great societal benefit.

                  • Laziness by default meant Haskellers never adopted the strict-sequencing-as-IO hack that strict functional languages mostly fell victim to, again to great societal benefit. The result is code that’s almost always more referentially transparent, leading to vastly easier testing, easier composition, and fewer bugs in the first place.

                  • It’s impossible to appreciate laziness if your primary exposure to it is the piecemeal, inconsistent, and opaque laziness sprinkled in a few places in Python 3.

                  • You almost never need IO to deal with laziness and its effects. The fact that you are conflating the two suggests that you may have a bit of a wrong idea about how laziness works in practice.

                  • Haskell has the Strict language extension, which turns on strictness by default. It’s very rarely used, because most people experienced enough with Haskell to know about it prefer laziness by default. This is experimental evidence that laziness by default may actually be a good idea, once you’ve been forced to grok how it’s used in practice.
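
                  A minimal sketch of that extension in action, assuming GHC 8.0 or later: with the Strict pragma the otherwise-unused let binding below is still forced, so the program crashes with the “forced!” error; delete the pragma and it just prints “fine”.

                  {-# LANGUAGE Strict #-}
                  module Main where

                  main :: IO ()
                  main =
                    let bomb = error "forced!"  -- Strict adds an implicit bang, forcing this thunk
                    in putStrLn "fine"          -- never reached while the pragma is enabled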

                  1. 1

                    Haskell has the Strict language extension, which turns on strictness by default. It’s very rarely used, because most people experienced enough with Haskell to know about it prefer laziness by default. This is experimental evidence that laziness by default may actually be a good idea, once you’ve been forced to grok how it’s used in practice.

                    I am not quite sure whether this is really evidence; I actually never tried to switch it on. I wonder whether that option plays nicely with existing libraries. I guess not many are tested for not depending on lazy evaluation for efficient evaluation. If you use Haskell and Hackage, I guess you are bound to roll with the default.

                    1. 2

                      It works on a per-module basis. All your modules will be compiled with strict semantics, and any libraries will be compiled with the semantics they chose.

                  2. 3

                    Idris has strict evaluation. It also has dependent types, which are amazing, but strict evaluation is a pretty good perk too.

                    1. 2

                      I thought there were annotations for strictness in Haskell.

                      1. 3

                        Yes, but I consider it to be the wrong default. I’d prefer having an annotation for lazy evaluation. I just remember too many cases where I have been bitten by lazy evaluation behaviour. It makes code so much more complicated to reason about.
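
                        To make the trade-off concrete, a hedged sketch (assuming GHC with BangPatterns): today strictness is the opt-in annotation (!), and under the Strict extension laziness becomes the opt-in annotation (~), which is roughly the default being asked for above.

                        {-# LANGUAGE BangPatterns #-}

                        -- Opt-in strictness: the bangs force the accumulator at every step,
                        -- avoiding a long chain of (+) thunks.
                        sumStrict :: [Int] -> Int
                        sumStrict = go 0
                          where
                            go !acc []       = acc
                            go !acc (x : xs) = go (acc + x) xs

                        -- Under {-# LANGUAGE Strict #-}, the annotations flip: bindings are strict
                        -- unless marked lazy with a twiddle, e.g.  let ~delayed = expensive in ...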

                        1. 1

                          Do you happen to remember more detail? I enjoy writing Haskell, but I don’t have a strong opinion on laziness. I’ve seen some benefits and rarely been bitten, so I’d like to know more.

                          1. 1

                            I only have vague memories, to be honest. Pretty sure some were errors due to non-total functions, which I then started to avoid by using a prelude that only offers total ones. But when these occurred, it was hard to find exactly the code path that provoked them. Or rather: harder than it should be.
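
                            A hypothetical sketch of that failure mode (the names here are made up, not from any real code): the partial call happens in one place, but the “Prelude.head: empty list” exception only surfaces later, when the thunk is finally forced, far from the call site.

                            badLookup :: [(String, Int)] -> Int
                            badLookup table = head [v | ("missing-key", v) <- table]  -- partial: head of a possibly empty list

                            main :: IO ()
                            main = do
                              let x = badLookup [("a", 1), ("b", 2)]  -- no error here: x is just a thunk
                              putStrLn "lots of unrelated work..."
                              print (x + 1)                           -- the head exception fires only here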

                            Then, from the tooling side, I started using Intero (or vim-intero); see https://github.com/commercialhaskell/intero/issues/84#issuecomment-353744900. I’m fairly certain that this is hard to debug because of laziness. In that thread there are a few names reporting this problem that are experienced Haskell devs, so I’d consider this evidence that laziness is not only an issue for beginners who haven’t yet understood Haskell.

                            PS: As a side remark, although I enjoy Haskell, it is kind of tiring that the Haskell community seems to conveniently shift between “Anyone can understand monads and write Haskell” and “If it doesn’t work for you, you aren’t experienced enough”.

                      2. 2

                        Eager-evaluating Haskell? At a high level, OCaml is (more or less) an example of that.

                        It has a sweet spot between high abstraction and high mechanical sympathy. That’s a big reason why OCaml has quite good performance despite a relatively simple optimizing compiler. As a side effect of that simple optimizing compiler (read: few transformations), it’s also easy to predict performance and do low-level debugging.

                        Haskell has paid a high price for default laziness.

                        1. 2

                          As a side effect of that simple optimizing compiler (read: few transformations), it’s also easy to predict performance and do low-level debugging.

                          That was used to good effect by Esterel when they did source-to-object code verification of their code generator for aerospace. I can’t find that paper right now for some reason. I did find this one on the overall project.

                          1. 1

                            Yes, however I would like to have typeclasses and monads, I guess; that’s not OCaml’s playing field.

                            1. 1

                              OCaml should Someday™ get modular implicits, which should provide some of the same niceties as typeclasses.

                              1. 1

                                OCaml has monads, so I’m really not sure what you mean by this. Typeclasses are a big convenience, but as F# has shown, they are by no means required for statically typed functional programming. You can get close by abusing a language feature or two, but you’re better off just using existing language features to accomplish the same end that typeclasses provide. I do think F# is working on adding typeclasses, and I think the struggle is of course interoperability with .NET, but here’s an abundantly long GitHub issue on the topic. https://github.com/fsharp/fslang-suggestions/issues/243

                              2. 1

                                F#, an open-source (MIT) sister language, is currently beating or matching OCaml in the for-fun benchmarks game :). Admittedly that’s almost entirely due to the ease of parallelism in F#.
                                https://benchmarksgame.alioth.debian.org/u64q/fsharp.html

                              3. 1

                                Doesn’t lazy io make your program even more inscrutable?

                                1. 1

                                  well, Haskell’s type system makes you aware of many side-effects, so it is a better situation than in, for example, Python.

                                  Again, I still prefer eager evaluation as a default, and lazy evaluation as an opt-in.
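
                                  The classic lazy-IO surprise looks something like this (a minimal sketch using only System.IO from base; the file name is made up): hGetContents returns the file’s contents lazily, so by the time the string is demanded, withFile has already closed the handle and you typically get an empty or truncated result instead of the file.

                                  import System.IO

                                  main :: IO ()
                                  main = do
                                    contents <- withFile "input.txt" ReadMode hGetContents
                                    putStrLn contents  -- forced only here, after the handle is closed: usually prints nothing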

                                2. 1

                                  PureScript is very close to what you want then - it’s basically “Haskell with fewer warts, and also strict” - strict mainly so that it can output clean JavaScript without a runtime.

                                1. 1

                                  This is great, lots of cool tricks I can steal from this!

                                  Is there a version that uses cabal new-build instead of stack? Does this support make -j8 for faster compiles?

                                  I haven’t yet tried doctest and hlint/weeder, that should help my dev process.

                                  1. 7

                                    In the other direction, there’s the Grammatical Framework that uses dependent types to translate among multiple languages.

                                    I used GF to build a small webapp to improve my Swedish vocabulary while I lived there. The code would generate random sentences in both Swedish and English and I’d enter the translation and check my input against the GF translation.

                                    1. 1

                                      Wow, that’s neat. Did you open source it, by any chance?

                                      1. 1

                                        Please share this with us. Would love to see it

                                        1. 1

                                          Sadly no, I think it’s gone forever. Probably wouldn’t be hard to recreate it though.

                                      1. 2

                                        As a counterpoint, there was recently an article on Lobsters about doing Advent of Code in Haskell. In it @rbaron mentioned several challenges caused by lazy evaluation that he had to work around.

                                        1. 5

                                          In my experience, that’s culture rather than difficulty. Most programmers expect certain behavior; we’re just accustomed to things being a certain way. Because of that, learning to code in a lazy language is a good brain-stretching exercise.

                                          1. 1

                                            I beg to differ. There may be some cases where an experienced Haskeller will indeed not pay a penalty because Haskell is lazy by default (as compared to, say, OCaml). However, I have seen my fair share of Haskell programs with space leaks and other issues that weren’t fixed, because the root cause was never found.

                                        1. 5

                                          This is a wonderful post, an adventurous read, and should be called something like hacking to satisfy the need for speed. I had a blast reading this!

                                          1. 1

                                            To me, this post supports having a ThinkPad-style trackpoint in the center of the keyboard. In place of arrow keys, I use Emacs-style text navigation; perhaps that’s a similar solution to the problem?

                                            1. 1

                                              How about a total of ten trackpoints?

                                              My proposed positions:

                                              1. ASZ
                                              2. ESD (or WES?)
                                              3. RDF
                                              4. FCV
                                              5. JNM
                                              6. IJK
                                              7. OKL (or IOK?)
                                              8. L;.
                                              9 & 10: below the spacebar

                                              Take that, multi-touch screens!

                                              1. 2

                                                Wasn’t that the datahand?

                                                Each finger could move four directions and press a button.

                                                1. 1

                                                  Thanks for that link!

                                                  My proposal was meant to be a qwerty keyboard with multiple trackpoints embedded in it. Thinkpad keyboards have a single trackpoint embedded between the G, H, and B keys (“position GHB”).

                                                  Ten trackpoints is too many, right? I am certain how I’d use four: to zoom/rotate in 3D space; I’d use two index fingers and two thumbs.

                                                  Even though I do have the ability to move each of my fingertips individually, ten is too many, right…? I can’t imagine what kind of input action would require that many 2D analog inputs at once… Any ideas?

                                                  Edit: substantial edit, my bad.

                                            1. 2

                                              Multicast rtp data perhaps? Maybe they’re streaming elevator music from their VoIP server?

                                              1. 2

                                                My favorite book on the subject of mental math is https://www.amazon.com/Dead-Reckoning-Calculating-Without-Instruments/dp/0884150879 . It includes piles of interesting, useful tricks for just about everything: addition/subtraction, multiplication/division, factoring, and logarithms. Check it out!

                                                1. 2

                                                  I have always used an old-school IBM mechanical keyboard. The latest trend seems to be keyboard layouts like the ones shown in the article; does this actually improve comfort and feel better after 8 hours of constant use? I was looking at trying one out, but none of my friends have one, and I don’t live anywhere near a place that would have one for demo. Thanks.

                                                  1. 2

                                                    For me, the split keyboard was the largest improvement.

                                                    I blew out an arm on an IBM Model M, had to switch to a Kinesis, and now an ErgoDox. The largest issue was that my wide shoulders meant my hands were turned outwards when on the keyboard, and that put strain on the inside of my wrists.

                                                    With my current setup, the distance between my F and J keys is twenty three inches (just checked), allowing my arms and wrists to relax.

                                                    1. 2

                                                      This depends on your typing style, I think. For me the change to a columnar stagger and the change to using an fn layer for numbers and punctuation made a much bigger difference in comfort than going to a two-piece keyboard; I guess because of the reduced finger travel?

                                                  1. 7

                                                    Some Haskell features are just mistakes; others are designed for PL researchers to experiment, but their usefulness in industrial programming is yet to be proven.

                                                    I’d like to see a list of which Haskell features aren’t considered good for production code, and why! I’ve decided TransformListComp is one, because it has zero uptake in the community. Years ago I saw ImplicitParams go horribly wrong, but haven’t tried it since.

                                                    On the other side, I love OverloadedStrings and GADTSyntax. I asked on Twitter, and the suggested batteries-included extensions were OverloadedStrings, ScopedTypeVariables, LambdaCase, InstanceSigs, and TypeApplications.

                                                    Got any features you feel are good or bad for production Haskell?
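
                                                    For anyone who hasn’t met them, a hedged sketch of a few of the extensions named above in use (assuming GHC and the text package; the names are made up):

                                                    {-# LANGUAGE OverloadedStrings, LambdaCase, TypeApplications #-}
                                                    import Data.Text (Text)

                                                    greeting :: Text
                                                    greeting = "hello"           -- OverloadedStrings: a string literal typed as Text

                                                    describe :: Maybe Int -> String
                                                    describe = \case             -- LambdaCase: no throwaway variable needed
                                                      Nothing -> "nothing"
                                                      Just n  -> "got " ++ show n

                                                    answer :: Int
                                                    answer = read @Int "42"      -- TypeApplications: pick the Read instance explicitly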

                                                    1. 2

                                                      This is a good start on an answer: https://stackoverflow.com/a/10849782/108359

                                                      1. 0

                                                        Lazy evaluation makes it hard to reason about the space usage or even termination of a program. I prefer having eager evaluation with lazy data structures.

                                                        1. 8

                                                          The downside is that you can’t use functions for control flow. An obvious example is Church encoding, e.g.

                                                          true x y = x
                                                          false x y = y
                                                          ifThenElse c x y = c x y
                                                          

                                                          If we call ifThenElse eagerly, both branches x and y will be calculated; if we used these to e.g. select between a base case and a recursive case, we’d cause an infinite loop.

                                                          Most eager languages resort to a built-in lazy if construct (Haskell has one too, but it’s unnecessary), which forces us to reframe our problem in terms of some built-in boolean type rather than custom domain-specific types (for a similar issue, see boolean blindness).

                                                          On the one hand, (Church) booleans are quite a weak argument for laziness since “real programs” will be using some richer, more meaningful, custom datatypes instead; yet on the other hand, most eager programs end up relying on booleans, since they’re the only thing that works with the built-in if!

                                                          Whilst an eager language with things like case and pattern-matching can alleviate some of these problems, I still think that lazy function calls are important. In fact, I think this is a blub paradox, since even eagerly evaluated languages provide lazy && and || functions; but again, this is usually limited to hard-coded, built-in constructs.
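
                                                          To make the base-case/recursive-case point concrete, a minimal sketch (plain GHC Haskell, made-up names): the Church boolean selects between the two branches, and because application is lazy only the chosen branch is ever forced. A strict language would evaluate both arguments first, and the recursion would never bottom out.

                                                          true, false :: a -> a -> a
                                                          true  x _ = x
                                                          false _ y = y

                                                          church :: Bool -> (a -> a -> a)
                                                          church b = if b then true else false

                                                          factorial :: Integer -> Integer
                                                          factorial n = church (n <= 0) 1 (n * factorial (n - 1))  -- terminates only because the recursive branch stays a thunk

                                                          main :: IO ()
                                                          main = print (factorial 5)  -- 120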

                                                          1. 4

                                                            The lazy if example always struck me as silly. It is actually less elegant than you might think: to call it, you need to create two thunks, even though you know in advance that (at least) one will be discarded. I know that optimizing compilers can get rid of the overhead, but that would be an implementation detail that isn’t immediately justified by the semantics of the language in question. In any case, you don’t need if as a built-in in a strict language. It is nothing but syntactic sugar for pattern matching on Bool. Similarly, arithmetic if is syntactic sugar for pattern matching on an Ordering produced by a call to compare. Etc. etc. etc.

                                                            By making a sharp distinction between values and computations, strict languages gain expressivity over lazy ones. If you want to recover the benefits of laziness in a predominantly strict language, you can always define an abstract type constructor of thunked computations - that can be implemented in 20-30 lines of ML. On the other hand, adding seq to a lazy language wreaks havoc in its semantics, weakening or destroying many free theorems, presumably the main reason for preferring a lazy language over a strict one.
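
                                                            For illustration, a sketch of that thunk abstraction, written here in Haskell syntax as one would write it in a strict ML (the names are made up; a real version would memoise the result with a mutable cell):

                                                            newtype Thunk a = Thunk (() -> a)

                                                            suspend :: (() -> a) -> Thunk a
                                                            suspend = Thunk

                                                            force :: Thunk a -> a
                                                            force (Thunk f) = f ()

                                                            -- A user-level lazy "if" for a strict language: only the chosen branch is forced.
                                                            ifThenElseLazy :: Bool -> Thunk a -> Thunk a -> a
                                                            ifThenElseLazy c t e = if c then force t else force e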

                                                            EDIT: Wording. Broken link.

                                                            1. 3

                                                              In any case, you don’t need if as a built-in in a strict language. It is nothing but syntactic sugar for pattern matching on Bool. Similarly, arithmetic if is syntactic sugar for pattern matching on an Ordering produced by a call to compare. Etc. etc. etc.

                                                              Yes, I hand-waved this away as “things like case and pattern-matching” :)

                                                              a sharp distinction between values and computations

                                                              From this perspective, my point could be phrased as laziness “unifying” values and computations. In the if example, we can implement it eagerly by making the thunks explicit, e.g. giving true and false type (Unit -> a) -> (Unit -> a) -> a. Or should that be (Unit -> a) -> (Unit -> a) -> Unit -> a? Maybe we need both…

                                                              One of my favourite things about programming is when I come across unifications of concepts I’d previously considered separate. Examples which come to mind are Python classes being first-class objects, Smalltalk control-flow statements (e.g. ifTrue:) being method calls, Lisp code being data (in a way that’s both useful (unlike strings) and looks the same (unlike complicated AST encodings)), and pointers being ints in C.

                                                              Whilst I accept that we may lose expressivity, and this can certainly be important for resource-constrained or -intensive tasks, for the sorts of inefficient plumbing I tend to write I very much like the symmetry that is gained from having fewer distinct concepts/constructs/etc. to take into account (especially when meta-programming!).

                                                              adding seq to a lazy language wreaks havoc in its semantics, weakening or destroying many free theorems

                                                              That’s certainly a problem, I agree :(

                                                              1. 4

                                                                Yes, I hand-waved this away as “things like case and pattern-matching” :)

                                                                Handwaving is failing to provide a proof. What you did was simply make a wrong statement:

                                                                Whilst an eager language with things like case and pattern-matching can alleviate some of these problems, I still think that lazy function calls are important.

                                                                Pattern matching isn’t there to “alleviate” any “problems” caused by the lack of laziness. Pattern matching is the eliminator for sum types, which, by the way, don’t actually exist in lazy languages.

                                                                since even eagerly evaluated languages provide lazy && and || functions; but again, this is usually limited to hard-coded, built-in constructs.

                                                                In a strict language, && and || are not functions, but rather syntactic sugar over nested uses of pattern matching on the boolean type.


                                                                From this perspective, my point could be phrased as laziness “unifying” values and computations.

                                                                You aren’t unifying anything. What you’re doing is ditching values altogether, and replacing them with trivial computations that return those values. In your limited world, values only exist “at the end of the computation”, but they are not first-class mathematical objects that you can, say, bind to variables. So you are deprived of any proof techniques that rely on variables standing for values, such as structural induction.

                                                                On the other hand, if you properly place value types and computation types in separate categories, you will find that these categories are related by an adjunction and admit different universal constructions. Explicitly acknowledging this structure allows you to define abstractions that don’t break in the face of nontermination or explicit thunk-forcing, unlike most of Haskell’s library ecosystem.

                                                                One of my favourite things about programming is when I come across unifications of concepts I’d previously considered separate.

                                                                If essential properties of the objects being “unified” are lost, it stops being “unification” and becomes “conflation”.

                                                                for the sorts of inefficient plumbing I tend to write I very much like the symmetry that is gained from having fewer distinct concepts/constructs/etc. to take into account (especially when meta-programming!).

                                                                If anything, you have destroyed the symmetry (duality) between values and computations.

                                                                EDIT: Added remark about conflation. Fixed typo.

                                                                1. 3

                                                                  Pattern matching isn’t there to “alleviate” any “problems” caused by the lack of laziness. Pattern matching is the eliminator for sum types

                                                                  Elimination is when we select one of the branches. That can be done before or after evaluating them. That’s the distinction I was making: in my made-up terminology an “eager case expression” would evaluate all branches to normal form, then select the appropriate one to return; a “lazy case expression” would select the appropriate branch before attempting to evaluate any of the branches. You’re right that if, && and || in eager languages are syntactic sugar over pattern matching (case), but my point was that even eager languages use the “lazy case expression” variety, not the “eager case expression”.

                                                                  The way those languages achieve such laziness is by making pattern-matching a special language construct. We can’t write user functions which behave the same way (unless we distinguish between values and thunks); note that we can write macros to do this, as is common in Lisps.

                                                                  With lazy function calls, such “lazy case expressions” can, in principle, be replaced by eliminator functions (like Church encodings and their variants); although I currently think that would be less nice (see Morte and Dhall, for example).

                                                                  If essential properties of the objects being “unified” are lost, it stops being “unification” and becomes “conflation”.

                                                                  Sure, but what counts as “essential” is subjective. C programmers might consider pointer arithmetic to be essential, which is lost by high-level representations like algebraic (co-)datatypes. Personally, I’m perfectly happy to e.g. conflate scope with lifetime (i.e. garbage collectors based on liveness).

                                                                  I don’t have particularly strong opinions when it comes to either distinguishing or conflating the distinctions used by CBPV, “polarity”, co/data, etc. That’s why I currently fall back on my “symmetry” heuristic.

                                                                  If anything, you have destroyed the symmetry (duality) between values and computations.

                                                                  By “symmetry” I didn’t mean “duality between two different sorts of thing”, I meant “treating all of these things in a single, uniform way” (i.e. conflating). Fewer distinctions means fewer choices to be made, fewer cases to be handled and fewer combinatorial explosions to tackle. This makes life easier when e.g. parsing, code-generating, interpreting, etc.

                                                                  Of course, conflating is a simplistic thing to do; yet such ease is nice to have, so I treat it as the “activation energy” (the ease/value offered) that each distinction must overcome in order for me to use them (at the loss of symmetry).

                                                                  Examples of distinctions which I think are generally worth it (for me):

                                                                  • Type-level/value-level (as found in dependently-typed languages)
                                                                  • Compile-time/run-time
                                                                  • Pure/effectful

                                                                  Distinctions I don’t think are worth it (for me):

                                                                  • Type-level/kind-level
                                                                  • Separate constructs for types/values (e.g. type-level computation in Haskell, type families, etc.)
                                                                  • Statements/expressions
                                                                  • Linear/non-linear types
                                                                  • Concrete/abstract syntax (i.e. not using s-expressions)
                                                                  • Value/delayed-computation (“polarity”)
                                                                  • Partial/total
                                                                  • Native/managed

                                                                  Of course, these are just my preferences, and they vary depending on what problem I’m tackling and what mood I’m in ;)

                                                                  1. 3

                                                                    Elimination is when we select one of the branches. That can be done before or after evaluating them. That’s the distinction I was making: in my made-up terminology an “eager case expression” would evaluate all branches to normal form

                                                                    This doesn’t make sense. Expressions under binders are not reduced in a call-by-value language, and the left-hand side of a branch is very much a binder for any variables used as constructor arguments. So what you call an “eager case expression” does not exist at all.

                                                                    The way those languages achieve such laziness is by making pattern-matching a special language construct.

                                                                    A strict language with pattern matching doesn’t “achieve laziness”. It simply evaluates arms at the correct moment.

                                                                    Sure, but what counts as “essential” is subjective. C programmers might consider pointer arithmetic to be essential, which is lost by high-level representations like algebraic (co-)datatypes.

                                                                    I’m not sure pointer arithmetic is essential, but array manipulation certainly is essential, and memory-safe languages do a very poor job of dealing with it. I’d be willing to give up memory-safety in exchange for a predicate transformer semantics for array manipulation, so long as the semantics is explicitly formalized.

                                                                    Personally, I’m perfectly happy to e.g. conflate scope with lifetime (i.e. garbage collectors based on liveness).

                                                                    I’m not. Languages without substructural types suck at manipulating anything that doesn’t last forever (e.g., file handles).

                                                                    (i.e. conflating). Fewer distinctions means fewer choices to be made, fewer cases to be handled and fewer combinatorial explosions to tackle. This makes life easier when e.g. parsing, code-generating, interpreting, etc.

                                                                    And it makes life harder when debugging. Not to mention when you want to formally prove your programs correct. (I do.)

                                                                    1. 2

                                                                      Expressions under binders are not reduced in a call-by-value language

                                                                      Yes, that’s what I’m referring to as ‘a form of laziness’. Expressions under binders could be (partially) reduced, if we wanted to. Supercompilers do this (at compile time), as do the morte and dhall languages, for example.

                                                                      what you call “eager case expression” does not exist at all

                                                                      The reason that basically no language does this is because it’s a pretty terrible idea, not that it couldn’t be done in principle. It’s a terrible idea because it evaluates things which will be discarded, it introduces preventable non-termination (e.g. with base/recursive cases), and seems to have no upsides.

                                                                      My point is that the same can be said for strict function calls, apart from the “no upsides” part (upsides of strict calls include preventing space leaks, timely release of resources, etc.).

                                                                      array manipulation certainly is essential… I’d be willing to give up memory-safety… Languages without substructural types suck… makes life harder when debugging… when you want to formally prove your programs correct

                                                                      I think this is where our priorities differ. I’m happy enough with an inefficient, simple(istic) language, where I can formally reason about “getting the right answer (value)”, but I don’t care so much about how we arrive there (e.g. order of evaluation, space usage, garbage collection, etc.)

                                                                      1. 4

                                                                        Expressions under binders could be (partially) reduced, if we wanted to. (…) The reason that basically no language does this is because it’s a pretty terrible idea, not that it couldn’t be done in principle.

                                                                        Sure, but not in a call-by-value language. The distinguishing feature (a killer feature!) of call-by-value languages is that variables stand for values, which is not true if expressions are reduced under binders.

                                                                        Supercompilers do this (at compile time), as do the morte and dhall languages, for example.

                                                                        Except for simple cases where the program’s asymptotic time and space costs are not altered, this is a terrible idea. (Yes, even when the program’s performance is improved!) I want to reason about the performance of my program in terms of the language I am using, not whatever machine or intermediate code a language implementation could translate it into. The former is my responsibility, the latter is beyond my control.

                                                          2. 5

                                                            Lazy evaluation makes it hard to reason about the … termination of a program

                                                            A program terminates at least as quickly when evaluated non-strictly as it terminates when evaluated strictly, so that can’t be true.
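
                                                          A one-line illustration of the claim (a made-up example, not from the thread): under lazy evaluation the expression below returns immediately, while a strict evaluator would first try to compute the second component and loop forever.

                                                          lazyWins :: Int
                                                          lazyWins = fst (42, length [1 ..])  -- 42 lazily; diverges if both pair components must be evaluated first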

                                                            1. 3

                                                              For once I second @Yogthos. Lazy evaluation is certainly useful, but not as the default! If you care about algorithms, and IMO every programmer ought to, strict evaluation as the default makes analysis easier than lazy evaluation (“simply substitute and reduce” vs. “keep track of which thunks have been forced”) and gives tighter asymptotic bounds (worst-case vs. amortized) on running times.

                                                              1. 4

                                                                Strict evaluation should not be the default, you lose composition of algorithms.
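
                                                              A minimal sketch of that composition point (standard Prelude functions only): the three list algorithms below compose without ever materialising the infinite intermediate lists, which is exactly what gets lost if every stage must run to completion first.

                                                              firstTenEvenSquares :: [Integer]
                                                              firstTenEvenSquares = take 10 (filter even (map (^ 2) [1 ..]))

                                                              main :: IO ()
                                                              main = print firstTenEvenSquares  -- [4,16,36,64,100,144,196,256,324,400]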

                                                                1. 2

                                                                  Eta-expand, then compose. Voilà.

                                                                  1. 1

                                                                    Eta-expand then compose what?

                                                                    1. 3

                                                                      For instance, rather than say foo . bar, where foo and bar are arbitrarily complicated expressions, you say \x -> foo (bar x).

                                                                      As for composing complicated algorithms - it’s totally overrated. Instead of designing libraries around a mess of higher-order functions, oops, Kleisli arrows, oops, profunctors, oops, who knows, I’d rather reify every intermediate state of a complex operation using a custom abstract data type, and then only export procedures that perform individual atomic steps - good old-fashioned first-order procedures! Composing these atomic steps into the exact algorithm the user wants is best done by the user himself. Maximum flexibility and no abstraction overhead. But of course, you need decent support for abstract data types for this to work.

                                                                      EDIT: Added long paragraph.

                                                                      1. 4

                                                                        I also have to do that for everything in foo and bar, then manually fuse -> code duplication.

                                                              2. 2

                                                                Perhaps the issue is less direct: laziness tempts us to use infinite (co)datastructures, which we must handle carefully (e.g. don’t take the length of an infinite list).
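
                                                              A tiny sketch of the care required (made-up names): the infinite producer is fine to consume incrementally, but any operation that needs the whole structure, like length, never returns.

                                                              naturals :: [Integer]
                                                              naturals = [0 ..]

                                                              firstFive :: [Integer]
                                                              firstFive = take 5 naturals    -- [0,1,2,3,4]

                                                              neverReturns :: Int
                                                              neverReturns = length naturals -- don't force this: it diverges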

                                                                1. 1

                                                                  Correction: I should have said “when evaluated lazily” not “non-strictly”. I think as written what I said was false.

                                                                2. 2

                                                                  By the way, that’s not really what Shapr meant by “which Haskell features aren’t considered good for production code”!

                                                              1. 2

                                                                I’ve done a very similar thing using Alpine and multi-stage builds. Using alpine:edge also gives you upx, which lets you compress your binary after the build before copying it into the next stage, and using ldd you can figure out exactly which libraries are needed for dynamic linking, so you can end up with some really tiny images. Also, using GHC’s split objects you can build even smaller binaries. Maybe I should write a post about that sometime soon, once we actually put this into production…

                                                                1. 1

                                                                  Please do create that blog post! I’ve not used upx, got any links that explain that?

                                                                1. 7

                                                                  I really enjoy reading PoC || GTFO. It takes me back to Fravia’s Fortress, BBS text files, old 2600 and Phrack – telling technical stories with some flair and style.

                                                                  I have the PoC || GTFO “bible”, the leather-bound tome with some of their best work so far and it’s really fun to see all the old advertisements and diagrams in print. Highly recommended if you’re at a security conference and can score one in person.

                                                                  1. 4

                                                                    I hope they continue to publish the “bible” versions every year or two. I love having it around and discovered some articles I missed on the first pass through.

                                                                    1. 2

                                                                      Have you found any of the easter eggs in the ‘bible’? Some of those periods aren’t.

                                                                    1. 15

                                                                      I was disappointed that there were no demographic questions - that’s a vital area which the Rust survey creators put a lot of work into. I expect it was due to the survey creators not seeing it as important enough to justify the effort of doing it right, which I know is considerable. I understand that decision, but it’s frustrating, because it’s very important to some of us.

                                                                      1. 16

                                                                        I agree that demographics are important! I definitely want to include them in next year’s survey. This is the first survey I’ve ever published. I didn’t want to get the demographic questions wrong or otherwise mishandle them. That’s why I decided to focus on the technical questions. The survey was developed in the open (see this issue); next year’s will be too. I would be delighted to have you help out with the demographic questions.

                                                                        1. 13

                                                                          Just copy ours :).

                                                                            Jests aside, we’re cool with anyone taking these questions, like all our other open-source output.

                                                                            Also, there’s a huge problem currently: most languages don’t run these kinds of surveys, and those that do often don’t share a common core. It’s hard to read a lot from them.

                                                                            In the case of Rust, we’re in the lucky position of actually having run two of them, so we can at least put this year’s numbers in relation to last year’s.

                                                                            But for the rest? Are we doing better than language X? Worse? Only gut feeling.

                                                                            I don’t see this as a competition; there’s rarely been a structured approach to mapping communities.

                                                                            In any case, if you’d like to exchange experiences, please feel free to contact community-team@rust-lang.org

                                                                          1. 8

                                                                            I’m happy to hear that! And I’m also happy to be contacted to comment on concrete proposals, when next year’s survey is at that stage; I don’t have the bandwidth to be involved more than that.

                                                                            I suspect the team that did the Rust survey will also be happy to advise about these topics. I know they’ve talked about it a bunch here on lobste.rs, and those old discussions are still in the archives somewhere.

                                                                          2. -2

                                                                            You mean the part about “underrepresented groups” or whatever on the Rust survey? Why do you think that is important?

                                                                            One of my favorite things about the Haskell community is that everyone is too busy doing actual technically impressive stuff to worry about e.g. how many gay people happen to be using it.

                                                                            To be blunt, I think that sort of thinking (obsessing over whether an organic community follows one’s arbitrarily constructed ideals for what it should look like) is a cancerous mind-suck that detracts from actually productive work.

                                                                            To be a little more blunt, I think the fact that Haskell has a reputation as being extremely technical has actually helped the community a great deal, at least just by virtue of the fact that it scares away people who are primarily involved in software as a means to push some political agenda.

                                                                            Late edit: feel free to respond instead of just downvoting, would be happy to be wrong here.

                                                                            1. 33

                                                                                This is downvoted already, but I’ll bite anyway. First of all, I think your comment shows that you have no idea what we are doing there.

                                                                                Gathering demographic data is much more than what you narrow it down to. It’s telling that the only thing you can come up with is “how many gay people happen to be using it”? It’s also: where do people live, would they want to travel to a conference, etc… It cuts out the guesswork. You know what? When you measure, you also sometimes find out that things are just okay. No need to do something. Great! Sometimes, you find something odd. Then you might want to investigate. It’s a community survey. We want to know about people, not about technology. But social groups are also a thing to check.

                                                                                The Rust community also does production user surveys, which are hard-core technical and very detailed, usually specific to certain areas like embedded and/or gaming. We seek out users willing to do that and set up meetings to figure out the issues.

                                                                              To be blunt, I think that sort of thinking (obsessing over whether an organic community follows one’s arbitrarily constructed ideals for what it should look like) is a cancerous mind-suck that detracts from actually productive work.

                                                                                To be blunt too: It’s shit that you call my work “not productive”. Here’s the thing: I don’t want to contribute to rustc or Servo. Why? I code all day; I don’t want to code at night. I’m happy that other people want to. I still want to contribute to FOSS. In the last 5 years, I’ve set up 2 foundations, 11 conferences and managed around 250000 Euros to run all these things. I signed leases that could have bankrupted me. I managed to find ways to run 2-day conferences that cost 100 EUR and still cover all expenses of speakers, attendees and those that cannot pay 100 Euros. I love that work. People are indeed very fair about this. Those that have more give; those that don’t, don’t. And I want everyone to benefit from that. My whole work revolves around the question “Who’s not here, and why?”. It’s my hack. None of that is fundamentally special; other people could have done it.

                                                                              And you know what? We measure. We don’t go out “hey, please do X because I think it works”. No, we go “hey, we did X and metric Y improved”.

                                                                                It’s also amazing how many people just need a “have you considered programming might be something for you?” and then just head off and learn it. But the question needs to be asked a couple of times.

                                                                                It’s shitty to go and concern troll people doing precisely that in the name of better doing “productive work”.

                                                                                There’s no way to get me to work on other things without paying me. And you know what I like about the Rust community? They don’t concern troll me. They go: cool, if that’s what you want to do, please do it for us. It’s not like I tie up a lot of resources. Community work works best if you don’t have many cycles. We align on a goal and then we do a lot of footwork.

                                                                                Sure, there are cases where issues arise and social work becomes the focus, but that’s fine. Interestingly, the work of community workers is often to talk about issues before they become a trash fire, go to Reddit, and subsequently to 4chan.

                                                                                There’s also the “fixed cake” fallacy at work here: the belief that if we expand our community beyond a certain group, another group has to take the impact. That isn’t the case. The cake is not fixed. The market for programming languages is growing in absolute terms, and our communities are also growing in absolute terms. These are effects to be appreciated and taken into consideration.

                                                                                Different folks need to be addressed in different fashion, and that’s fine. These surveys give us a bearing on where we want to invest our time or where things just work.

                                                                              If you absolutely want to talk in programming terms, we’re profiling our work. I find it amazing that there is so much pushback when we actually check on our successes.

                                                                                It’s shitty of people to devalue that work. Work which, it has to be said, is more often done by women and people of color in many communities, many of whom are masters of it.

                                                                              There’s two options here: I do this work within a community or I don’t. It’s as simple as that. No “more productive” trolling.

                                                                                If structured work on these issues is still a “cancerous mind-suck” for you, then go ahead. But say it to my face when you meet me.

                                                                              To be a little more blunt, I think the fact that Haskell has a reputation as being extremely technical has actually helped the community a great deal, at least just by virtue of the fact that it scares away people who are primarily involved in software as a means to push some political agenda.

                                                                              I just leave this here:

                                                                              So I just met Simon Peyton Jones (creator of Haskell) and chatted with him for a bit. Asked him his thoughts on Rust and he said he thinks it seems exciting and he was jealous of the great community we have. Just thought I would let you guys know :)

                                                                              (From: https://www.reddit.com/r/rust/comments/5yo24a/meta_rust_jealousy/)

                                                                              This was, by the way, the best compliment I ever saw, I respect Simon very much.

                                                                                We have a great many people primarily interested in software working in the Rust community. But they don’t throw the community workers under the bus like you do. That’s why I run a 6-monthly Rust conference and not a 6-monthly Haskell conference.

                                                                              I love Haskell, but there’s reasons I’m not active there. Still, for anyone that wants to learn techniques and procedures, by all means get in touch.

                                                                              Communities don’t happen at random. They work through network effects and feedback and these can be moderated and improved.

                                                                              Finally, to be very blunt: you just cost me 45 minutes of community work, which I’d have preferred to fill with something productive.

                                                                              But I also think it’s worth putting it out for others to read.

                                                                              EDIT: I’d also like to state that I know quite a few people in the Haskell community who care very deeply about this. haskell-cafe is still one of my inspirations for a cool space to make. But that space is intentional, not organic.

                                                                              1. 10

                                                                                Thank you very much for writing this. It will serve as a great resource the next time I spot a similar comment in the wild.

                                                                                1. 4

                                                                                  Found this thread late, but wanted to say thanks @skade for the consistently insightful commentary on community here, and for your work in the Rust community. I don’t work with Rust much, but on the occasions when I’m working on community-building, the Rust community is one of the first places I go to for examples of how to do it well.

                                                                                  1. 1

                                                                                    Thanks for responding.

                                                                                    It’s telling that the only thing you can come up with is “how many gay people happen to be using it”?

                                                                                    That’s the only demo data you put on https://blog.rust-lang.org/2017/09/05/Rust-2017-Survey-Results.html

                                                                                    Please, tell me how you thought that was “telling”.

                                                                                    But social groups are also a thing to check.

                                                                                    Check and… what?

                                                                                    The Rust community also does production user surveys,

                                                                                    Do you see me objecting to those?

                                                                                    To be blunt too: It’s shit that you call my work “not productive”.

                                                                                    Sorry you feel that way. I think a lot of things aren’t productive, including some things I do, so you shouldn’t take it personally.

                                                                                    In the last 5 years, I’ve set up 2 foundations…

                                                                                    Cool, but you don’t need to justify your relevance because I called one of your interests into question.

                                                                                    It’s shitty to go on and concern troll around people doing precisely that to better do “productive work”.

                                                                                    Actually, I wouldn’t say anything if I thought you were just wasting your own time; beyond that, I think obsessing over demos is actively counterproductive. The utility to the community is negative, not zero.

                                                                                    There’s also the “fixed cake” fallacy at work here: The belief that if we expand our community beyond a certain group, another group has to take the impact.

                                                                                    It’s not so much “taking the cake” as “ruining the cake”. If you “expand your community” to include demo-obsessed identity politicians, the community is going to become actively worse.

                                                                                    Work which, it has to be said, is more often done by women and people of color in many communities.

                                                                                    Why did you feel this comment was relevant to the conversation? I have several hypotheses, but I’d prefer not to assume your motivations.

                                                                                    If structured work on these issues is still “cancerous mind-suck” for you, then go ahead. But say it to my face when you meet me.

                                                                                    Sure. I don’t have an aversion to arguing about the social costs of different activities in person any more than I do online.

                                                                                    That’s why I run a 6-monthly Rust conference and not a 6-monthly Haskell conference.

                                                                                    Right, that’s what I said earlier; something about Haskell pushes away people with demographic planning aspirations, which I like a lot.

                                                                                    Communities don’t happen at random… these can be moderated and improved.

                                                                                    This is just fundamentally untrue; most of the best communities are more or less spontaneous. Many communities I love (such as lobsters) are good precisely because they’re minimally moderated.

                                                                                    haskell-cafe is still one of my inspirations for a cool space to make. But that space is intentional, not organic.

                                                                                    The list is, afaik, unmoderated, and the Haskell IRC (one of the best chats on freenode) is also totally unmoderated. Your example is evidence against your claims.

                                                                                    1. 6

                                                                                      Communities don’t happen at random. They work through network effects and feedback and these can be moderated and improved.

                                                                                      This is just fundamentally untrue; most of the best communities are more or less spontaneous. Many communities I love (such as lobsters) are good precisely because they’re minimally moderated.

                                                                                      This is false for Lobsters, both historically and currently.

                                                                                      Speaking historically, you cut out the key phrase “network effects” from the quote. The Lobsters of early 2014 was a very different, nearly empty place. The current state of Barnacles is quite similar: low activity by any metric you care to measure (traffic, stories posted, comments, votes, etc.) and a negligible sense of community. Online communities start as failures and have to overcome the chicken-and-egg problem that it’s a waste of an individual’s time to participate until quite a lot of other people are already participating.

                                                                                      And on an ongoing basis, Lobsters requires daily attention to moderation and maintenance. Most of it is design, small edits, comments, and private messages. The rare, exciting things like deleted comments and banned users are the tip of an iceberg. It’s all the small, constant attention that keeps the positive feedback loops working to power a successful community instead of killing it. This is also true of Haskell-cafe.

                                                                                      The theme I take from your comments seems to be that the work you are unaware of doesn’t exist and, if it does, it must be worthless. I don’t understand that dismissive cynicism well enough to respond meaningfully to it, so all I can do is point out these surface-level inaccuracies.

                                                                                      1. 2

                                                                                        Lobsters requires daily attention to moderation and maintenance.

                                                                                        I seem to recall jcs saying that he never deleted anything if he could avoid it, and indeed that seemed to be the case. It seems that you are now taking a somewhat more active stance, but historically lobsters has had very little, if any, of what I would call active moderation.

                                                                                        Most of it is design, small edits, comments, and private messages… This is also true of Haskell-cafe.

                                                                                        SPJ sending out an email about being nice isn’t community moderation or management. Neither is general site maintenance. I’m not sure how you would conclude that I disagreed with any of these things unless we’re using very different definitions for a number of words.

                                                                                        The theme I take from your comments seems to be that the work you are unaware of doesn’t exist

                                                                                        I’m aware of all the examples you gave; they just aren’t the kind of obsessive, inorganic micromanagement I was objecting to.

                                                                                      2. 4

                                                                                        That’s the only demo data you put on https://blog.rust-lang.org/2017/09/05/Rust-2017-Survey-Results.html

                                                                                        I don’t get it. That page contains a bar chart with all sorts of demographic categories on it, just above “Diversity and inclusiveness continue to be vital goals for the Rust project at all levels.”

                                                                                        1. -2

                                                                                          Me:

                                                                                          e.g. how many gay people happen to be using it.

                                                                                          Guy who responded to me:

                                                                                          where do people live, would they want to travel to a conference, etc…

                                                                                          What kind of demo data do you see them sharing on that page? I’m not really sure what you’re confused about; you seem to be agreeing with me.

                                                                                          1. 7

                                                                                            The blog post is an editorialised overview; the 2016 one covers the demographics: https://blog.rust-lang.org/2016/06/30/State-of-Rust-Survey-2016.html#survey-demographics

                                                                                            There were no notable changes, so they weren’t mentioned again.

                                                                                        2. 4

                                                                                          As the person who started the Haskell IRC channel and an active moderator of it, I can tell you that moderation happens often. There’s a team of motivated users on the #haskell-ops IRC channel, and we have to step in more than I’d prefer.

                                                                                          Good communities require much work and regular feedback.

                                                                                  1. 8

                                                                                      This survey is also for Haskell NON-users! It asks whether you tried it and gave up, and why.

                                                                                      I like teaching Haskell (one hour earlier today at lunch, another hour scheduled after work), so I’d like to hear any negative feedback about starting with Haskell that I could hopefully help address.

                                                                                    Also, I’m glad this survey links to the GHC extensions, I learned several new things!

                                                                                    1. 3

                                                                                        Personally I find the competition from OCaml and F# to be very strong. Unlike Haskell they are impure, but they offer many of the same benefits. I also find they have a much gentler learning curve for someone coming from an imperative background.

                                                                                      I’m not so sure that laziness by default is an attractive feature in a language. I’d rather use a strict language, and opt into laziness when I need it.
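
                                                                                        To make that trade-off concrete, here’s the textbook example (a minimal Haskell sketch, not tied to anything in the survey): with laziness as the default you often end up opting back into strictness instead.

                                                                                            import Data.List (foldl')

                                                                                            -- foldl is lazy in its accumulator: it builds a long chain of
                                                                                            -- unevaluated (+) thunks before anything is added, which can
                                                                                            -- exhaust memory on large inputs.
                                                                                            lazySum :: Int
                                                                                            lazySum = foldl (+) 0 [1 .. 1000000]

                                                                                            -- foldl' forces the accumulator at every step, so the fold runs
                                                                                            -- in constant space; this is "opting back into strictness".
                                                                                            strictSum :: Int
                                                                                            strictSum = foldl' (+) 0 [1 .. 1000000]

                                                                                            main :: IO ()
                                                                                            main = print (lazySum, strictSum)

                                                                                        In a strict-by-default language the second behaviour is what you get for free, and you reach for an explicit lazy value only where you actually want it.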

                                                                                    1. 1

                                                                                      That all sounds fine, but there are definitely features missing (or at least not mentioned here) which I look for in a lightweight markup language. Those include:

                                                                                      • Footnotes/endnotes/sidenotes (I think org-mode actually supports at least one of these, though it’s not mentioned in the article)
                                                                                      • Embedded images/other media
                                                                                      • Embedded other markup - math markup is very useful (to me, at least), and I know some people have been keen on embedding graph diagrams (e.g. graphviz). This sort of feature usually translates into the ability to use plugins.

                                                                                      Of course, the more of those features you support, the less “lightweight” the markup ends up being. But that doesn’t make the bits I need any less necessary.
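
                                                                                        For what it’s worth, org-mode does cover most of that list out of the box. A rough sketch of the markup from memory (the file name is just a placeholder):

                                                                                            Footnotes are built in.[fn:1]

                                                                                            Images are plain file links that get inlined on export:
                                                                                            [[./figures/plot.png]]

                                                                                            Inline math like \(e^{i\pi} + 1 = 0\) is handed to LaTeX or MathJax.

                                                                                            [fn:1] The footnote text can live anywhere in the file.

                                                                                        Embedding other markup (Graphviz, arbitrary code) is handled by org-babel, as described in the org-mode comments nearby.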

                                                                                      1. 2

                                                                                          I’m a happy reStructuredText user.

                                                                                        Some may complain about backticks, but it gets everything done.

                                                                                          Markdown feels like a simplified version of it, and this org-mode contraption like a weird NIH-CADT take on that.

                                                                                        But the world being a mountain of shit, RST requires page-breaks to be embedded separately for each output type. I hope I’m wrong on this, but I don’t think I am.

                                                                                        1. 1

                                                                                          It’s easy to embed latex for math and graphviz for pictures in org-mode, along with a pile of other plugins. One cool feature is embedding your programming language of choice and having a following block show results for that code.
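
                                                                                            Roughly what that looks like in an org file, sketched from memory (this assumes the dot and haskell babel languages are enabled; header arguments vary by setup):

                                                                                                Some inline math: \(\sum_{k=1}^{n} k = n(n+1)/2\)

                                                                                                #+BEGIN_SRC dot :file graph.png :cmdline -Tpng :exports results
                                                                                                  digraph { edit -> compile -> test }
                                                                                                #+END_SRC

                                                                                                #+BEGIN_SRC haskell :exports both
                                                                                                  sum [1 .. 10]
                                                                                                #+END_SRC

                                                                                                #+RESULTS:
                                                                                                : 55

                                                                                            C-c C-c on a block evaluates it and refreshes its #+RESULTS: section in place.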

                                                                                          1. 2

                                                                                            It’s easy to embed latex for math

                                                                                              And not just LaTeX math, but also LaTeX environments. And if you use GUI Emacs, you can preview the equations and LaTeX environments inline in Emacs with C-c C-x C-l. E.g., here is some inline TikZ in my research notes, where the TikZ fragment is rendered and previewed in Emacs:

                                                                                            https://www.dropbox.com/s/t18zqabwg14bl2n/emacs-latex-environment.png?dl=0

                                                                                            When exporting to LaTeX, the environment is copied as-is. For HTML exports, I have set org-mode to use dvisvgm. So, every LaTeX environment/equation is saved as SVG and embedded in the resulting HTML (you can also use MathJax, but it obviously doesn’t render any non-math LaTeX).

                                                                                            One cool feature is embedding your programming language of choice and having a following block show results for that code.

                                                                                            And the result handling is really powerful. For example, you can let org-mode generate an org table from the program output. Or you can let the fragment generate an image and include the result directly in the org mode file. This is really convenient to generate and embed R/matplotlib/gnuplot graphs. You can then decide whether the code, the result, or both should be exported (to LaTeX/HTML/… output).
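
                                                                                              A sketch of the graph case (gnuplot here, with a placeholder file name): the block’s result is simply a link to the generated image, and :exports controls what ends up in the LaTeX/HTML output.

                                                                                                  #+BEGIN_SRC gnuplot :file sine.png :exports results
                                                                                                    set title "Generated on export"
                                                                                                    plot sin(x)
                                                                                                  #+END_SRC

                                                                                                  #+RESULTS:
                                                                                                  [[file:sine.png]]

                                                                                              Swap :exports results for code, both, or none depending on whether you want the source, the figure, or neither in the exported document.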

                                                                                        1. 7

                                                                                          I’d pair program with them. Seemed to work out alright in the past.

                                                                                          1. 5

                                                                                                It’s been my experience that that’s the fastest way to teach programming. Pairing shows the how and the why. I liken it to the master and journeyman approach to skills transfer.

                                                                                          1. 2

                                                                                                I blew out my left arm for 1.5 years. My solution was to 1. strap it to my chest for about a year, 2. learn right-hand Dvorak, 3. do my own physical therapy, and 4. switch to a Kinesis keyboard.

                                                                                            The largest benefit to me came from coding on a futon or couch where I could change my wrist/arm position every twenty minutes. It turned out my joint wear came from coding in exactly the same position for 16 hours a day.

                                                                                            More recently I’ve switched to ergodox keyboards. Switching to a split keyboard means my wide shoulders don’t cause pronation anymore.

                                                                                            1. 3

                                                                                              coding in exactly the same position for 16 hours a day

                                                                                                  I’m afraid that doing the same thing for basically the whole day is not healthy, for any value of “thing”.

                                                                                            1. 3

                                                                                              Re: “the medium matters”

                                                                                              I’ve always wondered how the existence of IDEs with their helpful features affected the evolution of Java’s syntax and standard library.

                                                                                              1. 1

                                                                                                    I know that C#’s LINQ syntax puts “from” first because it made IntelliSense easier in Visual Studio; related?

                                                                                              1. 1

                                                                                                That’s a sobering post, in relation to improving software.

                                                                                                    More importantly, how can I apply this to myself? My first idea is to measure how much I accomplish and attempt to increase that. I’m doing the exercises in a language book, so I could attempt to increase the number of questions answered. I also write tools for myself, so could I increase the number of GitHub issues completed?

                                                                                                Does anyone have more suggestions on how I can apply these same principles to my personal improvement as a programmer?

                                                                                                1. 1

                                                                                                  Here’s a short summary of my core method (when I’m disciplined enough to follow it):

                                                                                                  • Think of a change to your study habits which you expect could help (with retention, motivation, anything)
                                                                                                  • Come up with a measure you think will change if your study habits change (time spent, test scores)
                                                                                                  • Come up with a timeframe you expect the change to happen within (days? weeks?)
                                                                                                  • Compare the measure before and after

                                                                                                  The key here is that the choice of metric should happen after you identify the change you’re going to make.

                                                                                                      This is primarily because metrics are proxies for actual value rather than being valuable in and of themselves; value itself doesn’t have a simple definition.

                                                                                                      Once you start thinking about ‘how can I improve the number of questions answered’, you have moved on from thinking about ‘how can I learn this well’.