1. 3

    I don’t know if language choice has anything to do with it, but the fact remains: the service is now dead. ;)

    I agree that in 2012 the state of Python 3 support was a valid concern, and I have to admit I actively discouraged people from learning it at the time, but luckily that part hasn’t aged well. I still want better FP support in Python, of course.

    My biggest problem with Common Lisp is that it’s a Lisp-2. Why would anyone want different namespaces for functions and other bindings in an ostensibly functional language?

    1. 3

      Why would anyone want different namespaces for functions and other bindings in an ostensibly functional language?

      According to this site, one of the arguments was “backwards compatibility”:

      Common Lisp was the result of a compromise between a number of dialects of Lisp, most of which had separate namespaces… [transitioning to a single namespace] would introduce a considerable amount of incompatibility… There are really more than two namespaces, reducing either the benefit of collapsing these two or the cost of collapsing them all.

      which I don’t find that convincing, since plenty of Lisps used dynamic scoping, yet CL went with lexical scope anyway. But I agree: things like this, together with the frankly weird function names, always annoy me. I’d guess that in the end, CL shouldn’t be seen as a functional language, since Lisp wasn’t even originally conceived as a functional language.

      1. 9

        I’d guess that in the end, CL shouldn’t be seen as a functional language, since Lisp wasn’t even originally conceived as a functional language.

        There’s a page which I can’t find now which makes the case that the term “functional programming language” evolves over time to mean “the most functional programming language which currently exists”, so different features rotate in and out of defining what it effectively means to be a functional programming language.

        (I’d add that each generation has a different boogeyman idea, an idea which the Average Programmer regards as being “too complex” and which others write blog posts or the local equivalent to explain.)

        The first generation began when Algol introduced recursion. Recursion was also the boogeyman idea of that era.

        The second generation begins with “mature” Lisp (that is, Lisp implementations like MACLISP and Lisp Machine Lisps, not LISP 1.5) and, later, Scheme, where the defining feature is first-class functions and closures, and the boogeyman idea is Scheme’s continuations. Common features are strong dynamic typing and a universal feature is garbage collection. The new-school scripting languages (Perl, Python, Ruby) are languages of this type with object systems bolted on, and Java’s getting there, slowly.

        (Some pre-Common Lisp Lisps didn’t have closures. They had the upwards funarg problem, instead.)

        The third generation is ML and everything after, including OCaml and Haskell. Now, functional programming includes strong, static type systems with algebraic types, Hindley-Milner type inference, and, at a syntactic level, pattern-matching as flow control. The boogeyman idea is monads, and, more specifically, requiring the use of monads to mark out side-effecting code, as Haskell does.

        The underlying point is that “Functional programming is whatever your language of choice doesn’t have yet”: Recursion is now universal. It wasn’t when FORTRAN IV was “your language of choice” for a lot of programmers. Strong dynamic typing and garbage collection aren’t universal, but they’re not weird and wacky ideas only long-haired MIT AI Lab types can make sense of. Marking your side-effecting code in a machine-readable way is still weird and wacky… for now.
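
        For instance, that machine-readable marking is just Haskell’s ordinary IO type (a minimal illustration; function names here are made up):

```haskell
-- A pure value: its type promises no side effects.
pure42 :: Int
pure42 = 42

-- A side-effecting action: the IO in the type marks it,
-- and the compiler keeps the two worlds apart.
greet :: IO ()
greet = putStrLn "hello"
```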

        1. 1

          I wonder why backwards compatibility was a concern if CL could not run any of that old code unmodified. Or could it?

          1. 2

            Common Lisp could be implemented in terms of the Old Lisps with a library of functions and macros. That’s the kind of compatibility they were after.

            1. 1

              I’m pretty sure backwards compatibility with most existing implementations was exactly the goal. I’ve heard you can run early Lisp programs on CL with minimal modifications to this day, but I’ve never tried it myself.

          2. 2

            It might be slightly easier to write tooling when you know what’s a function and what’s a variable.

            On the other hand, if you use a Lisp-2 in a functional style (which suits Common Lisp poorly), you still have the problem of full analysis across the namespace boundary, plus awkward syntax. It looks like Common Lisp is meant to be used in an imperative style, without juggling functions as values.

            1. 1

              Yeah, I guess it’s my expectation that the language is supposed to be functional that makes Lisp-2 look so disappointing to me. It technically has everything for FP to work, so I feel cheated. ;)

            2. 1

              Does it really pose a significant problem?

            1. 1

              Working on a small side project (website, mobile, and JSON API) and switching my config over from Spacemacs to Doom Emacs.

              Aside: if you use Spacemacs, does it eat up your power/battery? I am on macOS, the Energy column in Activity Monitor sits steadily at 150+ during usage, and I get maybe 75 minutes of battery on a 2017 MacBook Pro.

              1. 1

                What on earth is it doing to draw power like that? I run Emacs (just my own config) and even with rcirc and mastodon.el filling a few buffers it typically sits at zero CPU usage.

                1. 1

                  You can try profiling your Emacs to see which plugin takes so much CPU: https://www.gnu.org/software/emacs/manual/html_node/elisp/Profiling.html . Usually it’s not Emacs itself but a plugin that misbehaves.

                1. 1

                  Wow, 28 people run BSD.

                  1. 3

                    Last time I posted a link to my blog on Lobsters, 1 out of 39 visitors came from OpenBSD!

                    If the link had not been about bash, I might have gotten even more visitors using OpenBSD.

                  1. 12

                    This list does not mention the most surprising (and a bit disturbing) bash feature for me: bash can open TCP/UDP sockets on its own [0] through a feature called “special redirects” (there is no actual filesystem involved here):

                    $ (exec 3<>/dev/tcp/www.google.com/80;
                       echo "GET /search?q=bash+madness HTTP/1.0" >&3;
                       echo >&3;
                       cat <&3) | less
                    

                    [0] https://dmytrish.net/blog/en/bash-tcp
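
                    A quick way to confirm that no filesystem is involved (this assumes Linux; Solaris historically shipped a real /dev/tcp device):

```shell
# /dev/tcp is synthesized by bash while parsing redirections;
# on Linux there is no such entry on disk:
if [ -e /dev/tcp ]; then
  echo "/dev/tcp exists on disk"
else
  echo "/dev/tcp is only a bash convention"
fi
```

                    The same redirection fails in plain POSIX shells such as dash, which treat /dev/tcp/… as an ordinary (nonexistent) path.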

                    1. 6

                      Please, please stop trying to shoehorn a markup language into ASCII.

                      Tab characters were originally used not to indent text, but to align columns of tables on fixed-width terminals and punched cards (the only usage for which tab characters do not suck). Much later they were repurposed as a “clever” hack to add indentation to text instead of a proper markup language. Elastic tabstops are an effort to make tabs do alignment too, introducing a block-based model into editing, which sounds like an even more “clever” hack and requires more logic from anything that displays text.

                      On the one hand, I am not a die-hard proponent of “code must be fixed-width plain text”: I find the idea of smart, helpful code self-representation theoretically appealing and I sympathize with some visual prototypes and smart editors. I am still open to the idea that someday fixed-width grids of characters will not be the optimal programming interface and source code will be smarter.

                      On the other hand, I have yet to see a practical way to represent source code as something other than plain text. vim and modern editors make editing, if not smart, at least not painful (they understand blocks and indentation), so I see no practical reason to put markup into the text itself without turning it into rich-text markup. And rich-text markup for source code is kinda nice, but not very useful on its own.

                      So, until something obviously superior makes fixed-width grids of characters clearly obsolete, I’d rather stay with the most straightforward possible representation of source code (even tab characters have too much cleverness for my taste) and rely on editors to be a little smarter, not the text itself. Formatting should happen when code is written, not when it is displayed.

                      1. 2

                        It’s a parody of the web page for a similarly-named web server.

                        – which one is meant here?

                        1. 3

                          I’m assuming GWAN.

                          Not to be confused with GWARN

                          =)

                        1. 1

                          Because debugging is a necessary evil that should be minimized as much as possible. Ideally, a debugger should be the last resort for preventing and fixing bugs.

                          On the one hand, no specification/implementation is perfect and bugs happen (especially in low-level and archaic languages). One of the signs of a novice programmer is thinking only about the happy path and writing code in an “I want to get it done” manner, without thinking about later readers who might want to understand the code. I find debug logging a great help in understanding others’ source code.

                          On the other hand, it is always better to prevent bugs than to fix them. The need to chase a specific run-time logic/spec violation should ideally be a lost cause already. There are expressive type systems, static analysis, borrow checking, immutability, high-level constructs, assertions, unit/integration testing that should prevent whole classes of bugs.

                          It is very easy to fall into “tunnel vision” in a large/legacy codebase, i.e. investigating a specific bug on a specific execution path only to fix that specific path, disregarding the big picture and architecture, never asking oneself “how come this state is even possible?”. This usually leads to a patchwork of small fixes that makes the code base even less coherent over time and produces even more bugs. Tools focused on easy debugging tend to encourage this style of programming.

                          P.S. I see a reference to a similar comment in the post and objection that not all debugging is interactive. I am happy with “we need more/better tools to understand the runtime behavior of a program” (e.g. valgrind, ThreadSanitizer, etc), but the word “debugging” already has a specific meaning and a bad reputation.

                          1. 1

                            This is the first time I heard that debugging has a “bad reputation”. There must be a culture gap here.

                          1. 1

                            The horror! Identifying bitstreams with strings?? But won’t that be terribly inefficient, for such a common operation? You know what, I’m actually glad this is a cornerstone of the Unix philosophy, otherwise it might have been optimised away.

                            – guess what, files are actually identified by inode numbers, it’s just that the virtual file system hides this fact.

                            1. 11

                              I can’t imagine a valid use case for shadowing a method argument.

                              In OCaml I use shadowing all the time: when I shadow a value, I make it impossible to access the previous value by accident. Especially if the type stays the same, I never run into the problem of accidentally passing num around where I meant to use new_num, because the scoping simply prevents me from messing up.

                              First, you need to type and read this noisy colon between names and types. What is the purpose of this extra character? Why are names separated from their types? I have no idea. Sadly, it makes your work in Kotlin harder.

                              Except for this being the accepted type annotation syntax in just about any modern typed language, I might concede the point. But then again, why does Java have all these { and }? I have no idea. Sadly, they make your work in Java harder.

                              1. 3

                                Yes, that’s how it works in Rust too: shadowing often replaces assignment (and you can even “change” type of the variable, that feels like a dynamically typed language!). To my surprise, this shadowing-as-usual makes me much more aware of shadowing and actually prevents shadowing bugs.
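
                                A minimal sketch of the pattern (the variable name is illustrative):

```rust
fn main() {
    // Parse, then shadow: the original string binding becomes
    // unreachable, so it cannot be used by accident later.
    let num = "42";
    let num: i32 = num.parse().unwrap(); // same name, new type
    let num = num + 1;                   // shadowing instead of `mut`
    println!("{num}"); // prints 43
}
```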

                                1. 2

                                  I don’t know if Kotlin has it, but one thing that helps with shadowing is a warning for unused variables. In that case, the code provided would trigger warnings, because one of the nums is never used.

                                1. 3

                                  I find this post to be too much on the “rant for the sake of rant” side:

                                  None of these “fancy” tools still builds by a traditional make command.

                                  – that can be said about almost anything not written in C. I understand where the author is coming from (the old-school universe of C packages), but come on, the world is much more complex and diverse now. Hadoop is definitely a good horse to beat in this case, but wouldn’t it be a nightmare to build and maintain with any approach?

                                  the Docker approach boils down to downloading an unsigned binary, running it

                                  – I find this statement very weird: docker build and private registries are your friends. Setting up proper CI and a private registry takes a non-trivial amount of effort, but it is a way to ensure trust and upgrades; yet the author keeps talking only about running unsigned binaries from Dockerhub, as if a proper secure process did not exist.

                                  Feels like downloading Windows shareware in the 90s to me.

                                  – with the difference that shareware was explicitly hostile to user inspection and modification, while any pre-built Docker container should have a clear, public and reproducible build process; otherwise I don’t bother running it.

                                  With signed packages, built from a web of trust.

                                  – maybe I’m being too strict here in my understanding of “a web of trust”, but has the author actually ever exchanged PGP keys with maintainers of the old-school Linux/BSD distributions? Isn’t the traditional sysadmin model also based on trust: you get an installation image from a trusted place and trust the distro maintainers? In the case of almost any distro except Gentoo, you also run binaries made by someone else (with root access to your system and no isolation whatsoever except for manually set up chroots, cgroups, namespaces, … oh wait, it smells like Docker already).

                                  Some even work on reproducible builds.

                                  – isn’t that one of the promises of Docker: reproducible builds with some effort, but without overhauling every build system in existence?

                                  Docker itself may be a short-sighted overhyped product with lots of technical weaknesses, but containerization is here to stay.

                                  1. 2

                                    but wouldn’t it be a nightmare to build and maintain with any approach?

                                    Yes, that’s the core of the problem: the design choices that make it a nightmare to build and maintain regardless of the approach taken. If containers weren’t used, people might actually feel the need to fix that.

                                    isn’t that one of the promises of Docker, reproducible builds with some effort, but without overhauling-in-depth every build system in existence?

                                    Except I need to build code on systems without Docker. Getting some Docker-built libraries into a state where I could link them into an Android app turned from “well, it’ll take an hour or so, it uses cmake, can’t possibly be that bad” into three engineers trying things out for about a week, including debugging cmake with strace and gdb, to get the build to work.

                                    1. 1

                                      – isn’t that one of the promises of Docker: reproducible builds with some effort, but without overhauling every build system in existence?

                                      How is that possible, though? Programs still need to get built through their existing infrastructure.

                                      1. 2

                                        Taking a minimal fixed image as the base for building, and paying some attention to what is downloaded from where, is much closer to reproducible builds than building a package on a continuously changing traditional system with lots and lots of libraries and admin/user activity.

                                        When you start nailing down versions of the OS and of every library to get a reproducible image, you are soon down to managing many virtual machines, and at some point plain containerization of builds makes more sense.
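
                                        As an illustrative sketch of what that nailing-down looks like in a Dockerfile (the tag and the version placeholders here are hypothetical, not real pins):

```dockerfile
# Pin the base image to a fixed tag (or better, a digest), never `latest`:
FROM debian:bookworm-slim

# Pin build dependencies to exact versions (placeholders shown):
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        gcc=<exact-version> make=<exact-version> && \
    rm -rf /var/lib/apt/lists/*

COPY . /src
WORKDIR /src
RUN make
```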

                                        1. 2

                                          But we still have the problem of building our software reproducibly.

                                          Also, I disagree that reusing the same binary artifact (in this case an OS image) counts as reproducible builds. It’s like saying installing a prebuilt piece of software is a reproducible build.

                                          1. 1

                                            I agree that full reproducibility is a noble goal and I hope that NixOS/bazel approach will be dominant someday.

                                            Right now it’s easier to deal with existing/legacy systems by building from minimal fixed images: that does not give full reproducibility in the strict sense, but having those kinda-reproducible builds in practice is way better than what we had before.

                                            1. 1

                                              Also, I disagree that reusing the same binary artifact (in this case an OS image) counts as reproducible builds. It’s like saying installing a prebuilt piece of software is a reproducible build.

                                              Seems closer to building with a prebuilt compiler to me. Of course it’s important to ensure that you can rebuild the compiler/OS image, but you wouldn’t expect to rebuild it for every build.

                                              1. 2

                                                We’ve had that state of affairs for over a decade, at least, though. I don’t know anyone who was calling building AMIs “reproducible builds”. To make my critique more nuanced: I don’t think Docker is changing the state of reproducible builds.

                                                1. 1

                                                  Usability matters. I don’t think I ever saw people using AMIs to run the build in, which is what would be the equivalent here.

                                                  1. 1

                                                    What does “run the build in” mean?

                                                    1. 1

                                                      Performing the build process - running the compiler etc. - inside the known-state system that the AMI provides. I didn’t ever see them being used that way.

                                                      1. 1

                                                        That is how one builds a new AMI: you launch an existing AMI, run your build process, then save that as a new AMI.

                                                        1. 1

                                                          I never saw that being used as the primary build process, probably because it takes so long (comparatively) to launch an AMI. Rather there would be a language-specific process that built a binary, and then a distinct AMI-building process that just stuck that binary in the AMI. Whereas docker is more practical to use as your sole build process.

                                                          1. 1

                                                            Maybe; I see AMIs being used to build AMIs quite commonly. But either way, I’m not really sure this changes my core criticism: Docker is not changing the state of the art of reproducible builds.

                                                            1. 1

                                                              As I said, usability matters; Docker doesn’t change what’s possible, but it does change what’s easy.

                                                              1. 1

                                                                That’s fine, but as I explicitly stated earlier, my point was about the ability to do reproducible builds, and docker has not changed one’s ability to accomplish that.

                                      1. 6

                                         It’s surprising how little work has been done on automatic analysis and inference of algorithmic complexity (e.g. incorporating this information into a type system or other static metadata). On the other hand, it may actually be a harder problem than it looks: even program termination is undecidable in general and often requires complex manual proofs (e.g. the termination of GCD in Coq).

                                        On a side note: algorithmic complexity of containers in the C++ standard library is documented.

                                        1. 1

                                          I’m a little out of my depth in Coq and the like, but would love if you have any pointers to things I could look into more.

                                          1. 1

                                             In Coq, the Fixpoint construct encodes bounded recursion and is checked for termination: termination must be either inferred or proved. The check is easy for structural recursion over inductive types, including Peano-encoded numbers (just iterate until the structure is exhausted), but it is not obvious for the Euclid algorithm: it is guaranteed to terminate, yet not for structural reasons, and the proof is number-theoretic.

                                            Coq uses a small set of conservative, syntactic criteria to check termination of all recursive definitions. These criteria are insufficient to support the natural encodings of a variety of important programming idioms. [0]

                                            [0] http://adam.chlipala.net/cpdt/html/GeneralRec.html

                                        1. 2

                                          Oh please, leave articles like this on the orange site.

                                          1. 1

                                            Can you elaborate please?

                                             So far it has been marked as off-topic by 9 people, and up-voted by 5.

                                             What’s wrong with it? I don’t want to post off-topic stories, but to my eyes it seemed pretty interesting if your work is related to the web: it proposes an interesting point of view about the distribution of content on the web, much in line with the experience I have had in Italy, for example.
                                             Also, it’s as off-topic as any article about Facebook and Cambridge Analytica, isn’t it?

                                            1. 2

                                               it’s as off-topic as any article about Facebook and Cambridge Analytica, isn’t it?

                                              – yes, exactly, I don’t want politics (even if it’s web-related) here, Lobsters is a technical place for me and I don’t want it to lose focus, which is very easy with politics-related discussions (everybody always has something to say about politics, including me). If I want to read yet another heated (and completely pointless) political discussion, I can go to HN.

                                               The topic of the article is only tangentially related to the web (it uses the web as an excuse to talk about geopolitics), and the labels networking and finance are completely misleading here.

                                              P.S. Thanks for the reminder, I went through recent stories about Facebook/CA and hid those too.

                                              1. 1

                                                 Let me explain the tags, so you can clarify for my future posts why they were not appropriate:

                                                 • finance (for “finance and economics”, as defined in the tag autocompletion list): the article explains how the advantage in the quality of the news used to depend on the number of correspondents that national media could pay.
                                                 • web: the article explains that the web created a new space for distributing content, and this opened room for new players.
                                                 • networking: the article explains how this state of things could lead to breaking the internet (an international network) into several national networks.

                                                 Can you clarify why each of these tags was not appropriate?
                                                 (I’m not arguing; this is a genuine question, since it’s evident that I did not understand their scope)

                                                /cc @friendlysock @pushcx

                                                1. 2

                                                  web: the article explains that the web created a new space for distributing contents and this opened a space for new players.

                                                   Depending on who is arguing, this may not be something technical about the web. As a comparison, an article about how printing presses work would be technical; an article about the fall of newspapers would not be, despite newspapers being a product of printing presses. It depends on your perspective, but I think the message from @dmytrish is that just because something happens on the web doesn’t make it technical.

                                              2. 1

                                                 Also, it’s as off-topic as any article about Facebook and Cambridge Analytica, isn’t it?

                                                Indeed! And those get flagged mercilessly as well.

                                            1. 10

                                               I don’t see the point (other than coolness for the sake of coolness) of trying to recreate a vector-based, inherently imperative language on a virtual machine that does not lend itself to imperative programming in general. BEAM primitives include no tools to deal with mutability at all, and array support is non-existent in BEAM [1]. Compiling a vector-based language to BEAM just combines the worst of both worlds.

                                              A more viable approach would be to extend BEAM itself to include efficient mutable vectorized operations, but it’s still unclear who would need a hybrid of Octave and Erlang and why.

                                              [1] “Arrays are implemented as a structure of nested tuples.” – https://stackoverflow.com/questions/28676383/erlang-array-vs-list

                                              1. 4

                                                 Maybe just for the intellectual challenge, like with the demoscene.

                                                 Alternatively, with wild speculation: you have a legacy application in Erlang but no library or language for easily expressing or checking certain types of solutions, so you build something to handle that which integrates with the legacy app. I’ve seen that kind of thing done with C, Java, etc.

                                                1. 2

                                                  It would be interesting to see some kind of cross-section between apl and a language like translucid . So instead of having mutable arrays you have multidimensional streams.

                                                  1. 1

                                                    Does it need to be truly mutable? I mean, if we are talking about a BEAM language as APL-ish as Erlang is PROLOG-ish, then we’re not really talking about underlying semantics at all.

                                                    1. 2

                                                       On HN, the author of the article mentions that he’s far more interested in a good way of expressing set operations and business rules in those terms than in writing a high-performance array processing language for Erlang.

                                                      1. 1

                                                        Makes sense.

                                                         Does BEAM have good FFI support? Generally speaking, if you want to make matrix operations efficient, you first make them possible with a friendly front-end, then replace the backend with calls to existing hyper-optimized Fortran libraries, as both numpy and julia do. It means breaking out of the VM, though, and potentially breaking some of Erlang’s guarantees.

                                                        1. 3

                                                           Erlang has ways of calling into C, but those are generally considered something to be careful with, because they can block the scheduler; NIFs marked as dirty seem to be able to trade off a little bit of performance to get those scheduling properties back.

                                                          For the use-case that the poster was thinking of, however, I think it might make more sense to start by backing the arrays with Erlang binaries, but I could be wrong. I’d certainly be interested to see what it’d look like.

                                                  1. 5

                                                    Interesting example given in haskell about type system complexity:

                                                     length (1, 2)     --> 1    wut?
                                                     length (1, 2, 3)  --> *incomprehensible error*
                                                    
                                                    1. 4

                                                       FWIW, this is all caused by the Foldable ((,) a) instance, which is already quite controversial in the Haskell community [1]. It isn’t the only controversial Foldable instance either: did you know there is a Foldable (Either a) [2]?

                                                       The main friction is that removing instances that were previously there may cause code that currently compiles to stop compiling. One suggestion I personally like is to have a compiler warning for the pathological cases [3].
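
                                                       The root of the confusion is that Foldable ((,) a) treats a pair as a one-element container holding its second component (a GHCi session, matching the transcripts above):

```haskell
ghci> length (1, 2)    -- only the second component is "in" the container
1
ghci> sum ("label", 3) -- the first component is ignored entirely
3
ghci> fmap (+1) (1, 2) -- Functor ((,) a) likewise maps the second slot only
(1,3)
```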

                                                      1. 3
                                                        <interactive>:6:1: error:
                                                            • No instance for (Foldable ((,,) t0 t1))
                                                                arising from a use of ‘length’
                                                        

                                                        — what’s incomprehensible about this?

                                                        1. 4

                                                          Hmm, well, this is what I got, which is pretty incomprehensible to someone starting out with Haskell, I think.

                                                          <interactive>:4:1: error:
                                                              • No instance for (Foldable ((,,) t0 t1))
                                                                  arising from a use of ‘length’
                                                              • In the expression: length (1, 2, 3)
                                                                In an equation for ‘it’: it = length (1, 2, 3)
                                                          <interactive>:4:9: error:
                                                              • Ambiguous type variable ‘t0’ arising from the literal ‘1’
                                                                prevents the constraint ‘(Num t0)’ from being solved.
                                                                Probable fix: use a type annotation to specify what ‘t0’ should be.
                                                                These potential instances exist:
                                                                  instance Num Integer -- Defined in ‘GHC.Num’
                                                                  instance Num Double -- Defined in ‘GHC.Float’
                                                                  instance Num Float -- Defined in ‘GHC.Float’
                                                                  ...plus two others
                                                                  ...plus three instances involving out-of-scope types
                                                                  (use -fprint-potential-instances to see them all)
                                                              • In the expression: 1
                                                                In the first argument of ‘length’, namely ‘(1, 2, 3)’
                                                                In the expression: length (1, 2, 3)
                                                          <interactive>:4:11: error:
                                                              • Ambiguous type variable ‘t1’ arising from the literal ‘2’
                                                                prevents the constraint ‘(Num t1)’ from being solved.
                                                                Probable fix: use a type annotation to specify what ‘t1’ should be.
                                                                These potential instances exist:
                                                                  instance Num Integer -- Defined in ‘GHC.Num’
                                                                  instance Num Double -- Defined in ‘GHC.Float’
                                                                  instance Num Float -- Defined in ‘GHC.Float’
                                                                  ...plus two others
                                                                  ...plus three instances involving out-of-scope types
                                                                  (use -fprint-potential-instances to see them all)
                                                              • In the expression: 2
                                                                In the first argument of ‘length’, namely ‘(1, 2, 3)’
                                                                In the expression: length (1, 2, 3)
                                                          
                                                          1. 5

                                                            Yeah, sadly GHC error messages are pointlessly hard to read.

                                                            The first should just say “There is no instance of Foldable for (a,b,c)”.

                                                            The other two are very standard messages you’ll see all the time. You don’t even need to read them. Actually, GHC should be taught to simply not produce them in cases like this. They’re a consequence of the previous error. GHC should be printing something like “I don’t know what type to assign to literal ‘1’ because there are no constraints on it. If there are other type errors fixing them may add additional constraints. If not, annotate the literal with a type like (1::Int)”.

                                                            Basically, 1 on its own doesn't mean much. It could be an integer, a double, a speed in km/s, a price, the unit vector, etc. As long as the type has a Num instance available, 1 can be converted to it. Since the type controls the behavior of that object, you need to know what it is before you can run the code.
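
                                                            A sketch of what that means in practice (the binding names are just illustrative):

                                                            ```haskell
                                                            -- The same literal 1 becomes a different value depending on
                                                            -- which Num instance the type annotation selects:
                                                            asInt :: Int
                                                            asInt = 1        -- fromInteger at type Int

                                                            asDouble :: Double
                                                            asDouble = 1     -- fromInteger at type Double

                                                            -- Annotating the tuple would silence the two "ambiguous type
                                                            -- variable" errors, leaving only the genuine one; this line is
                                                            -- still rejected, since there is no Foldable for three-tuples:
                                                            --   length ((1, 2, 3) :: (Int, Int, Int))
                                                            ```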

                                                            1. 3

                                                              I agree that the amount of information that GHC outputs is overwhelming (the GHC typechecker looks to me like a complicated solver environment that might be better served by its own interactive mode and type-level debugger, Coq-style). On the other hand, the source of the error is clearly written in two lines at the start of the message, that’s why it’s hardly “incomprehensible”.

                                                              1. 2

                                                                To me this looks like the unreadable messages from g++. You just have to learn to read through them!

                                                              2. 3

                                                                For someone new to Haskell, it's hard to fathom the idea that, were you to invest the time to learn, this would be a readable message. I think that sort of imagination barrier is why things with steep learning curves are generally less popular.

                                                                1. 9

                                                                  There’s nothing Haskell-specific about bad error messages. And it’s nothing to do with imagination or the learning curve of Haskell. It’s just the obtuse way that GHC error messages are written and the lack of interest in making them better.

                                                                  If this message said “(,,) is not an instance of Foldable” no one would find it difficult to comprehend.

                                                                  That being said. The tuple instance of Foldable is really horrible and confusing. Either length shouldn’t be in foldable and we should use some other concept (slotness or something) or that instance shouldn’t be there by default. But this has nothing to do with Haskell or type systems. It’s just as if Java had created a terrible misnamed class that gave you the wrong answer to an obvious query.

                                                            1. 10

                                                              I’m not sure why D isn’t more popular. My guess is that some teething issues back in the day limited its growth:

                                                              • std lib split with phobos/tango
                                                              • D1 vs D2 changes
                                                              • dmd compiler backend license
                                                              • lack of a build package manager (eg. pre dub)

                                                              From what I understand, these have all since been resolved. It will be interesting to see if D usage picks up, or if those early speed-bumps made too much room for other (newer) languages to pass it by (Go, Rust, Swift).

                                                              1. 7

                                                                For me, D is known and perceived just as a “better C++” (Alexandrescu is one of the brightest evangelists of D, and his C++ past does not help the language’s image), and I do not want a better C++. I do not want another deeply imperative programming language that can do some functional programming accidentally. I do not want another language born from a culture of obsessive focus on performance at the expense of everything else.

                                                                What I want is an ML-ish language for any level of systems programming (sorry, D and Go: having a GC is a non-starter) with safety and correctness without excessive rituals and bondage (like in Ada). Rust fits the bill: it’s explicitly not functional, but it has a strong safety/correctness culture.

                                                                1. 7

                                                                  Precisely because of the lack of GC and the focus on lifetimes, Rust is much more similar to (modern) C++ than D will ever be. Writing Rust is like writing correctly written C++.

                                                                  D, having a GC, leads to different programs (than C++ or Rust) because there is a global owner for resources that are only memory; e.g., slices carry no ownership information in the type system. This makes scripting very frictionless, at the cost of some more problems with non-memory resources. But not at the cost of speed.

                                                                  D has @safe which is machine-checked, opt-in memory safety.

                                                                  1. 4

                                                                    Thanks for the clarification, indeed I had a slightly wrong impression about the D programming style and ignored the profound influence of garbage collection on the programming style.

                                                                    Still, everything that I learn about D irks me (native code with GC by default? metaprogramming with native code, without dynamic eval? opt-in @safe?!) and feels too much like the old C++ culture with its insensible defaults.

                                                                    1. 2

                                                                      native code with GC by default?

                                                                      This is why Go appealed to so many people, isn’t it? This is the “new normal”. (Of course, OCaml etc. had this before.)

                                                                      dynamic eval

                                                                      This is probably a bridge too far if you are appealing to people from a C++ background (C++ programmers as a target audience is a bad market strategy for D, IMHO).

                                                                      Opt-in @safe.

                                                                      I agree with you on this.

                                                                      feels too much like the old C++ culture

                                                                      Well, the two main D architects are old C++ hands after all!

                                                                2. 6

                                                                  I think I addressed the current biggest obstacles to D adoption at the very top. I encounter them often when I try to excitedly discuss D with anyone.

                                                                  1. 6
                                                                    • Go – Too limited a language. It has good marketing behind it. Outside of having some libraries that are not available natively on D (or C), there is no good reason to use Go as a D programmer.
                                                                    • Rust – A very strong competitor. Not every program needs Rust’s level of thinking about memory (though some Rust-exclusive features do make it attractive over D)
                                                                    • Swift – Nice enough language, but still an alien on Linux (Windows? what is even that?). D has first-class support on both Linux and Windows.
                                                                    1. 3

                                                                      I agree with all those points.

                                                                      I was more thinking about folks skipping over D in the past, and how that potentially limited its uptake, than from the perspective of a D programmer looking at the current popular trends. Certainly an interesting perspective though. Thanks for sharing!

                                                                    2. 2

                                                                      Indeed all those points have since been resolved. What hasn’t been resolved is that there isn’t a simple message to give as a marketing motto, since D tends to check all the boxes in many areas.

                                                                      1. 4

                                                                        I think it should go back to its roots, which is why I happen to like it: “D: the C++ you always wanted.”

                                                                        1. 2

                                                                          But nowadays there are much more people that were never exposed to C++ in the first place.

                                                                          1. 1

                                                                            Very true!

                                                                            Also, I must be one of the few people who learned C++ and never learned C. “C/C++” has always irked me, as if the two were easily interchangeable or, worse, the same language.

                                                                    1. 6

                                                                      Small joys of full ownership over your product, indeed.

                                                                      This way of writing hardly applies to a large codebase with hundreds of people working on it every day without clear ownership boundaries.

                                                                      1. 22

                                                                        I think it comes down to this: if someone’s reading your code, they’re trying to fix a bug or otherwise trying to understand what it’s doing. Oddly, a single, large file of spaghetti code, the antithesis of everything we as developers strive for, can often be easier to understand than finely crafted object-oriented systems. I find I would much rather trace through a single source file than sift through files and directories of the interfaces, abstract classes, and factories of the sort many architects favor nowadays. Maybe I have been in Java land for too long?

                                                                        1. 10

                                                                          This is exactly the sentiment behind schlub. :)

                                                                          Anyways, I think you nail it on the head: if I’m reading somebody’s code, I’m probably trying to fix something.

                                                                          Leaving all of the guts out semi-neatly arranged and with obvious toolmarks (say, copy and pasted blocks, little comments saying what is up if nonobvious, straightforward language constructs instead of clever library usage) makes life a lot easier.

                                                                          It’s kind of like working on old cars or industrial equipment: things are larger and messier, but they’re also built with humans in mind. A lot of code nowadays (looking at you, Haskell, Rust, and most of the trendy JS frontend stuff that’s in vogue) basically assumes you have a lot of tooling handy, and that you’d never deign to do something as simple as adding a quick patch–this is similar to how new cars are all built with heavy expectation that either robots assemble them or that parts will be thrown out as a unit instead of being repaired in situ.

                                                                          1. 6

                                                                            You two must be incredibly skilled if you can wade through spaghetti code (at least the kind I have encountered in my admittedly meager experience) and prefer it to helper function calls. I very much prefer being able to consider a single small issue in isolation, which is what I tend to use helper functions for.

                                                                            However, a middle ground does exist, namely using scoping blocks to separate out code that does a single step in a longer algorithm. It has some great advantages: it doesn’t pollute the available names in the surrounding function as badly, and if turned into an inline function can be invoked at different stages in the larger function if need be.

                                                                            The best example of this I can think of is Jonathan Blow’s Jai language. It allows many incremental differences between “scope delimited block” and “full function”, including a block with arguments that can’t implicitly access variables outside of the block. It sounds like a great solution to both the difficulty of finding where a function is declared and the difficulty in thinking about an isolated task at a time.
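
                                                                            To sketch that middle ground in Haskell (since that's the language under discussion upthread, and as an illustration only, not a claim about Jai): naming each step as a local binding keeps the steps isolated without scattering them across files.

                                                                            ```haskell
                                                                            process :: [Int] -> Int
                                                                            process xs = total
                                                                              where
                                                                                -- each step gets a name without polluting the enclosing scope
                                                                                cleaned = filter (> 0) xs    -- drop non-positive readings
                                                                                scaled  = map (* 2) cleaned  -- apply a fixed gain
                                                                                total   = sum scaled
                                                                            ```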

                                                                            1. 2

                                                                              It’s a skill that becomes easier as you do it, admittedly. When dealing with spaghetti, you only have to be as smart as the person who wrote it, which is usually not very smart :D.

                                                                              As others have noted, where many fail is too much abstraction, too many layers of indirection. My all-time worst experience was 20 method calls deep to find where the code actually did something, and that’s not counting the many meaningless branches that did nothing. I actually wrote them all down on that occasion as proof of the absurdity.

                                                                              The other thing that kills you when working with others’ code is the functions/methods that don’t do what they’re named. I’ve personally wasted many hours debugging because I skipped over the function that mutated data it shouldn’t have, judging from its name. Pro tip: check everything.

                                                                              1. 2

                                                                                Or you can record what lines of code are actually executed. I’ve done that for Lua to see what the code was doing (and using the results to guide some optimizations).

                                                                                1. 1

                                                                                  Well, I wouldn’t say “incredibly skilled” so much as “stubborn and simple-minded”–at least in my case.

                                                                                  When doing debugging, it’s easiest to step through iterative changes in program state, right? Like, at the end of the day, there is no substitute for single-stepping through program logic and watching the state of memory. That will always get you the ground truth, regardless of assumptions (barring certain weird caching bugs, other weird stuff…).

                                                                                  Helper functions tend to obscure overall code flow since their point is abstraction. For organizing code, for extending things, abstraction is great. But the computer is just advancing a program counter, fiddling with memory or stack, and comparing and branching. When debugging (instead of developing), you need to mimic the computer and step through exactly what it’s doing, and so abstraction is actually a hindrance.

                                                                                  Additionally, people tend to do things like reuse abstractions across unrelated modules (say, for formatting a price or something), and while that is very handy it does mean that a “fix” in one place can suddenly start breaking things elsewhere or instrumentation (ye olde printf debugging) can end up with a bunch of extra noise. One of the first things you see people do for fixes in the wild is to duplicate the shared utility function, and append a hack or 2 or Fixed or Ex to the function name and patch and use the new version in their code they’re fixing!

                                                                                  I do agree with you generally, and I don’t mean to imply we should compile everything into one gigantic source file (screw you, JS concatenators!).

                                                                                  1. 3

                                                                                    I find debugging much easier with short functions than stepping through imperative code. If each function is just 3 lines that make sense in the domain, I can step through those and see which is returning the wrong value, and then I can drop frame and step into that function and repeat, and find the problem really quickly - the function decomposition I already have in my program is effectively doing my bisection for me. Longer functions make that workflow slower, and programming styles that break “drop frame” by modifying some hidden state mean I have to fall back to something much slower.

                                                                                    1. 2

                                                                                      I absolutely agree with you that when debugging, it boils down to looking and seeing, step by step, what the problem is. I also wasn’t under the impression that you think that helper functions are unnecessary in every case, don’t worry.

                                                                                      However, when debugging, I still prefer helper functions. I think it’s that the name of the function will help me figure out what that code block is supposed to be doing, and then a fix should be more obvious because of that. It also allows narrowing down of an error into a smaller space; if your call to this helper doesn’t give you the right return, then the problem is in the helper, and you just reduced the possible amount of code that could be interacting to create the error; rinse and repeat until you get to the level that the actual problematic code is at.

                                                                                      Sure, a layer of indirection may kick you out of the current context of that function call and perhaps out of the relevant interacting section of the code, but being able to narrow down a problem into “this section of code that is pretty much isolated and is supposed to be performing something, but it’s not” helps me enormously to figure out issues. Of course, this only works if the helper functions are extremely granular, focused, and well named, all of which is infamously difficult to get right. C’est la vie.

                                                                                      Anyways, you can do that with a comment and a block to limit scope, which is why I think that Blow’s idea about adding more scoping features is a brilliant one.

                                                                                      On an unrelated note, the bug fixes where a particular entity is just copied and then a version number or what have you is appended hit way too close to home. I have to deal with that constantly. However, I am struggling to think of a situation where just patching the helper isn’t the correct thing to do. If a function is supposed to do something, and it’s not, why make a copy and fix it there? That makes no sense to me.

                                                                                      1. 1

                                                                                        It’s a balance. At work, there’s a codebase where the main loop is already five function calls deep, and the actual guts, the code that does the actual work, is another ten function calls deep (and this isn’t Java! It’s C!). I’m serious. The developer loves to hide the implementation of the program from itself (“I’m not distracted by extraneous detail! My code is crystal clear!”). It makes it so much fun to figure out what happens exactly where.

                                                                                  2. 2

                                                                                    A lot of code nowadays (looking at you, Haskell, Rust, and most of the trendy JS frontend stuff that’s in vogue) basically assumes you have a lot of tooling handy, and that you’d never deign to do something as simple as adding a quick patch

                                                                                    I do quick patches in Haskell all the time.

                                                                                    1. 1

                                                                                      I’ll add that one of the motivations for improved structure (e.g. functional programming) is to make it easier to do those patches, especially anything bringing extra modularity or isolation of side effects.

                                                                                  3. 6

                                                                                    I think it’s a case of OO in theory versus OO as dogma. I’ve worked in fairly object-oriented codebases where the class structure really was useful in understanding the code: classes had the responsibilities their names implied, and those responsibilities pertained to the problem the total system was trying to solve (i.e. no abstract bean factories; no business or OSS effort has ever had a fundamental need for bean factories).

                                                                                    But of course the opposite scenario has been far more common in my experience, endless hierarchies of helpers, factories, delegates, and strategies, pretty much anything and everything to sweep the actual business logic of the program into some remote corner of the code base, wholly detached from its actual application in the system.

                                                                                    1. 7

                                                                                      I’ve seen bad code with too many small functions and bad code with god functions. I agree that conventional wisdom (especially in the Java community) pushes people towards too many small functions at this point. By the way, John Carmack discusses this in an old email about functional programming stuff.

                                                                                      Another thought: tooling can affect style preferences. When I was doing a lot of Python, I noticed that I could sometimes tell whether someone used IntelliJ (an IDE) or a bare-bones text editor based on how they structured their code. IDE people tended (not an iron law by any means) towards more, smaller files, which I hypothesized was a result of being able to go to a definition more easily. Vim/Emacs people tended instead to lump things into a single file, probably because both editors make scrolling to lines so easy. Relating this back to Java, it’s possible that with everyone (with a few exceptions) in Java land using heavyweight IDEs (and also because Java requires one class per file), there’s a bias towards smaller files.

                                                                                      1. 1

                                                                                        Yes, vim also makes it easy to look at different parts of the same buffer at the same time, which makes big files comfortable to use. And vice versa, many small files are manageable, but more cumbersome in vim.

                                                                                        I miss the functionality of looking at different parts of the same file in many IDEs.

                                                                                    2. 3

                                                                                      Sometimes we break things apart to make them interchangeable, which can make the parts easier to reason about, but can make their role in the whole harder to grok, depending on what methods are used to wire them back together. The more magic in the re-assembly, the harder it will be to understand by looking at application source alone. Tooling can help make up for disconnects foisted on us in the name of flexibility or unit testing.

                                                                                      Sometimes we break things apart simply to name / document individual chunks of code, either because of their position in a longer ordered sequence of steps, or because they deal with a specific sub-set of domain or platform concerns. These breaks are really in response to the limitations of storing source in 1-dimensional strings with (at best) a single hierarchy of files as the organising principle. Ideally we would be able to view units of code in a collection either by their area-of-interest in the business domain (say, customer orders) or platform domain (database serialisation). But with a single hierarchy, and no first-class implementation of tagging or the like, we’re forced to choose one.

                                                                                      1. 4

                                                                                        Storing our code in files is a vestige of the 20th century. There’s no good reason that code needs to be organized into text files in directories. What we need is a uniform API for exploring the code. Files in a directory hierarchy is merely one possible way to do this. It happens to be a very familiar and widespread one but by no means the only viable one. Compilers generally just parse all those text files into a single Abstract Syntax Tree anyway. We could just store that on disk as a single structured binary file with a library for reading and modifying it.
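
                                                                                        As a toy illustration of the idea (all names here are invented), a compiler-style tree can round-trip through a structured on-disk form without any concrete syntax involved; a sketch in Haskell:

                                                                                        ```haskell
                                                                                        -- A miniature expression AST.
                                                                                        data Expr
                                                                                          = Lit Int
                                                                                          | Add Expr Expr
                                                                                          | Mul Expr Expr
                                                                                          deriving (Show, Read, Eq)

                                                                                        -- Serialise and reload the tree directly. A real system would use
                                                                                        -- a binary format and a richer query/edit API instead of text.
                                                                                        save :: Expr -> String
                                                                                        save = show

                                                                                        load :: String -> Expr
                                                                                        load = read

                                                                                        roundTrips :: Bool
                                                                                        roundTrips =
                                                                                          let e = Add (Lit 1) (Mul (Lit 2) (Lit 3))
                                                                                          in  load (save e) == e
                                                                                        ```

                                                                                        Tools would then query and edit this structure through the library, rather than re-parsing text files.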

                                                                                        1. 3

                                                                                          Yes! There are so many more ways of analysis and presentation possible without the shackles of text files. To give a very simple example, I’d love to be able to substitute function calls with their bodies when looking at a given function - then repeat for the next level if it wasn’t enough etc. Or see the bodies of all the functions which call a given function in a single view, on demand, without jumping between files. Or even just reorder the set of functions I’m looking at. I haven’t encountered any tools that would let me do it.

                                                                                          Some things are possible to implement on top of text files, but I’m pretty sure it’s only a subset, and the implementation is needlessly complicated.
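                                                                                          For what it’s worth, a rough version of the “all callers in one view” idea can be built on top of text files with Python’s `ast` module. This is only a sketch under simplifying assumptions: it matches bare call names rather than resolved references, and the sample functions are invented:

```python
# Sketch: show the bodies of every function that calls a given function,
# in a single view, using Python's own parser. Only bare-name calls are
# detected; real tooling would resolve imports, methods, and aliases.
import ast
import textwrap

def callers_of(source, target):
    """Return the source of every top-level function that calls `target`."""
    tree = ast.parse(source)
    found = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = [
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
            if target in calls:
                found.append(ast.get_source_segment(source, node))
    return found

code = textwrap.dedent("""
    def save(order):
        serialize(order)

    def load(order_id):
        pass

    def archive(order):
        serialize(order)
""")

# One view of every caller of serialize(), without jumping between files:
for body in callers_of(code, "serialize"):
    print(body, end="\n\n")
```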

                                                                                          1. 1

                                                                                            Anyone who truly thinks this would be better ought to go learn some lisp.

                                                                                            1. 1

                                                                                              I’ve used Lisp but I’m still not sure what your point is here. Care to elaborate?

                                                                                              1. 2

                                                                                                IIRC, the s-expr style that Lisp is written in was originally meant to be the AST-like form used internally. The original plan was to build a more sugared syntax (the M-expressions) on top of it, but people got used to writing the s-exprs directly.

                                                                                                1. 1

                                                                                                  Exactly this: any binary representation would presumably be the AST in some form, which Lisp s-expressions already are, serialized to and deserialized from text. Specifically:

                                                                                                  It happens to be a very familiar and widespread one but by no means the only viable one.

                                                                                                  XML editors, which provide a tree view of the data, come to mind as one possible alternative. I personally would not call that viable, and certainly not desirable. Perhaps you mean other graphical programming environments; I haven’t found any (that I’ve tried) to be usable for real work. Maybe you have something specific in mind? Excel?

                                                                                                  Compilers generally just parse all those text files into a single Abstract Syntax Tree anyway

                                                                                                  The resulting parse can depend on the environment in many languages. For example, the C preprocessor can generate vastly different code depending on how system variables are defined. This is desirable behavior for OS/system-level programs. The point here is that, in at least this case, the source actually encodes several different programs, or versions of programs, not just one.

                                                                                                  My experience with the notion that text is somehow undesirable for programs is colored by using visual environments like Alice, and by trying to coerce GUI builders into the layout I want. Text really is easier than fighting arbitrary tools. Plus, any non-text representation would have to solve diffing and merging for version control, and tree diffing is a much harder problem than text diffing.

                                                                                                  People who decry text would have much more credibility with me if they addressed these kinds of issues.

                                                                                            2. 1

                                                                                              Yes, I’m 100% in agreement.

                                                                                          2. 2

                                                                                            That’s literally true! I work with some old code, and things are really easy: there are lots of files, but they’re divided up in such an easy way.

                                                                                            On the other hand, in a new project that is split into lots of tiers with strict guidelines, it becomes hard for me to even find the line where a bug occurs.

                                                                                          1. 2

                                                                                            A link has one or more targets, represented by a permanent address combined with an optional start offset and length (in bytes).

                                                                                            In a UTF-8 world, shouldn’t this be characters rather than bytes?
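                                                                                            The distinction bites as soon as the text leaves ASCII; a quick Python illustration:

```python
# Byte offsets and character offsets diverge for any non-ASCII text.
text = "naïve café"
encoded = text.encode("utf-8")

print(len(text))     # 10 characters
print(len(encoded))  # 12 bytes: 'ï' and 'é' each take two bytes in UTF-8

# A byte span chosen without care can split a code point in half:
print(encoded[:3].decode("utf-8", errors="replace"))  # 'na' plus a replacement char
```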

                                                                                            1. 1

                                                                                              PNG and other binary formats are not exactly UTF-8 (and I assume you still want to access images via hyperlinks).

                                                                                              1. 1

                                                                                                True, but does it make sense to link to a byte position within an image or binary? In the latter case, perhaps — but might it not also make more sense to link to a character within the display version of the binary?

                                                                                                1. 2

                                                                                                  In my work with Xanadu, we supported bytes for all types of spans but also supported characters for text. Specs indicated we should also support (x,y) coordinate bounding boxes for images, time (in seconds, minutes, etc.) for audio, and bounding-box/time composite addresses for video. (Span format was shared between links and transclusions, and so this affected fetch and cache behavior.)

                                                                                                  These specifications were not fully implemented by my project, and I’m unaware whether any of the other simultaneous implementations actually supported them. Part of the reason is that we wanted to fetch and cache only the necessary parts of the target. This works great for text (so long as you’re using a protocol that lets you fetch only certain byte spans – we supported HTTP, which theoretically supports this). It’s a lot harder for compressed formats. Ultimately, we would either need to perform a bunch of round-trips fetching headers and the like in order to store the minimum, or we would need to fetch large but not-necessarily-complete chunks (enough to identify frames). Either way would involve extremely format-dependent code, and we weren’t terribly comfortable with what amounts to a piecemeal rewrite of FFMPEG to expose strange internals for our totally-unsupported special case. (We had enough of that as it is, with trying to convince OpenGL to render text on a texture in a way that was portable & fast enough to use as a text editor while using the modern pipeline!)

                                                                                                  Ultimately, we needn’t have worried – we only very rarely ran into third party HTTP servers that let us request particular byte spans, so most of the time we had to download whole files anyway. (I was pushing for gopher and IPFS support – gopher because it lacked the overhead of HTTP, and IPFS because it actually guaranteed permanent addresses – but neither of these actually supports fetching arbitrary byte spans anyhow.)
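                                                                                                  The byte-span mechanism in question is HTTP’s Range request: the client asks for an inclusive byte range, and a 206 Partial Content response reports what was actually served in Content-Range. A minimal sketch (the helper names are invented, but the header formats follow HTTP/1.1):

```python
# Sketch of HTTP byte-span fetching: building a Range request header and
# parsing the Content-Range header of a 206 response. Helper names are
# invented for illustration; the header syntax follows HTTP/1.1.
import re

def range_header(start, length):
    """Request `length` bytes starting at `start` (the Range end is inclusive)."""
    return {"Range": f"bytes={start}-{start + length - 1}"}

def parse_content_range(value):
    """Parse 'bytes start-end/total' from a 206 Partial Content response."""
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(\d+)", value)
    if m is None:
        raise ValueError(f"not a byte Content-Range: {value!r}")
    start, end, total = map(int, m.groups())
    return start, end, total

print(range_header(100, 50))                      # {'Range': 'bytes=100-149'}
print(parse_content_range("bytes 100-149/1234"))  # (100, 149, 1234)
```

Servers are free to ignore Range and send the whole resource with a 200, which is exactly the behavior described above.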

                                                                                                  As a result of working on that, I’m wary of demanding that future systems support anything so rich as semantically-meaningful units on compressed formats. (After all, transclusion and bidirectional links are easy by comparison, and yet the web doesn’t support them and neither do many other “hypertext” systems!)

                                                                                            1. 12

                                                                                              I’d prefer it if there was no design. Just the content.

                                                                                              1. 16

                                                                                                Which you get with a (full) RSS feed.

                                                                                                1. 7

                                                                                                  Yup!

                                                                                                  Take care if you use Hugo: the default RSS template does not render the full article.

                                                                                                  Here’s a modified one that renders the full article in the feed:

                                                                                                  <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
                                                                                                    <channel>
                                                                                                      <title>{{ .Title}} </title>
                                                                                                      <link>{{ .Permalink }}</link>
                                                                                                      <description>Recent posts</description>
                                                                                                      <generator>Hugo -- gohugo.io</generator>{{ with .Site.LanguageCode }}
                                                                                                      <language>{{.}}</language>{{end}}{{ with .Site.Author.email }}
                                                                                                      <managingEditor>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</managingEditor>{{end}}{{ with .Site.Author.email }}
                                                                                                      <webMaster>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</webMaster>{{end}}{{ with .Site.Copyright }}
                                                                                                      <copyright>{{.}}</copyright>{{end}}{{ if not .Date.IsZero }}
                                                                                                      <lastBuildDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</lastBuildDate>{{ end }}
                                                                                                      {{ with .OutputFormats.Get "RSS" }}
                                                                                                          {{ printf "<atom:link href=%q rel=\"self\" type=%q />" .Permalink .MediaType | safeHTML }}
                                                                                                      {{ end }}
                                                                                                      {{ range .Data.Pages }}
                                                                                                      <item>
                                                                                                        <title>{{ .Title }}</title>
                                                                                                        <link>{{ .Permalink }}</link>
                                                                                                        <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
                                                                                                        {{ with .Site.Author.email }}<author>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</author>{{end}}
                                                                                                        <guid>{{ .Permalink }}</guid>
                                                                                                        <description>{{ .Content | html }}</description>
                                                                                                      </item>
                                                                                                      {{ end }}
                                                                                                    </channel>
                                                                                                  </rss>
                                                                                                  

                                                                                                  I probably should have included that in the article… Too late now.

                                                                                                  1. 3

                                                                                                    Why is it too late now?

                                                                                                    1. 5

                                                                                                      I was about to answer laziness and realized this is not something that should be celebrated.

                                                                                                      It’s now included.

                                                                                                      1. 1

                                                                                                        Thank you for adding it, it will be handy for me when I go back to your post in a few weeks and look for how to do this. :)

                                                                                                  2. 1

                                                                                                    Blogs should just be an XSLT transform applied to RSS ;)

                                                                                                  3. 3

                                                                                                    The built-in Firefox Reader mode is a godsend. I feel much more comfortable reading long texts in the same font, page width, background color + the scrollbar on the right now gives me a pretty good estimate of reading time.

                                                                                                    1. 1

                                                                                                      Weirdly, though, that all comes down to the good design of Firefox reader mode :D.

                                                                                                    2. 1

                                                                                                      RSS, lightweight versions (light.medium.com/usual/url ?), heck, even Gopher does the job perfectly! We need these things.

                                                                                                      1. 2

                                                                                                        Yes, that’s where these initial lists come from, but only a few have been added so far. Some information also comes from GitHub Explore. The point of this basic website was simply to add:

                                                                                                        • A fixed TOC on the right
                                                                                                        • Stars / forks to links that point to other GitHub repos
                                                                                                        • Search (although the current implementation is limited)