1. 21

    Yeah, 72 is much more reasonable. We need hard limits, or at least ‘annoying’ conventions, to keep the horrors at bay. The human eye scans lines best at around 72 characters wide, and we should put human readability of our code before other concerns. I have worked on projects that had hugely long lines and there is no excuse. If a language or tool or whatever can’t deal with human limits, find or make another tool. Linus’ current workstation should not be the standard.

    That being said, I think Racket has made a reasonable compromise:

    A line in a Racket file is at most 102 characters wide.

    If you prefer a narrower width than 102, and if you stick to this width “religiously,” add a note to the top of the file—right below the purpose statement—that nobody should violate your file-local rule.

    This number is a compromise. People used to recommend a line width of 80 or 72 columns. The number is a historical artifact. It is also a good number for several different reasons: printing code in text mode, displaying code at reasonable font sizes, comparing several different pieces of code on a monitor, and possibly more. So age doesn’t make it incorrect. We regularly read code on monitors that accommodate close to 250 columns, and on occasion, our monitors are even wider. It is time to allow for somewhat more width in exchange for meaningful identifiers.

    https://docs.racket-lang.org/style/Textual_Matters.html

    1. 25

      The human eye scans lines best at around 72 characters wide

      I would like a 72-character line width, but with indentation ignored. It would make each nested block readable on its own.

      Example with a 40-character width, ignoring indentation whitespace:

      Lorem ipsum dolor sit amet, consectetur
      adipiscing elit. Donec sit amet augue
      felis. Suspendisse a ipsum et sem auctor
      porttitor in ac lacus. 
      
          Curabitur condimentum augue diam, ut
          molestie nibh faucibus nec. Aliquam
          lacinia volutpat tellus, non
          sollicitudin nulla luctus sit amet.
      
              Aenean consequat ipsum sem, ac rutrum
              leo dictum at. Suspendisse purus dolor,
              condimentum in ultrices vel, egestas vel
              ipsum.
      

      Versus a 40-character width, including indentation:

      Lorem ipsum dolor sit amet, consectetur
      adipiscing elit. Donec sit amet augue
      felis. Suspendisse a ipsum et sem auctor
      porttitor in ac lacus. 
      
          Curabitur condimentum augue diam, ut
          molestie nibh faucibus nec. Aliquam
          lacinia volutpat tellus, non
          sollicitudin nulla luctus sit amet.
      
              Aenean consequat ipsum sem, ac
              rutrum leo dictum at.
              Suspendisse purus dolor,
              condimentum in ultrices vel,
              egestas vel ipsum.
      
      1. 18

        The human eye scans lines best at around 72 characters wide

        With monospace fonts? Or proportional ones? With large text or small?

        With English prose, poetry, or with C code? With hyphenation? Indentation?

        I’ve found that recommendation to be pretty good for English text with a medium-size proportional font. I do not find it works as well for code.

        1. 5

          100% agreed. As I argued in the comments above, people don’t read code the same way that they read prose, and so I would not try to generalize a heuristic meant for prose to code.

          1. 3

            I agree. Reading written text involves repeatedly shifting your focus to the line below. A consistent and short line length in that case is very important. Code is not the same. It’s far more common reading code to study a single line or small block, and in that case, I find that arbitrarily wrapping a line to stay within 80 characters usually breaks consistency and harms readability. I used to subscribe to the 80 character limit until I realised this difference. We don’t read code like we read written text.

            Terminal/editor windows side by side is a fine point, but in general the vast majority of lines of code are short anyway, often well under 80 characters. If a few lines happen to wrap on your display, I hardly think that’s going to completely kill readability, and it’s certainly a trade-off I’m willing to make. If many lines are wrapping then yes, you probably have a problem with your code formatting (or your code in general).

            It’s the hard limit that I take issue with. Back when I wrote my own code like this, all too often I would find myself renaming identifiers (usually for the worse) among other arbitrary and unhelpful things, just to fit some code within a line without wrapping. I wouldn’t be surprised if more often than not this is the outcome for many others who attempt this, and it’s almost certainly a net negative for readability.

            Dropping the idea entirely has been a huge relief. One less thing to think about. Most of my code still consists of short lines, as it always did, and as most code naturally does. But when I need to, and it makes sense to write a longer line, I don’t spend a second agonising over whether it fits within some special number of characters, and instead focus entirely on whether it in itself is clear and understandable.

          2. 10

            I want to reinforce your comment that

            The human eye scans lines best at around 72 characters wide, and we should put human readability of our code before other concerns.

            Recently, I have been trying to optimise my on-screen experience and I found a series of peer-reviewed articles with recommendations that improved my experience.

            In one of those, it is indeed claimed that more than 72 and fewer than 80 characters (77, to be precise) is the optimal line length for clear readability.

            The study was done with dyslexic readers, and I was never diagnosed as such. But it works for me, and I tend to believe it works for most people as well.

            1. 1

              Yeah, what I’ve read confirms this. I think the optimal width of a text column can vary based on the character width. For example, in CSS I typically set something like:

               p { max-width: 32em; }
              
              1. 4

                You can also use 72ch to set it based on the width of the 0 character (which will approximate it).

          1. 6

              I see a lot of haters here, which makes me a bit sad. So I want to thank you for your article and say that there are also people who like ASN.1. It is a very powerful (meta)format.

            1. 6

              there are also people who like ASN.1

              I am one of them. I am on a project using a protocol specified in (a small variation of) ASN.1, and it has been an interesting experience, one which made me wonder why people are so crazy about JSON everywhere.

              In fact, a couple of weeks ago, when I saw a post about MessagePack in here, I wondered why the authors decided to re-invent the wheel.

              But, well, there are so many serialisation formats, and different people seem to like the different flavors.

              If I can, I will keep using it. :)

              1. 0

                Add two more consecutive optional fields with an identical type to one of your structs and you’ll understand.

            1. 9

              Here’s some that I had my eye on:

                  • Chapel: a programming language designed for parallel computing. A lot of data locality constructs as part of the core language, and I like how they do configuration.
              • P: A language with first class state machines! Might not satisfy the “beginner documentation” requirement though.
              • Esterel: real niche, but the only free, battle-tested, synchronous programming language I know about.
              1. 1

                This P language seems really nice! I will give it a try!

                Thank you!

              1. 3

                I’ll try to give a sketch of an Emacs-centered workflow. My main tools are (surprisingly) GNU Emacs and xterm+bash. I don’t need much more than that.

                When working on something, I usually have one of two setups:

                    • For compiled languages, I split Emacs into three windows (for non-Emacs users: what the window manager calls a “window” is a “frame” in Emacs-speak, while an Emacs “window” is a pane inside that frame). Usually a file buffer on one side of the screen, and a vertically split window with a compile buffer on top and a terminal*/dired below. These can be rotated easily (using window-swap-states), but I would usually order them so that rotating through the buffers makes sense. Binding recompile to a handy key is very useful; in my case I use the F2 key.
                • For interpreted languages, assuming there is proper Emacs support, I would have mostly the same setup, except that the compile buffer would be replaced by a REPL, and I would tend to not have a terminal open.

                    Regarding Emacs extensions, projectile is very useful, but I’m not an advanced user. My main commands are projectile-find-file and projectile-kill-buffer. Sometimes I forget to use the latter, and then I end up with 700 open buffers, like a few weeks ago. Other noteworthy packages are avy, for jumping to symbols but also copying and moving them around; ivy/swiper, for faster completion; and perhaps dumb-jump, for looking up definitions without having to set anything up. Any other tips would be language specific.

                This is my configuration, in case anyone cares, but I’m in the process of re-writing and shrinking it, so it’s not as tidy as it used to be.

                * In my case Eshell.

                1. 1

                  Out of curiosity: do you use xterm so you can use Alt + B and Alt + F to navigate readline?

                  At least in my case, that’s mostly the reason I use it over the other existing options.

                  1. 2

                        No, there’s no real reason. Sometimes I get bored and switch back to MATE’s terminal (or when I need to resize my terminal for an audience to recognize anything). But I just tried Alt-B and Alt-F in mate-terminal, and it worked, so I’m not sure what you’re talking about. Have you disabled the menu bar?

                    1. 1

                      But I just tried Alt-B and Alt-F in mate-terminal, and it worked, so I’m not sure what you’re talking about. Have you disabled the menu bar?

                      I am a bit embarrassed now, because I just tested it on gnome-terminal (on i3) and it has worked as well.

                          I had this memory that I switched to xterm for that reason, though. Perhaps my memory is playing tricks on me.

                          Anyhow, I learned it works on gnome-terminal as well—and possibly everywhere else. :)

                      1. 1

                        The behaviour is totally unintuitive, and I remember just ignoring it for ages, so there’s nothing to be ashamed of :)

                  2. 1

                          It’s interesting how different our setups are considering we both use Emacs; it fascinates me, every time I come across another setup, how varied the users of Emacs are.

                          Thank you for your configuration, by the way. I can’t remember how I first came across it, but I’ve found a few nice settings in there, and I like your list of inspirational configurations, which I’ve also found a few nice things in. When you switched to Rmail I got quite excited; I barely know anyone else who uses it!

                    1. 2

                      Considering how “low-level” Emacs primitives are (buffer, window, frame, …) it doesn’t even surprise me. Most people in this thread use a window manager, and look at all the variation there!

                      Thank you for your configuration, by the way

                      It means a lot that people find what I do helpful! A lot of knowledge, methods and tricks about Emacs is lost in old configuration files or blog posts, so trying to dig this up for others is kind of interesting, especially because workflows have changed so much over time.

                      1. 1

                        Considering how “low-level” Emacs primitives are… Most people in this thread use a window manager

                        Considering I spend a lot of time proclaiming that Emacs is nothing more than an interface, I’m surprised I never made this comparison myself!

                        Yeah, maybe it’s quite sad but I do like to print off people’s init files and just read through them. There’s a lot of good stuff in a lot of them.

                  1. 10

                    My development environment is kinda like the UNIX philosophy version of an IDE. I use:

                    • Fish as my shell
                    • Kakoune as my text editor
                    • tmux for windowing and persistent session stuff
                    • fzf for fuzzily changing directories and picking files
                    • ripgrep and fd for searching
                    • kitty as my terminal emulator

                    I’m probably forgetting other things, but those are the important ones. My personal laptop runs macOS, but all other machines run NixOS.

                    I work almost entirely with Haskell and Nix, so language specific tools include stuff like Cabal, ghcid, ormolu, and HLint. I also use home-manager for declarative package management, as opposed to the more imperative nix-env style.

                    1. 2

                      As @damantisshrimp said, I also have almost the same configuration:

                      • s/fish/bash/
                      • s/Kakoune/Neovim/
                      • s/kitty/uxterm/

                        Some other things that I think are worth mentioning:

                      • i3 as window manager;
                      • Zeal for offline documentation;
                      • w3m for quick internet surfing (mostly StackOverflow and online documentation);
                        • it might sound crazy, but the amount of time I spend daily using find (applying operations to sets of files), awk (querying), sed (simple modifications), and ed (more complex modifications) means they deserve a special entry in this list.
                      1. 2

                        I have almost the same environment, except I couldn’t completely switch to Kakoune so I went back to the comfort of Neovim.

                        Great setup ;)

                      1. 4

                          Does anyone know of examples of conservative static analysis tools that check for termination in general-purpose languages?

                        Isn’t it exactly the Halting Problem?

                          For trivial examples (e.g., while(true)), grep can find them. The problem is exactly the non-trivial cases, and that is what I think the author is asking advice for.

                        I still think Dhall is the best philosophical road in configuration files.

                        1. 1

                          Hey, I’m the author!

                            Yep, if it weren’t for “conservative”, it would be the halting problem. “Conservative” means it’s allowed to reject a valid terminating program (because it’s not smart enough to figure out why it’s guaranteed to terminate). But if your program does pass the check, it’s 100% guaranteed to terminate.

                          Some examples would be:

                            • a simple example of a conservative subset of a language would be setting a timeout for a program: if your program takes a millisecond longer than the timeout, it is rejected as invalid

                          • a more sophisticated example is a language that doesn’t support loops, but uses something like structural recursion

                            Or you could just not allow recursion or control flow in the first place, which is what Dhall does.

                            So what I meant by a ‘conservative subset’ was: you can restrict your existing language (e.g., Python) to a subset that is provably terminating. For example, if you forbid using for/while loops, eval/exec, imports, and the standard library, you might get a terminating subset of the language (of course you have to analyse it thoroughly). That way:

                          • your config is guaranteed to terminate in finite time
                          • you don’t have to reinvent the syntax for your config – you benefit from the existing tools for your programming language

                          Not sure, maybe “conservative” is not the right word for that, but I think it’s what is usually used when referring to static analysis tools, e.g. here
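
                            A minimal Haskell sketch of the “no loops, just structural recursion” idea above (purely illustrative, not tied to any real tool): a config value is a finite tree of data, and the evaluator only recurses into sub-trees, so it always terminates.

                                -- A toy config language: no functions, no loops, just data.
                                data Config
                                  = CStr String
                                  | CNum Integer
                                  | CList [Config]
                                  | CConcat Config Config    -- the only built-in operation

                                -- Structural recursion over a finite tree: termination comes for free.
                                render :: Config -> String
                                render (CStr s)      = s
                                render (CNum n)      = show n
                                render (CList cs)    = unwords (map render cs)
                                render (CConcat a b) = render a ++ render b

                                main :: IO ()
                                main = putStrLn (render (CConcat (CStr "port=") (CNum 8080)))

                            Anything you cannot build from these constructors simply cannot be written, which is the “conservative” part of the trade-off.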

                          1. 1

                            Dhall has control flow and recursion. Your own post addresses this so your comment could probably just omit the Dhall mention.

                            I’ve considered making a config language out of PureScript which would disable recursion and import a custom content-addressable package set, which would expose only those functions that terminate (map, fold, etc). Rather than creating anything new, it would restrict what is already there and maintained. PureScript is better than Haskell at talking natively to JSON, with its record system, which is why I’d choose it.

                            You could perform “modifications” by replacing/updating top-level definitions of normal forms. But only normal forms can be modified directly without worry, i.e. pretty much JSON. Anything else and you would have to wrap things with setters and adders, which obscures the structure of the config. But one could explore normalization tools to “inline” code with its results to recover normal form structure.

                              I think something like this deserves more research.

                            1. 1

                              Dhall has control flow and recursion

                              Ah, I meant it doesn’t support general recursion. In my understanding it’s got folds/maps, but no for/while analogues which makes it total, right?

                              I’ve considered making a config language out of PureScript which would disable recursion and import a custom content-addressable package set.

                              Rather than creating anything new, it would restrict what is already there and maintained.

                                Yep, that’s exactly what I had in mind by a “subset of a language”! Sounds really cool. Did you get far with this?

                              1. 1

                                Sorry, wrong assumption on my part. It doesn’t have recursion, not even primitive recursion or structural recursion.

                                  I assumed it had some kind of iteration to achieve Ackermann; it achieves it instead with Natural/fold, which takes an arbitrary Natural for how many iterations to perform. That is sadly enough to define a pernicious Ackermann-like function. I’m not sure that Gabriel was aware of this possibility when adding Natural/fold to Dhall; I’m sure he wouldn’t have added it if he had been, as the main promise of Dhall is termination, which really comes with an implicit “in a short time” qualification, at least it did in my mind. Otherwise you always have to add an “in n steps” or “in n milliseconds” sandbox to your configurations, which is unacceptable: what do you do when you hit that limit?
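
                                  Just to make that concern concrete, here is a small Haskell sketch; iterN plays roughly the role of Dhall’s Natural/fold, and both names are purely illustrative:

                                      import Numeric.Natural (Natural)

                                      -- iterN n f x applies f to x exactly n times: bounded iteration only,
                                      -- no general recursion (roughly what Natural/fold gives you).
                                      iterN :: Natural -> (a -> a) -> a -> a
                                      iterN n f x = foldr (\_ acc -> f acc) x [1 .. n]

                                      -- An Ackermann-style function built from bounded iteration alone.
                                      ack :: Natural -> Natural -> Natural
                                      ack m = iterN m (\g n -> iterN (n + 1) g 1) (+ 1)

                                      main :: IO ()
                                      main = print (ack 3 3)   -- 61; ack 4 2 "terminates" only in theory

                                  Everything above terminates in theory, yet ack 4 2 would run far longer than anyone can wait, which is exactly the practical worry.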

                                1. 2

                                  No problem!

                                    This is exactly why I’m somewhat sceptical about termination as a necessary property, because termination in theory (i.e. in 100 years) means non-termination for all practical purposes. Ackermann is a contrived example, but once you have anything resembling functions (even without recursion!), you’re doomed anyway, e.g.

                                  function f1(x) { return concat(x     , x); }
                                  function f2(x) { return concat(f1(x), f1(x)); }
                                  function f3(x) { return concat(f2(x), f2(x)); }
                                  function f4(x) { return concat(f3(x), f3(x)); }
                                  ...
                                    function fN(x) { return concat(...); }
                                  

                                  (not Dhall, but I believe this is expressible in Dhall)

                                  As N grows, this will take exponential time to evaluate (and also exponential memory), while the code itself grows linearly with N. Linear growth means it’s something that can be written and managed by a human, so here you go, you have something that doesn’t terminate in practice. Again, this is something contrived and unlikely to occur unless you’re being malicious, but if we’re discussing language design, that’s something to keep in mind.
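
                                    For what it’s worth, here is the same shape as plain Haskell with no recursion anywhere (illustrative names, mirroring the pseudocode above); the work doubles with every extra definition even though the language fragment is total:

                                        f1, f2, f3, f4 :: String -> String
                                        f1 x = x    ++ x
                                        f2 x = f1 x ++ f1 x
                                        f3 x = f2 x ++ f2 x
                                        f4 x = f3 x ++ f3 x

                                        -- Each definition doubles the work of the previous one:
                                        main :: IO ()
                                        main = print (length (f4 "a"))   -- 16, and fN would yield 2^N characters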

                                  You’d get this blowup behaviour from jinja templates, or YAML, or anything that supports string interpolation or helper functions.

                        1. 4

                          Anyone here running NetBSD as their primary desktop? I’d love to hear your stories :-) Can you share some of what you like about it? The small moments where you stop and think, that’s why I run it!

                          1. 10

                            I use it (have for a few years). I really enjoy how quickly I can go from “name of a binary I use” to the source code that created it. Applies to both base and packaging.

                            It’s also a really nice simple system that I understand fully, which is always fun. It brings me a lot of confidence in doing insane things like live-swapping libc to test out a patch.

                            My use case has been very FOSS-heavy for many years, so I don’t miss much. I did quit some video games in my transition, but it felt like a good decision (the particular way I was playing had negative effects on me).

                            For a few days when I felt like I wanted to work from my NetBSD machine, I was using it as an SSH terminal to a remote Linux box, and using my Android phone for video calls.

                            My graphical environment is actually harder to reproduce on the user-friendly linuxes, since I use things like dwm, which are configured by patching code & rebuilding.

                            I’ve been a developer for NetBSD for the past few years and it has encouraged me to do crazy things I would never have otherwise attempted, like a lot of toolchain and driver work. It has given me justified self-confidence on the professional front. It’s a nice community and I recommend it.

                            1. 3

                              I’ve been a developer for NetBSD for the past few years

                              Get a hat! https://lobste.rs/hats

                            2. 8

                              I don’t anymore (it simply doesn’t support things that I need for my job), but I did for a long time years ago.

                              • pkgsrc is great.
                              • The whole rc system is really nice.
                              • The work that went into making it very portable also resulted in a clean and understandable code base for the core system components.

                              NetBSD also seems to be doing the most moving-the-ball-forward of the BSDs of late. OpenBSD is trying to be the most secure, which is great, but that encourages not doing things like putting Lua in kernelspace. :) The rump kernel project and various other bits of neat stuff are great.

                              1. 5

                                NetBSD also seems to be doing the most moving-the-ball-forward of the BSDs of late

                                well.. jmcneill@ is doing incredible development with all the things Arm and embedded (RPi4 Ethernet most recently, AWS a1.metal, display output on lots of SoCs…), for sure.

                                But in general, FreeBSD is rolling lots of balls forward, I would say more than the others, but I’m biased of course :)

                                The rump kernel project

                                unfortunately rumprun doesn’t seem to be actively developed anymore..

                                1. 4

                                  Sorry, I didn’t mean to discount others’ hard work. I meant something more along the lines of “willing to do more researchy/experimental things.”

                                  Then again, I may be wrong on that front too.

                                  1. 1

                                    That HID work looks promising. It would be good if it enabled N-key rollover in keyboards to allow things like Plover and my unreleased stenography engine to work on FreeBSD.

                                    1. 1

                                      N-key rollover in keyboards

                                      It does, I just checked — my USB keyboard went from 6-key to >20-key rollover.

                                      I’m actually kinda surprised it didn’t work with the stock ukbd — I thought the keyboard pretended to be a hub with multiple keyboards, but no, it’s multiple HID endpoints on one device.

                                      1. 1

                                        Excellent. Thanks for checking.

                                  2. 1

                                    What is good about putting Lua in kernelspace?

                                    1. 6

                                      So far they’ve used it for scriptable CPU scaling and packet filtering, which is cool. The paper where they discuss the implementation points out that their packet filtering does not introduce significant performance overhead and does filtering stuff that would be impossible with existing solutions.

                                      The idea of having memory-safe scripting in the kernel instead of more C seems like it could be a good thing for when things need to be more dynamically configured. If they had gone with userspace controllers, they’d need to poke more holes into the userspace-kernelspace barrier. This way, a lot of functionality can be exposed through the single “run a Lua program” thing.

                                      Most importantly, though (and IMHO), it doesn’t matter whether it’s good or not. We can’t know if it’s good or if it’s useful until we try it, and I’m glad that the NetBSD folks were willing to give it a try. If they fail, no harm no foul, but if they succeed, they’ve pushed the state of the art a little bit forward.

                                      1. 2

                                        Yeah, I didn’t know about this either. Found slides on it. Looks pretty cool. One motivation, prototyping or debugging drivers, is similar to Rump kernel.

                                    2. 6

                                      Anyone here running NetBSD as their primary desktop?

                                      Yes, I am.

                                        I had an old netbook sitting idle at home and decided to make that my primary computer (as opposed to the family computer). Keep in mind, though, that I primarily use my computer to write code, and that I am quite a new NetBSD user.

                                      In a nutshell:

                                      Pros:

                                        • it is fast (faster than the Arch Linux I previously had installed on that same machine);
                                        • it is “clutter free”—it has (almost) zero default services, and I know and understand all the processes running in the operating system;
                                        • its documentation (both “The Guide” and the man pages) is the most complete and “human friendly” I have ever seen—take a look at man afterboot to understand what I mean;
                                        • the level of symbiosis between the kernel and userland is incredible (for a person used to Linux);
                                        • pkgsrc, although I have only used the binary distribution (pkg_add and pkgin) so far;
                                        • it is the first *nix system on which I feel comfortable not having a window manager (I keep i3 around for the rare occasions I need to use Firefox, though);
                                        • it seems (from what I see on the official blog) to be actively improving security (e.g., kernel fuzzing efforts) and compatibility with LLVM, which I find interesting.

                                      Cons:

                                        (I sincerely cannot say: everything seems to be working—I haven’t tested the Bluetooth—and I am very happy with it! Let’s try something for the sake of argument.)

                                      • it requires some experience with *nix to install and configure it, although not much—I doubt my parents would use it instead of Windows;
                                      • it forces you to read more documentation;
                                        • pkgsrc binaries are updated less frequently (quarterly) than those in the apt repositories;
                                          • I have yet to learn how to use pkgsrc properly, and that requires some documentation reading;
                                          • even when I learn to use it properly, I will need a lot of luck, patience, time, and electricity to compile Firefox on a netbook.
                                      • sailor is not there yet, while FreeBSD has Jails.

                                      Why NetBSD?

                                      All in all, I thought: why not?

                                      1. 2

                                        @guiraldelli, thanks a lot for your detailed reply and links! I’m going over them all. :-)

                                      2. 4

                                        Not as primary, but I’m running it on two oldish computers, an Amiga 1200 (with a 030 and fpu) and a PC with one of the first Athlon CPUs (Slot A).

                                        I love it. The kernel and userspace are non-bloated (relative to Linux), the manpages are excellent and the source code is well-structured and pleasant to read.

                                      1. 27

                                          I think if you’re considering Gtk or Qt for its cross-platformness, you should probably also consider Tcl/Tk. It’s a lot smaller and faster than the others, and if one of them is good enough, Tcl/Tk almost certainly is better.

                                          And if Tcl/Tk isn’t good enough? You know, sometimes that higher standard of quality is required for consumers, and if it is, you’ll waste so much time trying to get that quality out of a “cross-platform GUI framework” that you might just as well have used a “native” framework and written your code twice, because your program will always be bigger than your competitors’, and slower than them, and it’ll always feel a little bit off even if you do everything else right.

                                        For me, I’ve completely written off this kind of “cross-platformness” because that kind of hassle just isn’t worth it.

                                        1. 10

                                            I came here to suggest Tcl/Tk and FLTK: both are way lighter options.

                                          1. 10

                                            Accessibility is a quality that is arguably even more important for business applications than it is for consumers, and Tk doesn’t have it, at all. That should be a deal-killer, unless you’re developing an application for a task that’s inherently visual (e.g. working with graphics).

                                            1. 2

                                              Do you know if command line tools are inherently accessible to screen readers, or does something additional have to be done?

                                              1. 2

                                                Yes, command-line tools are inherently accessible.

                                                1. 5

                                                  Although I’ve always had trouble with the ones which add pseudo-gui elements.

                                              2. 2

                                                To the person who downvoted my comment as a troll, do you have any suggestions on how I could have written it better? I’m not trolling; I’m genuinely advocating for accessibility (check out my profile). If it came off as too harsh, that’s just because I feel strongly about this subject.

                                                1. 1

                                                  I’ve also been getting a bunch of frivolous “troll” flags on comments I made in good faith. Perhaps someone is abusing the comment downvoting system? I don’t think you should have to back down from what you said, it was a good point and not harsh at all.

                                                2. 1

                                                    This is also one of the reasons why Electron is awesome(ish): you get all the web platform accessibility tools and APIs. And there is tons of content about how to make accessible applications with web technologies, which IIRC was not the case with Qt/GTK.

                                                  1. 1

                                                    But (and you may have anticipated this point) is a framework really all that accessible if every application written with it is ginormous and requires a whole bunch of CPU and RAM just to get up & running?

                                                3. 2

                                                  To paraphrase a common refrain in Functional Programming circles, “islands of backend logic in a sea of GUI code”.

                                                  1. 2

                                                    Agreed, particularly when there are good choices for writing your business logic that will run and interoperate more or less cleanly no matter where you are. You technically don’t even have to write (most of) your code “twice”, just the UI bits, which are probably going to be quite different anyway because of the different UI metaphors prevalent on different platforms.

                                                    1. 3

                                                      going to be quite different anyway because of the different UI metaphors prevalent on different platforms.

                                                      Count on it!

                                                      And it’s hard to be fluent in those metaphors when they’re so different from system to system unless you force yourself to use those other systems often enough that you wouldn’t put up with a Qt or Gtk build of your Mac or Windows app anyway.

                                                    2. 0

                                                      I have been thinking of Electron/React Native as a cheap way (in time and money) to get cross-platform applications. And as the money<->time<->quality dependency chain dictates, you will lose a certain amount of fit and finish when you take that approach. A lot of the time it is the correct call (I would rather take a non-native application than no application at all), but it is always a trade-off that needs to be made, not just defaulted to.

                                                    1. 11

                                                      It reminds me of The Haskell Pyramid: several (maybe even most?) of the benefits of programming in Haskell come from basic constructs.

                                                        Hail Occam’s razor, the Pareto principle, the KISS principle, and similar reasoning, once again, in a world full of (essential, but mostly incidental) complexities. 🙂

                                                      1. 1

                                                        Thanks for sharing that… maybe I’ll give haskell a try after all!

                                                      1. 21

                                                        ASCII only, displays everywhere

                                                          What? UTF-8 is widely supported. If you care about a consistent view, please drop all your CSS. This makes the whole website suitable only for users who write in the Latin script. If you hate emojis, I would suggest blocking that specific range of UTF-8 codepoints.

                                                        1. 7

                                                          This makes the whole website only suitable for users who write in the Latin script.

                                                            Although I agree with your statement, it is worth mentioning that my native language uses the Latin script and still needs characters outside the ASCII range to clearly distinguish different words.

                                                            The situation is even worse for the language I have recently been learning: it has necessary letters which aren’t in ASCII, although it still uses the Latin script.

                                                          1. 5

                                                              Yeah, that’s a weird thing, isn’t it? ASCII was supposed to support American English. It doesn’t, and I don’t think there’s any major natural language that can be written using ASCII only. Maybe its inventors were a bit naïve, or they just needed something to put on their résumé…

                                                            1. 6

                                                              It was created in 1963. The world was very different back then!

                                                        1. 1

                                                            That’s the second translation extension I use that has been “attacked” by Mozilla. The other one was S3.Translator, after a request to collect usage statistics.

                                                          1. 3

                                                            That’s removal from addons.mozilla.org. Is S3.Translator also blocked for side-loading?

                                                              1. 3

                                                                  FWIW we have been delisted (wiped from the face of AMO, automatic updates disabled, but people can still install/use the extension) rather than blocked (forcibly disabled on every user’s machine).

                                                                1. 2

                                                                    Jesus. If Mozilla prevents me from using tridactyl, then it’s likely that I’ll move back to Chrome. Thank you for dealing with it. It sure looks frustrating.

                                                            1. 0

                                                              It still makes me sad to think about the fact that we never managed to figure out how to make BitTorrent work while respecting intellectual property. There is this amazing technology that enables us to share a vast amount of data in a wonderfully efficient way. Still, using it you’re most likely doing something that is considered illegal in your country.

                                                              1. 13

                                                                [..] to make BitTorrent work while respecting intellectual property.

                                                                  BitTorrent optimises sharing; intellectual property, as I understand it is mostly used, works as a stop mechanism for sharing.

                                                                Thus, to me, it seems they have conflicting interests.

                                                                Of course it is a simplistic perspective on the subject.

                                                                  But every time I think about sharing and intellectual property in the digital age, I think about libraries and books, and how the standard copyright notice

                                                                No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher […]

                                                                and libraries co-exist. (Highlight on distributed is mine.)

                                                                Of course it is simply a mental exercise and I have no solution to it.

                                                                1. 6

                                                                  How do you define “respecting intellectual property”? Users get to decide whether to download a torrent and participate in the swarm. If it’s illegal, then they are breaking the law. If I use a kitchen knife to cut roast beef, that’s legal. If I use a kitchen knife to cut roast rhino, that’s likely illegal. It’s not the knife maker’s responsibility to ensure that I’m not using the knife to cut endangered species.

                                                                  1. 0

                                                                    It’s definitely illegal to kill people with that knife, in case you needed a clearly illegal act.

                                                                  2. 4

                                                                    There is this amazing technology that enables us to share a vast amount of data in a wonderfully efficient way. Still, using it you’re most likely doing something that is considered illegal in your country.

                                                                    Perhaps it is the laws that are at fault, and not the technology. Intellectual Property is predicated on the zero-sum logic of property–for you to have something, I must lose it. Clearly that is not the case with information.

                                                                    1. 0

                                                                      Clearly that is not the case with information.

                                                                      Perhaps, but people don’t create information for free. There is an expectation that you will get reimbursed in some way, and if you don’t then you are indeed operating at a loss. After all you could have done something else with your time.

                                                                        Edit: and that doesn’t include the patent argument, where an inventor would indeed have lost something (business) if people encroach on the intellectual property at the core of that business.

                                                                      1. 5

                                                                          Neither of those statements is accurate, right? You and I are both (to the best of my knowledge) creating works here without getting paid. People strumming to themselves on guitars and singing made-up songs while working aren’t getting paid. The idea that all creation is motivated by economic activity seems easy to disprove.

                                                                        The patent thing isn’t about losing something due to encroachment. It’s about saying “Hey, we’ll give you a limited monopoly to give you even more value in the market, in order to incentivize discovery of new techniques and then to explicitly force you to share that intellectual property by having public record of how you did what you did.” That second part is critical, and one very much lost in modern software and legalese.

                                                                        Also, real talk: the American publishing industry was founded on copyright infringement. The spread of semiconductor techniques and thus the information age was predicated on the sharing of information via folks like the Traitorous Eight and so forth. The bootstrapping of the Chinese tech manufacturing in places like Shenzhen is massively driven by infringement on IP.

                                                                        Enforcement of IP is just about rent seeking, and not societal good. Anyways, happy to continue in DMs–this is off-topic for the site. :)

                                                                    2. 3

                                                                      never managed to figure out how to make BitTorrent work while respecting intellectual property

                                                                      You could say the same thing about any protocol. HTTP, FTP, NNTP, Email, etc.

                                                                      1. 3

                                                                        Yeah, but of those two, which is the important one?

                                                                        There are several problems. One big problem is that copyright’s terms are for years, but it only takes a few days for information to be broadcast to all target markets. The biggest Hollywood films, for example, are released near-simultaneously globally and make most of their sales within the first two weeks. Books once took years to print in serious quantity, but now take months to print and weeks to ship. Music can be recorded in a weekend and be streamable within a week. Youtube and Twitch permit global publication within under a minute of recording.

                                                                        Consider a world where copyright is extremely short, due to the rapid speed at which information can be broadcasted. In such a world, a copyright holder might release their artwork over BitTorrent. In order to do this, they would sell tickets which each have a unique private tracker URL. Then, when the artwork is published, the ticket-holders are sent torrent metadata and can download their own private copies of the artwork hours or even possibly days ahead of non-paying viewers.

                                                                        Another big problem is that it’s not at all clear whether information, in the information-theoretic sense, is a medium through which expressive works can be created; that is, it’s not clear whether bits qualify for copyright. Certainly, all around the world, legal systems have assumed that bits are a medium. But perhaps bits have no color. Perhaps homomorphic encryption implies that color is unmeasurable. It is well-accepted even to legal scholars that abstract systems and mathematics aren’t patentable, although the application of this to computers clearly shows that the legal folks involved don’t understand information theory well enough.

                                                                        I wonder why this makes you sad. As a former musician, I know that there is no way to train a modern musician, or any other modern artist, without heavy amounts of copyright infringement. Copying pages at the library, copying CDs for practice, taking photos of sculptures and paintings, examining architectural blueprints of real buildings. The system simultaneously expects us to be well-cultured, and to not own our culture. I suggest that, of those two, the former is important and the latter is yet another attempt to coerce and control people via subversion of the public domain.

                                                                        1. 2

                                                                          I really like the idea of selling tickets for private tracker URLs. This would make decentralized distribution possible while making sure that content creators get their revenue.

                                                                          Besides the hardly measurable cultural loss due to restrictive copyright law, I think what this also comes down to is waste of resources. I wonder how much money could be saved by switching to a decentralized infrastructure when distributing files on the internet.

                                                                            Keep in mind all those judges, lawyers and prosecutors whose resources are tied up in disputes concerning copyright. If there were better ways of regulating intellectual property, I believe that many economies could benefit from this.

                                                                      1. 8

                                                                        Learning the existence and shortcuts of GNU Readline made me a better user of Bash and GDB.

                                                                        But the breakthrough was to learn about (and use) rlwrap (Readline wrapper), which adds Readline functionality to software that doesn’t (originally) use it (e.g., telnet).

                                                                        With such a tool, it is possible to C-r, C-p, C-a, C-w, C-y, etc, in any command line interface.

                                                                        1. 2

                                                                            The day I found rlwrap to make the Oracle SQL CLI tool bearable was a good day.

                                                                        1. 20

                                                                          Summary: the linked text is just a rant; it doesn’t address real practical problems in the Haskell language and it complains about the wrong subjects, in my opinion.

                                                                          The linked text infuriates me, because it is uninformative and needlessly spreads fear, uncertainty, and doubt with no backing data or references.

                                                                          It can be read as a rant, at most, and it should be tagged as such, in my opinion.

                                                                          But I want to address what I find wrong in this text.

                                                                          • IO problem. Besides the fact that main = putStrLn "Hello, World!" is a simpler “Hello, World!” program than its C counterpart, what I learned is that Haskell forces the programmer to think about, design, and implement the data transformation first, and only then deal with its input and output. In Java-esque terms, one takes care of the business logic, then moves on to the user-interaction part (which will be encapsulated in the IO monad). When I wrote a compiler, interpreter, and simulator (for a metabolic language) in Haskell, I learned that lesson smoothly, because most of my design models were already mathematical, so I was (in theory) mainly focused on the “business logic”. Later, when I had to add the HTML5 user interface, it was actually easy, and I never felt this IO problem with my rustic MVC design.
                                                                          • Monoid abstraction. I am not kidding: 5th-grade children in my homeland learn what a monoid is, so I think any programmer can as well. Of course they don’t learn the name “monoid” (or “group”, in the mathematical sense), but they do learn all the rest of the nomenclature: set, binary operation, identity element, associativity. And they are examined on the subject, having to recognize which pairs of sets and binary operations form a monoid and which don’t. Do I come from a land of geniuses? Certainly not! The real problem is not how difficult the monoid abstraction is to understand, but the connotations it has acquired after all the social-media complaining about Haskell, monads, and the sordid definition “a monad is just a monoid in the category of endofunctors”. If monoids were presented as the Clock Arithmetic design pattern in a Gang of Four book, I think most people would accept them (see the short Haskell sketch right after this list). (Well, most people accept the monoidal properties of Promise and Future in Java without ever calling them monoids, or even monads.) My opinion is that a lot of programmers are “mentally lazy” and want an easy, served-on-a-plate solution to their problem instead of reflecting a little on concepts. Design patterns are a great example: obvious instances of applied object-oriented concepts, from which most people can just “go to the shelf and take the canned solution” if they don’t want to design their own architecture from the basic set of object-oriented concepts.
                                                                          • “Not made for this world.” I think the correct term would be “ahead of its time”. The syntax is not ALGOL-like but mathematical; the language encourages reasoning about data and, consequently, the use of types; it also encourages determinism (via pure functions) while making non-determinism easy (off the top of my head, Applicative and lists are nice to fool around with); there is the ease of polymorphism (compare it with C++ templates) and type classes; lazy evaluation; several styles of concurrency and parallelism; and so on… The list of features that put Haskell ahead of its time (of course it is not the only language in this group) is incredible! But when we are stuck with strict-subset languages of the 1970s (yes, most of our dear programming languages are a strict subset of ALGOL 68), we tend to think any breakthrough language is not made for this world.
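
                                                                          To make the clock-arithmetic point concrete, here it is as a few lines of Haskell (a hypothetical Clock type, purely for illustration): the set is the twelve hours, the operation is addition modulo 12, and the identity is 0. That is the entire Monoid.

                                                                              -- Hours on a 12-hour clock, combined by addition modulo 12.
                                                                              newtype Clock = Clock Int deriving (Eq, Show)

                                                                              instance Semigroup Clock where
                                                                                Clock a <> Clock b = Clock ((a + b) `mod` 12)

                                                                              instance Monoid Clock where
                                                                                mempty = Clock 0   -- adding zero hours changes nothing

                                                                              main :: IO ()
                                                                              main = print (mconcat [Clock 9, Clock 5])   -- Clock 2: 9 o'clock plus 5 hours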

                                                                          Uff! I got that out of my system.

                                                                          However, I want to briefly point out some of my practical criticisms towards Haskell, which I (as a not-very-experienced Haskell programmer) think the article should have addressed.

                                                                          • Memory consumption. Do you remember the compiler/interpreter/simulator I told you about before? Yeah… To translate a couple of rules to a primitive assembly, my application consumed 5 GB of working memory. And it is not because I am a lousy programmer (could be, though), but because the “standard way of doing it” (no advanced Haskell trickery) tends to generate memory-hungry programs, in my experience.
                                                                          • Debugging. I am a fan of debugging. I use GDB almost daily, both to understand calls and to debug buggy applications. I still don’t know how to do the same in Haskell, although I haven’t dived into it.
                                                                          • Performance. Haskell is fairly good here, I must admit: it is comparable to Java and to Go (given that Go is comparable to Java). But I think it has the potential to get closer to Rust, so that I would have some chance of convincing my employer to start using it.
                                                                          • Retrievability and documentation. The Haskell Wiki is a gem, in my opinion: almost any subject concerning programming languages (a deliberately exaggerated claim), particularly Haskell, is in there. The problem is finding it (when one wants or needs it). Books? The landscape has been changing (thank you, @bitemyapp), but compared to Rust (and its impressive free book, thanks @steveklabnik), for instance, it is light-years behind. And finding the correct (maintained) library for common tasks is not easy, although there are initiatives on the way.
                                                                          • Reputation. “The Haskell Pyramid” is a real problem, in my opinion. Maybe one of the biggest problems in Haskell.

                                                                          It is obvious that Haskell isn’t perfect and lacks tools for “practical, industrial application”. Unfortunately, those issues were not addressed in the only relevant paragraph (out of eight) in that text.

                                                                          P.S.: it seems that <br/> doesn’t work in item lists in this Markdown.

                                                                          P.P.S.: I didn’t mean to say the Rust book(s) are better than “Haskell Programming from First Principles” (@bitemyapp, I am a fan of your book and I cannot recommend it enough). What I meant is that the (series of) Rust book(s) is part of the Rust documentation available (for free) on Rust’s website, and it is easy to find it, read it, and understand Rust from it. In Haskell, the closest equivalents are the Haskell Wiki and the Haskell Wikibook, which aren’t as well maintained as the Rust book.

                                                                          1. 2

                                                                            As far as memory, performance, and predictability go, you might find Habit interesting. I think it stalled after the people behind it got poached for bigger projects. It’s still there to give ideas and inspiration to whoever wants to try it next.

                                                                            1. 1

                                                                              Thank you for your link: I will take a detailed look at it.

                                                                              From the language’s report, I think you are spot-on:

                                                                              This report presents a preliminary design for the programming language Habit, a dialect of Haskell [14] that supports the development of high quality systems software.

                                                                              I guess Habit has a major focus on these features, but @Leonidas gave me a nice answer in another thread suggesting that OCaml also addresses these features better than Haskell does. What do you think?

                                                                              1. 2

                                                                                Haskell tries to be purely functional as much as possible. Ocaml is more flexible, with several paradigms supported. It might be easier to do imperative stuff just for that reason. Haskell’s consistency might have advantages, too. It probably depends on your needs.

                                                                                I know Jane St is all over Ocaml, including improving it. One weakness of Ocaml that I know of is concurrency; Haskell has better options. Ocaml usage seems to lean more toward mainstream programming, maybe making it easier to learn. Haskell, on the other hand, gets into mathematical concepts that might give you more new ways of thinking on top of FP in general.

                                                                            2. 1

                                                                              The syntax is not ALGOL-like (but mathematical);

                                                                              That’s a simple demonstration of what is so annoying about the Haskell community. Do you honestly believe that ALGOL was created without mathematics? Do you think e.g. Alan Perlis and John McCarthy, two members of the ALGOL committee, lacked the deep mathematical knowledge required to understand monoids? Knuth doesn’t know or use mathematics, so the poor fellow had to hack things up in MIX assembler? What about Alan Turing? There is nothing more imperative and side-effect-plagued than a state machine with an infinite storage unit attached. Poor Dennis Ritchie, with his Ph.D. on the Grzegorczyk hierarchy, probably didn’t understand the definition of a function, so he had to hack up something like C? Is that your theory? People who are not on the Haskell bus are just too lazy and innumerate to appreciate the depths of algebra needed to understand what a monoid is?

                                                                              Here, I have a question for you. Function composition is associative and the identity map is trivial, so why do you need to add a monoidal structuring to Haskell if it’s about “pure” functions? The answer, to me, is that Haskell is just embarrassed by its statefulness: there is an inherent notion of state in the structure of the program text even without all the monad nonsense, because it is not at all convenient to write a program as a single expression using function composition as the only connective, e.g. as a single lambda-calculus expression. Of course, contrary to ideology, this is also true of most mathematical texts, even the most non-algorithmic. When you have the definition “let G be a non-trivial cyclic group” and then later “let G be the monster group”, those indicate a change of state. x is not always the same value in a mathematics textbook! Try explaining Euler’s method without some notion of step. So “pure” Haskell is, of course, stateful but shamefaced about it. And then you realize that, oh shit, the dim ALGOL committee maybe was not so stupid, and to do complex things we need to be able to specify both state and evaluation order, so you want to add more complicated imperative structures but still retain the illusion of being in a pure function space: voilà, use the endofunctor to smuggle in state variables and a fancy composition rule to compose associatively while properly connecting all the state variables. Ta-da!

                                                                              1. 5

                                                                                I usually abstain from this kind of comment, but I think it is important to clarify some things.

                                                                                1. I think your comment is unnecessarily harsh in tone and pretentious.

                                                                                2. From a single quote from my comment, you have inferred a lot about the Haskell community and about me.

                                                                                When I stated that “[t]he syntax is not ALGOL-like (but mathematical)”, I meant that it is based on mathematical notation and not on natural-language prose. It is not exclusive to Haskell: OCaml, SML and others have the same kind of syntax. And that is deliberate.

                                                                                1. I am aware of the people on the ALGOL committee. And most of them were mathematicians (van Wijngaarden, Dijkstra and Hoare are some examples I know by heart), so I am personally sure they were aware of what a monoid is, particularly when 11-year-old children are aware of it.

                                                                                By the way, the members I cited above brought more mathematical formalism to the ALGOL dialects they designed or implemented: W-grammars, recursion, and range checks (via Hoare-logic assertions), respectively.

                                                                                But it doesn’t exclude the fact that ALGOL has a syntax similar to prose instead of mathematical notation.

                                                                                1. I don’t think that “[p]eople who are not on the Haskell bus are just too lazy and innumerate to appreciate the depths of algebra needed to understand what a monoid is”, but I think that most programmers are.

                                                                                In some fields somewhat related to computing, e.g. electronic engineering, knowing advanced calculus is a prerequisite for building even the simplest products (for instance, the Laplace transform for solving integro-differential equations).

                                                                                It is not that they really need to know it, but since it is the basic concept behind their solutions, they learn it and internalise it.

                                                                                It might seem different in programming, but it is not. Associative operations are widespread in algorithms, and to know what a monoid is incredibly helps to compose better solutions. And it is not a difficult concept, given that 11-year-old students learn it! (A small sketch follows below.)

                                                                                But I see a lot of programmers complaining about having to learn a few mathematical concepts (in practice, it is mostly a matter of learning the names) while happily learning the shiny new framework every six months.

                                                                                To me, personally, it is just shallow.
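
                                                                                To be concrete about “composing better solutions”: a tiny sketch (the summary function is my own illustration). Because a pair of monoids is again a monoid, a single traversal computes two results at once:

                                                                                    import Data.Monoid (Any (..), Sum (..))

                                                                                    -- (Sum Int, Any) is a monoid because both components are, so one
                                                                                    -- foldMap computes a total and an "any negative?" flag in one pass.
                                                                                    summary :: [Int] -> (Int, Bool)
                                                                                    summary xs = (getSum total, getAny anyNeg)
                                                                                      where
                                                                                        (total, anyNeg) = foldMap (\x -> (Sum x, Any (x < 0))) xs

                                                                                    -- summary [3, -1, 4] == (6, True)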

                                                                                1. “Why do you need to add a monoidal structuring to Haskell if it’s about “pure” functions”?

                                                                                I am not sure whether I am the right person to answer this, but the Monoid type class is there for convenience: in OOP terms, it is just “an interface”. (It is a type class, to be precise.)

                                                                                The reason it is there is to facilitate code reuse: every type with a Monoid instance provides mempty and mappend (nowadays just the (<>) operator inherited from Semigroup), so generic code can be written once against that interface; a small sketch follows below.
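
                                                                                A minimal sketch of what that “interface” buys you (the examples value is just my illustration; everything else is standard base): unrelated types share the same (<>) and mempty vocabulary, and, incidentally, function composition with id as the identity is itself an instance (Endo), which is exactly the structure your question points at:

                                                                                    import Data.Monoid (Endo (..))

                                                                                    examples :: ([Int], String, Int)
                                                                                    examples =
                                                                                      ( [1, 2] <> [3]                          -- lists: concatenation, mempty = []
                                                                                      , "foo" <> mempty <> "bar"               -- strings: concatenation, mempty = ""
                                                                                      , appEndo (Endo (+ 1) <> Endo (* 2)) 10  -- composition, identity = id
                                                                                      )

                                                                                    -- examples == ([1, 2, 3], "foobar", 21)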

                                                                                1. I am not aware of a single person who denies the existence of state in Haskell. As far as I am concerned, Monad is a way to model that state inside the language so that it fits the type checker and the “purism” of the language.

                                                                                2. “Algol committee maybe was not so stupid and to do complex things we need to be able to specify both state and evaluation order […]”.

                                                                                The need is conjectured not to exist, by the Church-Turing thesis: as I understand it, the Turing-complete models are all equivalent to the sequential Turing-machine model, yet the λ-calculus does not require specifying state or evaluation order.


                                                                                Still, you made me think about the reason Haskell has the Monoid type class, which was a useful exercise, and I hope we can keep the discussion civil.

                                                                                1. 1

                                                                                  When I stated that “[t]he syntax is not ALGOL-like (but mathematical)”, I meant that it is based on mathematical notation and not on natural-language prose. It is not exclusive to Haskell: OCaml, SML and others have the same kind of syntax. And that is deliberate.

                                                                                  ALGOL is not based on natural language at all. FORTRAN is definitely not based on natural language - Formula Translation. C looks like recursive functions.

                                                                                  But it doesn’t exclude the fact that ALGOL has a syntax similar to prose instead of mathematical notation.

                                                                                  No. The main constraint on ALGOL surface syntax was that it had to be typed on teletype machines or even punched cards.

                                                                                  It might seem different in programming, but it is not. Associative operations are widespread in algorithms, and to know what a monoid is incredibly helps to compose better solutions. And it is not a difficult concept, given that 11-year-old students learn it!

                                                                                  Here you combine a condescending approach with a claim (“incredibly helps”) that seems completely wrong to me. Of course, even in US elementary schools, one learns about associative operations. How does it help in programming to call that “monoidal”?

                                                                                  I think programmers should learn algorithmic analysis, state machines, combinatorics, linear algebra, … I don’t see a need for abstract algebra in programming - it’s cool stuff, but …. That is, programmers need mathematics, but they do not necessarily need elementary concepts of abstract algebra awkwardly glued onto a complicated programming language.

                                                                                  1. 3

                                                                                    No. The main constraint on ALGOL surface syntax was that it had to be typed on teletype machines or even punched cards.

                                                                                    ALGOL had distinct reference and hardware languages. They say for the reference language:

                                                                                    1. The characters are determined by ease of mutual understanding and not by any computer limitations, coder’s notation, or pure mathematical notation.
                                                                                    1. 1

                                                                                      That’s such a great paper: very condensed, with a high density of information. But it certainly does not help the argument that ALGOL was based on natural language; it’s clearly designed around arithmetic expressions. Wikipedia has a good note on the typewriter constraints (pre-ASCII!):

                                                                                      https://en.wikipedia.org/wiki/ALGOL

                                                                                      The ALGOLs were conceived at a time when character sets were diverse and evolving rapidly; also, the ALGOLs were defined so that only uppercase letters were required.

                                                                                      1960: IFIP – The Algol 60 language and report included several mathematical symbols which are available on modern computers and operating systems, but, unfortunately, were not supported on most computing systems at the time. For instance: ×, ÷, ≤, ≥, ≠, ¬, ∨, ∧, ⊂, ≡, ␣ and ⏨.

                                                                                      1961 September: ASCII – The ASCII character set, then in an early stage of development, had the \ (Back slash) character added to it in order to support ALGOL’s Boolean operators /\ and \/.[23]

                                                                                      1962: ALCOR – This character set included the unusual “᛭” runic cross[24] character for multiplication and the “⏨” Decimal Exponent Symbol[25] for floating point notation.[26][27][28]

                                                                                      1964: GOST – The 1964 Soviet standard GOST 10859 allowed the encoding of 4-bit, 5-bit, 6-bit and 7-bit characters in ALGOL.[29]

                                                                                      1968: The “Algol 68 Report” – used existing ALGOL characters, and further adopted →, ↓, ↑, □, ⌊, ⌈, ⎩, ⎧, ○, ⊥ and ¢ characters which can be found on the IBM 2741 keyboard with “golf-ball” print heads inserted (such as the APL golfball). These became available in the mid-1960s while ALGOL 68 was being drafted. The report was translated into Russian, German, French and Bulgarian, and allowed programming in languages with larger character sets, e.g. Cyrillic alphabet of the Soviet BESM-4. All ALGOL’s characters are also part of the Unicode standard and most of them are available in several popular fonts.

                                                                                      The great David Parnas once told me of a colleague whose highest mathematical accomplishment had been some skill in using the “golfball” symbols.

                                                                                  2. 0

                                                                                    The need is conjectured not to exist, by the Church-Turing thesis: as I understand it, the Turing-complete models are all equivalent to the sequential Turing-machine model, yet the λ-calculus does not require specifying state or evaluation order.

                                                                                    It’s like the need to use positional notation instead of Roman numerals for arithmetic. In principle, Roman numerals suffice.

                                                                                  3. 3

                                                                                    Which is why folks like Hoare embraced the nature of actual programs by creating mathematical ways of reasoning about imperative code. The tools with the most productive, practical use in industry follow that approach. Most use Why3, which addresses both functional and imperative requirements. The idea is that you use whatever fits your problem best.

                                                                                  4. 1

                                                                                    I think the correct term would be “ahead of its time”.

                                                                                    Yes, I bet Esperanto was simply ahead of its time as well. It’s the world’s fault that it wasn’t widely adopted!

                                                                                  1. 14

                                                                                    I find that I treat programming languages the same way. If it compiles and runs quickly without eating up resources, I feel like the language is higher quality. It’s why I keep coming back to C and Lua despite their many issues. It’s why I can never get into Haskell or Rust despite agreeing with many of the design decisions.

                                                                                    1. 5

                                                                                      Guess you should give OCaml a try, then: a language pretty similar to Haskell overall, one of the inspirations for Rust, and one that compiles fast with low overhead.

                                                                                      1. 3

                                                                                        I would likely still be using OCaml even if the compiler were slower (good incremental compilation support makes it much less of an issue anyway), but the speed is certainly one of the things keeping me there.

                                                                                        If it can compile itself in under 10 minutes using a single CPU, the only question about beta compilers and experimental variants is whether I want to try them or not, and opam makes it dead simple to keep multiple versions on the same machine.

                                                                                        1. 1

                                                                                          If you’re on the .NET platform, F# is a nice sister language to OCaml. Frequently, valid OCaml is valid F#.

                                                                                          1. 3

                                                                                            Nothing against F#; if I find myself using .NET, I’ll surely go for it. However, I find the decision to remove modules really strange.

                                                                                            OCaml’s object system clearly had to be removed for interoperability with other CLR languages. I suppose the reason to drop decidable type inference was the same. But why remove the second most important feature of ML? :)

                                                                                            1. 1

                                                                                              For the most part we already have CLR-friendly ways of doing the same thing through generics and SRTP. There is a long-term community goal of supporting them, but the need is less pressing given the current CLR approaches, so it falls on the back burner. To be precise, we do have modules; they just aren’t parameterized modules (functors), which I presume is what you’re referring to. Yeah, for many OCaml natives the advice is: use F# if you’re on .NET :) I wouldn’t blame you for sticking with OCaml outside of it.

                                                                                              1. 1

                                                                                                If I remember correctly, F# has no signatures and structures in the usual ML sense, not just no functors. If I’m behind the times, that’s great.

                                                                                                Also, is there a GUI builder that targets F# in Visual Studio or elsewhere now?

                                                                                                1. 1

                                                                                                  We have signatures, but they go in a signature file; inline signatures have been the subject of many years of debate that you’re free to join :). As I understand it, structures are sidestepped by how we do things a little differently; I don’t know if you’ll enjoy it, but it is comfortable for us.

                                                                                                  As for a GUI builder, most people don’t use WYSIWYG tools, but there are strategies for making GUI apps for Android, iOS, macOS, Windows, and Linux, as well as a way to compile F# to JavaScript with a React/Preact-based approach for building web UIs. The popular approach for most of these is to use an “Elm-like” library. The space is maturing and is very usable, but of course it isn’t going to be as feature-rich as something like C#.

                                                                                                  1. 1

                                                                                                    Signatures are there in the form of .fsi interface files, but I don’t think I’ve ever used them over the C#-style class system.

                                                                                                    Modules are basically static classes that are presented as namespaces you can put let bindings in. They’re lexically extensible, which is neat (i.e. you can add a new function to the List module by defining a new module called List and putting it in there). It sometimes confuses VS though.

                                                                                          2. 2

                                                                                            Sincere question: what’s the advantage of OCaml over Haskell and Rust? (I do Haskell.)

                                                                                            As far as I am concerned, OCaml isn’t faster than either of those at runtime (e.g., OCaml vs. Haskell), but it has the advantage (from certain perspectives) of strict evaluation.

                                                                                            Is it compilation time? Memory footprint? Or is it actually faster at runtime?

                                                                                            1. 3

                                                                                              I think I wouldn’t really put Rust and Haskell/OCaml in the same niche, since Rust is useful where you have to avoid GC for reasons of limited memory or consistent response times; otherwise I would almost always choose Haskell.

                                                                                              OCaml’s advantage is that it compiles very fast and runs pretty fast, often without having to spend much time on optimization. I have heard anecdotally that the performance of Haskell code tends to be hard to estimate and that hunting for space leaks is time-consuming. OCaml, on the other hand, is rather simple and predictable in the code it generates, which is why it is even possible to write allocation-free loops. OCaml used to be written more imperatively, but over time it has evolved toward a more Haskell-ish style of writing (but without the operator forest).

                                                                                              So yeah, if you’re already happily using Haskell then there is not much reason to switch, and similarly the other way around. But if you come from, e.g., C, Lua or Python, there are a lot of benefits.

                                                                                        1. 6

                                                                                          If you want all those great features on the web, try invidious. (not affiliated)

                                                                                          1. 1

                                                                                            I have been looking for something similar for a while.

                                                                                            I would like it to not show the video, just the audio… But it is already very good!

                                                                                            edit: It is possible to do so in there. I haven’t found out a way to make it the default mode, though.

                                                                                            edit 2: In the Preferences, there is a checkbox Listen by default. 🤦‍♂️

                                                                                            Thank you!

                                                                                          1. 8

                                                                                            For what it’s worth, I am pretty happy with the overall look & feel of sourcehut. My only complaint would be that the gray is a bit too heavy, but I don’t know what the implications of a softer one would be in terms of accessibility (maybe none?).

                                                                                            1. 3

                                                                                              I’m guessing the CSS for the site is simple enough that adding a per-account setting for some CSS or generic properties would probably be fairly easy. So you could, for instance, override the grey with another colour in your account. Just populate the custom values into the template (I think this is just Jinja2) as context values and voilà. One of the things about hosted GitHub and GitLab is that all you really get to change are the project and org logos. It would be nice if sourcehut had a little bit of customizability in terms of look and feel. Also, someone could then set up a dark-mode theme for sourcehut…

                                                                                              1. 3

                                                                                                I guess a Greasemonkey script should do it easily without the need for server-side customisation. 🙂

                                                                                                1. 4

                                                                                                  Yes but then I wouldn’t be able to MySpace up my sourcehut pages in various shades of neon pink and orange, distracting people from my poorly written code by making their eyes bleed.

                                                                                                  1. 1

                                                                                                    It’s on Google’s death list for running third-party code.

                                                                                              1. 15

                                                                                                Besides Rust and Go, Ocaml also fits those three bullets. The software 0install was ported from Python to Ocaml in one of the best (least biased, easiest to comprehend) refactoring stories I’ve read:

                                                                                                http://roscidus.com/blog/blog/2014/06/06/python-to-ocaml-retrospective/ (the first two posts in the series set up a task for the author to complete in a bunch of languages, which he then reviews in an interim retrospective)

                                                                                                Not that Ocaml has much buzz (from my view), except for Reason/Bucklescript, and some things from Jane Street.

                                                                                                1. 6

                                                                                                  I share your feelings.

                                                                                                  When I read the white paper, I thought that Haskell, Ocaml and even (especially?) ATS would fit the same requirements.

                                                                                                  But as many have noted, it’s an opinionated white paper meant to work as programming-language marketing.

                                                                                                  1. 5

                                                                                                    They seem to want something that can pull in a wide net of mainstream developers. Most don’t know Haskell or ATS, and the latter they’ll really have trouble with. Ocaml seems like it has a bigger community, but it’s not very mainstream. Meanwhile, there are piles of developers across the languages they considered, with Rust growing in that direction faster than most new languages.

                                                                                                    So I think it was right for a JavaScript shop to ignore things like the MLs, Haskell, and ATS.

                                                                                                1. 2

                                                                                                  I have looked into it and OpenJML as well.

                                                                                                  Both look promising, with KeY being “more formal” (as I understood it) and OpenJML offering a lower barrier to entry for the layman.

                                                                                                  Have you used it in your projects?

                                                                                                  Would you mind sharing your experience?

                                                                                                  1. 2

                                                                                                    I haven’t used it. Ran across it when I was researching something related to stack machines.

                                                                                                  1. 2

                                                                                                    Cardelli is impressive and this paper (which is kind of a little book) is a seminal work and one of the best papers I have (n)ever (finished) read(ing).

                                                                                                    Thank you for posting it!

                                                                                                    By the way, it is worth taking a look at Cardelli’s other work. He is what HN loves to call a polyglot, and his work on non-standard computation and bioinformatics/computational biology is impressive.

                                                                                                    1. 2

                                                                                                      Cardelli is amazing. My favorite work of his was Modula-3. It was a safe systems language that nicely balanced features versus simplicity, compile speed, runtime speed, safe-by-default (with unsafe where necessary), and built-in concurrency. There was some formal verification of the standard library, too. SPIN OS was a notable user.