Threads for jackdk

  1. 2

    Shipped a few updates to some Haskell packages I maintain, in particular reflex-backend-socket and reflex-libtelnet.

    1. 6

      In the sport of fencing, there’s an aesthetic concept carried over from its dueling roots known as la belle mort (the beautiful death) wherein two opponents match each other touch for touch with increasingly complex attacks until, tied and exhausted, they are one touch away from the end of the bout. One of them brings the tactical wheel back around to a simple attack and the other, like being knocked over with a feather, fails to anticipate it and loses. In a real duel, it would be considered even more beautiful if both opponents died from their wounds.

      This is all very weirdly romanticized and morbid, much in the way that death by Perl regex seems to be from some of the applause here. Somehow the author has tricked themselves into spotting a mistake only to find that their limited knowledge of character order in the Unicode table (and I do believe it’s Unicode, not ASCII) made the mistake one of their own misinterpretation.

      At the risk of moralizing, the lesson should not be (in effect): Someone’s regular expression confused me, which is their fault, so I will rewrite it in a way that makes more sense to me and preemptively tell you all to write regular expressions the same way.

      Instead, it should be something more like: Humans are too fallible to understand regular expressions reliably without aids. Anyone who implies that there’s anything obvious about regular expression syntax is either lying or deluded. I was deluded. I will therefore use diagram tools when I have to read or write regular expressions (there have been several over the years) and will generally prefer parsers or codecs, which tend to be more deliberate and less ambiguous.
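      The kind of confusion at issue is easy to check mechanically rather than by eye. In the pattern under discussion, `[,-.]` is a character range from ',' (U+002C) to '.' (U+002E), which happens to match exactly the same three characters as listing them outright. A quick Python check:

      ```python
      import re

      # "[,-.]" is a range: from ',' (0x2C) through '.' (0x2E).
      # Because '-' (0x2D) sits between them, the range matches
      # exactly the same characters as the explicit list "[,\-.]".
      range_class = re.compile(r"\d{2}[,-.]\d{2}")
      list_class = re.compile(r"\d{2}[,\-.]\d{2}")

      candidates = ["12,34", "12-34", "12.34", "12+34", "12/34"]
      for s in candidates:
          assert bool(range_class.fullmatch(s)) == bool(list_class.fullmatch(s))

      print([s for s in candidates if range_class.fullmatch(s)])
      # → ['12,34', '12-34', '12.34']
      ```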

      1. 3

        Unicode table (and I do believe it’s Unicode, not ASCII)

        It is both. Unicode is a superset of ASCII.

        1. 2

          It is both. Unicode is a superset of ASCII.

          It depends. UTF-16 is a Unicode encoding and it is not a superset of ASCII. UTF-8, OTOH, is.

          1. 3

            ASCII is both an encoding and a character set. The Unicode character set is a superset of the ASCII character set; this has no bearing on the relationship between the ASCII encoding and the various ways in which Unicode can be encoded.
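            The distinction is easy to demonstrate with a few byte comparisons in Python:

            ```python
            text = "ASCII only"

            # UTF-8 encodes the ASCII range with the same single bytes as
            # ASCII, so for pure-ASCII text the two encodings agree byte
            # for byte.
            assert text.encode("utf-8") == text.encode("ascii")

            # UTF-16 uses (at least) two bytes per code unit, so it is not
            # byte-compatible with ASCII even for ASCII-only text.
            assert text.encode("utf-16-le") != text.encode("ascii")

            print(text.encode("ascii")[:5])      # b'ASCII'
            print(text.encode("utf-16-le")[:5])  # b'A\x00S\x00C'
            ```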

        2. 2

          Well, Unicode was deliberately designed so that its first 128 code points are identical to those of ASCII, but yes.

          And I would agree that the traditional regex syntax is not very intuitive. I’d like to see something like a small DSL for constructing regexes. Instead of /\d{2}[,-.]\d{2}/ from the article, why not the much-easier-to-debug: Regex::new().digit().repeat(2).unicode_range(',', '.').digit().repeat(2).compile();
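          That `Regex::new()` builder is the commenter’s invention, not a real library, but the idea is easy to sketch. A minimal fluent builder in Python (hypothetical names throughout) that compiles down to ordinary regex syntax:

          ```python
          import re

          class RegexBuilder:
              """Sketch of a fluent regex DSL (hypothetical API).

              Each call appends a piece to the pattern; in this sketch,
              repeat() applies only to the piece added by the immediately
              preceding call.
              """

              def __init__(self):
                  self._parts = []

              def digit(self):
                  self._parts.append(r"\d")
                  return self

              def repeat(self, n):
                  # Wrap only the most recent piece in a counted repetition.
                  self._parts[-1] = f"(?:{self._parts[-1]}){{{n}}}"
                  return self

              def char_range(self, lo, hi):
                  # Assumes lo/hi need no escaping inside a class
                  # (true for ',' and '.').
                  self._parts.append(f"[{lo}-{hi}]")
                  return self

              def compile(self):
                  return re.compile("".join(self._parts))

          pattern = (RegexBuilder()
                     .digit().repeat(2)
                     .char_range(",", ".")
                     .digit().repeat(2)
                     .compile())

          print(pattern.pattern)                   # (?:\d){2}[,-.](?:\d){2}
          print(bool(pattern.fullmatch("12-34")))  # True
          ```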

          1. 5

            Does that .repeat(2) apply only to the method call immediately prior, or to the entire built regex thus far? The compact notation lacks this ambiguity.

          2. 2

            I’m not saying the mistake is in failing to memorize character order in Unicode (or ASCII, the difference being somewhat beside the point). The mistake is in implying that there is some objective level of obviousness in regex character classes or in regex as a whole.

            Now that we’ve passed 150 upvotes, I think it’s time to revisit the old saw:

            Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems.

            —Jamie Zawinski, paraphrasing a similar saying about sed

          1. 3

            Now if only they’d open-source Windows 7, haha. I wish. Nonetheless, it’s cool to see old software like this, previously a black box in essence, be open-sourced by corporates. It’d be cool if older things like 98/95 or 3.1 were open-sourced, since their architecture was superseded by the NT kernel and platform.

            1. 3

              I think this is a key point, and a key confession.

              All versions of MS-DOS are long dead, and the last release of the last branch, IBM PC DOS 7.1 (no, not 7.01) was 2003: https://sites.google.com/site/pcdosretro/doshist

              It’s dead. If MS were serious about FOSS, they could release DOS without any impact on current Windows.

              But they don’t.

              DOS doesn’t have media playback or anything. There shouldn’t be any secrets in there.

              It would help FreeDOS and maybe even an update to DR-DOS.

              So why not?

              I suspect they are ashamed of some of it.

              Win9x is equally dead, with no new releases in over 20 years. But I bet some of the codebase is still used, especially UI stuff, and some doesn’t belong to them.

              Win9x source code would really help ReactOS and I suspect MS is scared of that, too.

              IBM, equally, could release OS/2. Especially Workplace OS/2 (the PowerPC version) as there should be little to no MS code in that.

              Red Hat could help. They have relevant experience.

              But they don’t. How mysterious, huh?

              1. 10

                Any big project like this might contain licensed code.

                MSFT is a huge company with big pockets, so if they slip up, they might have to pay out a bunch of money to whatever ghouls have speculatively purchased the IP to that code. It’s rather easy to convince a jury that big bad MSFT is wantonly denying a rights-holder their fair share by open-sourcing.

                What’s the upside to open-sourcing? A bunch of nerds are happy, another bunch are vocally unhappy just because the license is not the best one du jour, and snarky Twitterers are highlighting shitty C code.

                It’s much easier to just say “we can’t risk it”.

                1. 3

                  whatever ghouls have speculatively purchased the IP to that code

                  I’m going to shamelessly steal that term for speculators in IP.

                  1. 2

                    Well, yes, but… here we are, discussing a fairly substantial app to which they just did precisely this.

                    It can be done and it does happen.

                    So I submit that it’s more important to ask if there’s anything that people can do in order to help it happen more often, rather than discuss why it doesn’t happen.

                    So, for example, we can usefully talk about things like MS-DOS, which is relatively tiny and contains very little licensed code (for instance, the antivirus and backup tools in some later versions), which could easily be excised with zero functional impact.

                    The question becomes not “why doesn’t this happen?” but the far more productive “well what else could this happen to?”

                    For instance, I have MS Word for DOS 5.5 here. It works. It’s still useful. It’s a 100% freeware download now, made so as a Y2K fix.

                    But the last ever DOS version was MS Word for DOS 6, which is a much nicer app all round. How about asking for that just as freeware? Or even their other DOS apps, such as Multiplan and Chart? How about VB for DOS, or the Quick* compilers?

                    1. 2

                      So I submit that it’s more important to ask if there’s anything that people can do in order to help it happen more often, rather than discuss why it doesn’t happen.

                      Sure, I am on board with this!

                      I keep being reminded of this: the fact that MSFT does anything with open source, much less literally owns a huge chunk of its infrastructure (GitHub), is mind-blowing to me.

                      I’m pretty sure there are people within MSFT who want to open-source DOS. 20 years ago this would have been unthinkable.

                    2. 1

                      That, and who knows if they even still have the source code?

                      1. 2

                        :-D Yes, that is very true.

                        I think WordPad happened because the source to Windows Write was lost.

                        Whereas Novell Netware 5 and 6 introduced new filesystems – which lacked the blazing, industry-beating performance of the filesystem of Netware 2/3/4 – not because the source was lost, but because nobody could maintain it any more.

                        The original Netware Filesystem was apparently an approximately half megabyte single source file in x86 assembly. Indeed it is possible that it was machine-generated assembly: Novell ported Netware from its original 68000 architecture to 80286 by writing a compiler that compiled 68000 assembler into 80286 object code.

                        When they revealed this and were told that it was impossible, they responded that they did not know that it was impossible so they just did it anyway.

                        There were only a handful of senior engineers who could, or even dared to, make any changes to this huge lump of critical code, which was both performance-critical and vital to data integrity.

                        So in the end, when all those people were near retirement, they just replaced the entire thing, leaving the old one there untouched and writing a completely new one.

                        This lost a key advantage of Netware. The old design made Netware servers extremely vulnerable to power loss: at boot, the OS read the disks’ file allocation tables into RAM and then kept them there. The data structures lived in memory only, so if the server went down without flushing them to disk, data loss was inevitable.

                        This had 2 key results:

                        1. Netware disk benchmarks were 2-3 orders of magnitude faster than any other OS in its era, the time of slow, expensive spinning disks.
                        2. Netware had a linear relationship between amount of disk space and amount of RAM needed. The more disk you added, the more RAM you had to have. In the 1980s that could mean that a server with big disks needed hundreds of megabytes of memory, at a price equivalent to a very nice car or perhaps a small house.
                        1. 2

                          I think WordPad happened because the source to Windows Write was lost.

                          It’s not. (I’m looking at it.)

                          Write was just really, really early. It was written while Windows 1.0 was being written. The version included in NT 3.x is still 16 bit; while those systems did include some 16 bit code, this is the only executable I’m seeing that was 16 bit for lack of porting.

                          Important code, in particular its formatting code, was written in 16 bit assembly.

                          WordPad was a showcase for the then-new RichEdit control, which was part of rationalizing the huge number of rich text renderers that had proliferated up to that point.

                    3. 3

                      Not the latest version, but here you go: https://github.com/microsoft/MS-DOS

                      1. 3

                        True. But those were obsolete even 25Y ago.

                        From vague memory:

                        MS-DOS 1.x was largely intended to be floppy-only.

                        MS-DOS 2.x added subdirectories.

                        MS-DOS 3 added support for hard disks (meaning 1 partition on 1 hard disk).

                        3.1 added networking.

                        3.2 added multiple hard disks.

                        3.3 added multiple partitions per hard disk (1 primary + 1 extended containing multiple logical partitions).

                        That was it for years. Compaq DOS 3.31 added FAT16 partitions above 32MB in size.

                        IBM wrote DOS 4 which standardised Compaq’s disk format.

                        Digital Research integrated 286/386 memory management into DR-DOS 5.

                        MS-DOS 5 copied that. That is the basis of the DOS support that is still in NT (32-bit Windows 10) today.

                        MS-DOS 6 added disk compression (stolen from Stacker), antivirus and some other utilities.

                        6.2 fixed a serious bug and added SCANDISK.

                        6.21 removed DoubleSpace.

                        6.22 replaced DoubleSpace with DriveSpace (stolen code rewritten).

                        That’s the last MS version.

                        Later releases were only part of Win9x.

                        IBM continued with PC-DOS 6.3, then 7.0, 7.01 and finally 7.1. That adds FAT32 support and some other nice-to-have stuff, derived from the DOS in Win95 OSR2 and later.

                        So, basically, anything before DOS 5 is hopelessly obsolete and useless today. DOS 6.22 would be vaguely worth having. Anything after that would be down to IBM, not Microsoft.

                      2. 2

                        They likely don’t own the rights to everything inside MS-DOS, or the rights are unclear. The product wasn’t made to be open source to begin with, so considerations for licensing were likely never taken. It would be a rather large undertaking to go through the code and evaluate whether they could release each piece, as they would likely have to dig up decades-old agreements with third parties, many of which are likely vague compared to today’s legal standards for software, and interpret them with regard to the code.

                        All of this for a handful of brownie points for people who think retro computing is fun? Eh. Not worth it.

                        1. 1

                          I think you may over-estimate the complexity of MS-DOS. :-D

                          Windows, yes, any 32-bit version.

                          But DOS? No. DOS doesn’t even have a CD device driver. It can’t play back any format of anything; it can display plain text, and nothing else. It doesn’t have a file manager as such (although DOS Shell, which came later, did). It has no graphics support at all. No multitasking. It doesn’t even have a mouse driver as standard. No sound drivers, no printer drivers. Very little at all, really.

                          The only external code was code stolen from STAC’s Stacker in DoubleSpace in MS-DOS 6, and that was removed again in DOS 6.2.

                          1. 2

                            DOS doesn’t even have a CD device driver

                            MSCDEX.EXE was in one of the MS-DOS 6.x versions, IIRC, but I suppose you mean that each CD-ROM drive vendor provided its own .SYS file to actually drive the unit?

                            1. 2

                              That’s right. The first time I ever saw a generic CD-drive hardware device driver – as opposed to the filesystem layer, which acted like a network redirector – was on the emergency boot disk image included in Win9x.

                              Never part of DOS itself. The SmartDrive disk cache only came in later, too, and there were tons of 3rd party replacements for that.

                              (I had some fascinating comments a while back from its developer about the IBMCACHE.SYS driver bundled with DOS 3.3 on the IBM PS/2 kit. I could post them, if that’s of interest…?)

                              1. 1

                                You should definitely post those comments somewhere - the DOS era is fading into obscurity (deservedly or sadly, depending on who you ask).

                                1. 2

                                  Interesting. OK, maybe I will make that into a new blog post, too. :-) Cheers!

                        2. 1

                          Indeed, really makes you think about the goals and morals behind projects and things within big corporations. As much as I’d like to see it happen, I doubt it ever will.

                          1. 2

                            My suspicion overall is this:

                            A lot of history in computing is being lost. Stuff that was mainstream, common knowledge early in my career is largely forgotten now.

                            This includes simple knowledge about how to operate computers… which is I think why Linux desktops (e.g. GNOME and Pantheon) just throw stuff out: because their developers don’t know how this stuff works, or why it is that way, so they think it’s unimportant.

                            Some of these big companies have stuff they’ve forgotten about. They don’t know it’s historically important. They don’t know that it’s not related to any modern product. The version numbering of Windows was intentionally obscure.

                            Example: NT. First release of NT was, logically, 1.0. But it wasn’t called that. It was called 3.1. Why?

                            Casual apparent reason: well because mainstream Windows was version 3.1 so it was in parallel.

                            This is marketing. It’s not actually true.

                            Real reason: MS had a deal in place with Novell to include some handling of Novell Netware client drive mappings. Novell gave MS a little bit of Novell’s client source code, so that Novell shares looked like other network shares, meaning peer-to-peer file shares in Windows for Workgroups.

                            (Sound weird? It wasn’t. Parallel example: 16-bit Windows (i.e. 3.x) did not include TCP/IP or any form of dial-up networking stack. Just a terminal emulator for BBS use, no networking over modems. People used a 3rd party tool for this.

                            But Internet Explorer was supported on Windows 3.1x. So MS had to write its own all-new dialup PPP stack and bundle it with 16-bit IE. Otherwise you could download the MS browser for the MS OS and it couldn’t connect, and that would look very foolish.

                            The dialup stack only did dialup and could not work over a LAN connection. The LAN connection could not do PPP or SLIP over a serial connection. Totally separate stacks.

                            Well, the dominant server OS was Netware and again the stack was totally separate, with different drivers, different protocols, everything. So Windows couldn’t make or break Novell drive mappings, and the Novell tools couldn’t make or break MS network connections.

                            Thus the need for some sharing of intellectual property and code.)

                            Novell was, very reasonably, super wary of Microsoft. MS has a history of stealing code: DoubleSpace contained stolen STAC code; Video for Windows contained stolen Apple QuickTime code; etc. etc.

                            The agreement with Novell only covered “Windows 3.1”. That is why the second, finished, working edition of Windows for Workgroups, a big version with massive changes, was called… Windows for Workgroups 3.11.

                            And that’s why NT was also called 3.1. Because that way it fell under the Novell agreement.

                            1. 3

                              16-bit Windows (i.e. 3.x) did not include TCP/IP or any form of dial-up networking stack… The dialup stack only did dialup and could not work over a LAN connection. The LAN connection could not do PPP or SLIP over a serial connection. Totally separate stacks.

                              There’s some nuance here.

                              • 1992: Windows 3.1 ships with no (relevant) network stack. In unrelated news, the Winsock 1.0 interface specification happened, but it wasn’t complete/usable.
                              • 1993: Windows for Workgroups 3.11 ships with a LAN stack. The Winsock 1.1 specification happened, which is what everybody (eventually) used.
                              • 1994: A TCP/IP module is made available for download for Windows for Workgroups 3.11. That included a Winsock implementation.
                              • 1995: Internet Explorer for Windows 3.x is released.

                              Of course, IE could have just bundled the TCP/IP stack that already existed, but that wouldn’t have provided PPP. It could have provided a PPP dialer that used the WfWg networking stack, but that wouldn’t have done anything for Windows 3.1 users.

                              As far as I can tell, the reason for two stacks is Windows 3.1 support - that version previously had zero stacks, so something needed to be added. There would also have been many WfWg users who hadn’t installed networking components.

                              There’s an alternate universe out there where the WfWg stack was backported to 3.1, with its TCP/IP add-on, and a new PPP dialer…but that’s a huge amount of code to ask people to install. Besides, the WfWg upgrade was selling for $69 at the time, mainly to businesses.

                              The real point is a 1992 release didn’t perfectly prepare for a 1995 world. Windows 95 (and NT 3.5) had unified stacks.

                              MS has a history of stealing code: DoubleSpace contained stolen STAC code; Video for Windows contained stolen Apple QuickTime code; etc.

                              The STAC issue was about patents, not copying code. The QuickTime copying allegation was against San Francisco Canyon Co, who licensed it to Intel, who licensed it to Microsoft.

                              1. 2

                                You are conflating a whole bunch of different stuff from different releases here. I don’t think that the result is an accurate summary.

                                Windows 3.1: no networking. Windows 3.11: minor bugfix release; no networking.

                                Windows for Workgroups 3.1: major rewrite; 16-bit peer-to-peer LanMan-protocol networking, over NetBEUI. No TCP/IP support IIRC.

                                Windows for Workgroups 3.11: a major new version, disguised with a 0.01 version number, with a whole new 32-bit filesystem (called VFAT and pulled from the WIP Windows Chicago, AKA Windows 95), a 32-bit network stack and more. Has 16-bit TCP/IP included, over NIC only. No dialup TCP/IP, no PPP or SLIP support.

                                32-bit TCP/IP available as an optional extra for WfWg 3.11 only. Still no dialup support.

                                IE 1.x was 32-bit only.

                                IE 2.0 was the first 16-bit release. https://en.wikipedia.org/wiki/Internet_Explorer_2

                                The dialup TCP/IP stack was provided by a 3rd party, FTP Software. https://en.wikipedia.org/wiki/FTP_Software

                                That dialup stack was dialup only and could not run over a NIC.

                                So, if you installed 16-bit IE on WfWg 3.11, which I did, in production, you ended up with effectively 2 separate IP stacks: a dialup one that could only talk to a modem on a serial port, and one in the NIC protocol stack.

                                The IE PPP stack was totally separate and independent from the WfWg TCP/IP stacks, and it did not interoperate with WfWg at all. You could not map network drives over PPP for example.

                                The real reason that there were 2 stacks is not so much separate OSes – it’s that MS licensed it in.

                                As for the STAC thing – I may as well copy my own reply from the Orange Site, as it took a while to write.

                                This is as I understand it. (It’s my blog post, BTW.)

                                https://web.archive.org/web/20070509205650/http://www.vaxxine.com/lawyers/articles/stac.html

                                https://www.latimes.com/archives/la-xpm-1994-02-24-fi-26671-story.html

                                https://tedium.co/2018/09/04/disk-compression-stacker-doublespace-history/

                                https://en.wikipedia.org/wiki/Stac_Electronics#Microsoft_lawsuit

                                MS bullied Central Point Software into providing the backup and antivirus tools, on the basis of CPS being able to sell upgrades and updates.

                                CPS went out of business.

                                https://en.wikipedia.org/wiki/Central_Point_Software

                                MS attempted to bully STAC into providing Stacker for free or cheaply. STAC fought back.

                                Geoff Chappell was involved:

                                https://www.geoffchappell.com/

                                He’s the guy that found and published the AARD code MS used to fake Windows 3.1 failing on DR-DOS.

                                https://en.wikipedia.org/wiki/AARD_code

                                As described here: https://www.zapread.com/Post/Detail/7735/aard-code-or-how-bill-gates-finished-off-the-competition/

                                Discussed on HN here: https://news.ycombinator.com/item?id=26526086

                                Especially see this nice little summary: https://news.ycombinator.com/item?id=26529937

                                It would be hard to patent this stuff that narrowly. Various companies sold disk compression; note the whole list here:

                                https://en.wikipedia.org/wiki/Disk_compression#Standalone_software

                                MS saw the code, MS copied it, STAC proved it, MS removed it (MS-DOS 6.21) and then added the functionality back (MS-DOS 6.22) after re-implementing the offending code.

                      1. 4

                        I’m impressed by the amount of effort you’ve put into bringing the community/random encounter side of a conference online as well. I’ve soured on online conferences this year, as my willingness to say “OK, I’ll buy a ticket for the cause” has dried up. And yet I find myself sad that I missed this one - it sounded like a great time.

                        The idea of playing chatroulette with random conference-goers is great, and I hope that more conferences try new things like this.

                        1. 9

                          [Lua] is an imperative language, which is an unintuitive paradigm for solving real-world problems.

                          These are the spicy takes I come to read. Agreed.

                          1. 13

                            Since computing large factorials is the only thing Haskell is any good at, let’s export that capability to Lua:

                            No language is safe.

                          1. 2

                            Cool trick. The debian/rules files in .deb source packages use #!/usr/bin/make -f as their shebang. If the -i argument to nix-shell were smart enough to split the interpreter argument, this problem would be trivially solvable.

                            I personally prefer to provide shell.nix or flake.nix, and expect people to have the correct tools on hand to run Makefile commands. This also means you have the tools on hand if you want to run commands to debug the build or whatever.
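                            The splitting itself is trivial, which is the commenter’s point: `nix-shell -i` currently treats its argument as a single program name. A sketch of the hypothetical smarter behaviour (not what nix-shell does today):

                            ```python
                            import shlex

                            def split_interpreter(spec):
                                """Split an interpreter spec like "make -f" into argv
                                pieces, the way a hypothetical smarter `nix-shell -i`
                                might."""
                                return shlex.split(spec)

                            # Today nix-shell would look for a program literally named
                            # "make -f" and fail; splitting yields a usable argv.
                            argv = split_interpreter("make -f") + ["Makefile"]
                            print(argv)  # ['make', '-f', 'Makefile']
                            ```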

                            1. 1

                              Yes. I think it could also just special-case this. Or have a ~nix-make affordance that both handles this and better advertises its presence.

                              I have used the shell.nix approach (and am not really set in my ways here), but I feel like there’s a tension between providing a shell.nix for hacking on a project vs. one for using/trialling it. This lets me have my cake and eat it too.

                              1. 1

                                Flakes can ease that devShell/testShell tension somewhat because nix develop and nix shell are separate commands. To hack on a project: nix develop, but to try it out nix shell <reference to the flake package output>.

                                1. 1

                                  That makes sense. I’ve been keeping my head in the sand on flakes, though I’m not sure how much longer I’ll try to sustain that…

                                  1. 2

                                    Fortunately there is a second good nix post today to help with that!

                                    1. 2

                                      Grin. I did save it. Now, to remember to look at my saves someday…

                            1. 7

                              Really cool project, and great to see homegrown home automation as opposed to it all running on a “cloud” which can disappear at any time.

                              1. 8

                                Maybe it’s my age showing, but I’m reluctant to deploy home-grown automation solutions because they inevitably break or others in the house get frustrated with them.

                                My garage monitoring solution is an off-the-shelf Z-wave garage door tilt sensor that sends updates to my Z-wave network, which are picked up by Indigo running on a Mac Mini. This starts a timer and sends a Pushover notification to my wife and me if the garage door is left open for more than 10 minutes.
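                                The rule described above is simple enough to sketch in code. This is not Indigo’s or Z-wave’s actual API, just the shape of the timer logic, with a stand-in `notify` callable where the Pushover call would go:

                                ```python
                                import time

                                OPEN_GRACE_SECONDS = 10 * 60  # notify after 10 minutes open

                                class GarageDoorMonitor:
                                    """Sketch of the tilt-sensor rule: start a timer on
                                    'open', cancel it on 'closed', notify on expiry."""

                                    def __init__(self, notify, clock=time.monotonic):
                                        self._notify = notify  # e.g. a Pushover call
                                        self._clock = clock
                                        self._opened_at = None

                                    def on_sensor(self, state):
                                        if state == "open" and self._opened_at is None:
                                            self._opened_at = self._clock()
                                        elif state == "closed":
                                            self._opened_at = None

                                    def tick(self):
                                        # Called periodically by the automation loop.
                                        if (self._opened_at is not None
                                                and self._clock() - self._opened_at
                                                >= OPEN_GRACE_SECONDS):
                                            self._notify("Garage door open for more than 10 minutes")
                                            self._opened_at = None  # notify once per open event

                                # Exercise the rule with a fake clock.
                                messages = []
                                fake_now = [0.0]
                                mon = GarageDoorMonitor(messages.append, clock=lambda: fake_now[0])
                                mon.on_sensor("open")
                                fake_now[0] = 601.0
                                mon.tick()
                                print(messages)  # ['Garage door open for more than 10 minutes']
                                ```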

                                1. 3

                                  Maybe it’s my age showing, but I’m reluctant to deploy home-grown automation solutions because they inevitably break or others in the house get frustrated with them.

                                  This was front of mind when building this project. I tried hard to reduce the surface area of breakage in this project by stripping the system image down a lot. Time will tell I suppose :)

                                  1. 2

                                    I’m with you there. I have the same setup, but with node-red. This particular sensor is on Zigbee, but the door controller is on Z-wave. I want local control, but for technical ownership of these problems I want as close to zero as possible.

                                    1. 2

                                      There is always a risk of external services becoming unavailable, like the Insteon shutdown. But I really dislike “face-lifts” or behaviour/feature changes in apps. It’s annoying, and most often the change doesn’t have a benefit for me – something like a rearrangement of products in your favourite supermarket. I try to use simple solutions to avoid the frustration of things not working. For my Hue lights I just use two modes (bright or dimmed), which I can set with a text message to my XMPP bot. I also use this bot to get the current outside temperature or to send me a reminder via email.
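                                      A two-mode text-command bot like that boils down to a tiny dispatch table. A sketch with hypothetical names throughout (no real XMPP or Hue library shown):

                                      ```python
                                      def set_hue_mode(mode):
                                          # Stand-in for the real Hue call; just report
                                          # what would happen.
                                          return f"lights set to {mode}"

                                      COMMANDS = {
                                          "bright": lambda: set_hue_mode("bright"),
                                          "dimmed": lambda: set_hue_mode("dimmed"),
                                      }

                                      def handle_message(text):
                                          """Map an incoming chat message to an action,
                                          with a fallback for unknown input."""
                                          handler = COMMANDS.get(text.strip().lower())
                                          return handler() if handler else "unknown command"

                                      print(handle_message("Bright"))   # lights set to bright
                                      print(handle_message("weather"))  # unknown command
                                      ```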

                                  1. 6

                                    Fennel being loadable from Lua, and Lua being written in ANSI C makes an interesting combination for anyone wanting to write bootstrappable software in a functional style.

                                    1. 4

                                      Aren’t many LISPs written in C?

                                      1. 9

                                        Indeed but Lua is insanely portable. They have taken great pains to depend only on what is guaranteed by C89: there are no endian dependencies, no assumptions about struct layout, etc.

                                        1. 5

                                          Yes but it’s unusual to have no dependencies beyond a C89 compiler and libc.

                                          1. 2

                                            I’m pretty sure Janet can be built with just a C compiler. And so can several of the numerous dialects of Scheme.

                                          2. 3

                                            Some, but most are written in Lisp, maybe or maybe not with a C core for runtime. Bootstrapping is occasionally a bit of a headache.

                                            1. 3

                                              Most LISPs are not what I’d consider functional. They’d support it, sure, and functions are first class, but they don’t emphasize immutability, bottom-up creation, and side-effect limiting like what I think of as functional languages.

                                              1. 4

                                                Absolutely true. I’ve never understood why LISP gets associated with functional programming. The syntax seems like it would lead in that direction, but in practice most LISPs are very low level.

                                                1. 2

                                                  Back when lisps were new, the fact that it was even possible to do functional programming at all was considered unique and novel. Having access to lambda was considered a very advanced feature. Nowadays if your language doesn’t even have closures, no one will really take it seriously, so whatever association lisp-in-general has with FP should probably be thought of as historical.

                                                  1. 3

                                                    When lisps were new, they frequently lacked lexical closures. See the ‘upwards funarg problem’.

                                                    1. 1

                                                      I may be a heretic here but I actually think lexical closures are bad, and a poor substitute for currying/partial application.

                                                      1. 4

                                                        You’ll hate Fennel then! Probably shouldn’t use it. Closures are more central to Fennel than any other language I know.

                                                        1. 1

                                                          More than other lisps?

                                                          1. 2

                                                            I haven’t used every lisp, but …

                                                            • much more than: Emacs Lisp, Common Lisp, Hy, LFE
                                                            • a fair bit more than: Clojure
                                                            • a little more than: Racket, Scheme (mostly due to the module system which is mostly closures)

                                                            Fennel has partial, but not currying, since it’s on a runtime where argument count for a function isn’t determinable.

                                                        2. 1

                                                          What is the difference between lexical closures and currying/partial application?

                                                          1. 2

                                                            With a lexical closure every term that is closed over now has to live as long as the execution of that closure, with no explicit annotation of anything about the sharing.

                                                            When a human reads the program they have to remember if this is a by-value or by-name binding.

                                                            If you want a type system that incorporates lifespans or ownership, then again you need some default rule to describe the relationship between outer and inner code, which the programmer has to remember and cannot alter.

                                                            By contrast, if all such sharing happens via a function call (which is to say with explicit parameter passing and type annotation) then the writing programmer can make explicit what the sharing regime is, and the reader can apply the same rules that apply when they read any other function call.

                                                            Obviously, you can do absolutely amazing things with lexical closures, but you can also do all those things with non-closing functions and partial application.

                                                            I guess I’m saying that explicit is better than implicit.

                                                            1. 2

                                                              When a human reads the program they have to remember if this is a by-value or by-name binding.

                                                              This is less an argument against closures and more an argument for making your language have consistent and sensible argument passing semantics. It’s not a problem in Fennel because the distinction between pass-by-value and pass-by-name is irrelevant.

                                                              If you want a type system that incorporates lifespans or ownership, then again you need some default rule to describe the relationship between outer and inner code, which the programmer has to remember and cannot alter.

                                                              Again, there are plenty of contexts where this is true, but none of this is the slightest bit relevant here.

                                                              1. 2

                                                                Thanks for the response, but this didn’t address the question I had.

                                                    2. 1

                                                      Yeah my knowledge is somewhat stale but at least back in the day only the very core functions were in C and the majority of the image was LISP.

                                                      There was also a lot on top of that core that used a foreign function interface to layer in functions from C-land, like UI libraries and the like.

                                                    3. 3

                                                      Fennel & Lua seem like a great duo to be aware of. I’ve also recently learned about Fabrice Bellard’s QuickJS which is interesting for similar reasons as a simple & embeddable pure C implementation of ES2020.

                                                    1. 1

                                                      I don’t really see what the big deal is. A monad is just like a burrito.

                                                      1. 2

                                                        You have it backwards - a burrito is just like a monad.

                                                      1. 2

                                                        I don’t know what I would use this for, but I want one! Looks like a fancier version of the Griffin PowerMate which I had back in the day…

                                                        1. 7

                                                          It’s perfect for controlling synthesizers. Hardware knobs are so satisfying to use compared to touch-screens or mice, but the problem is that when you load a new patch the physical knob position no longer matches the actual value of the parameter it represents. There are workarounds but they’re annoying. Endless encoders help, but they don’t show the value. A few synths use a ring of LEDs around the knob to indicate the value, but this display is much clearer, and the haptics give you a clear sense of when you’ve reached a limit.

                                                          One could make good money selling a box with a few of these in it. There’s an existing product with a ring-of-LEDs that has 4 knobs and sells for $900, and it’s junk compared to this.

                                                          The sad thing is that what you’d really want is 20 or more knobs like this, to control a whole synth, but that would get very pricey…

                                                          1. 1

                                                            There’s an existing product with a ring-of-LEDs that has 4 knobs and sells for $900, and it’s junk compared to this.

                                                            Monome Arc was also the first thing I thought about ;)

                                                          2. 1

                                                            I have one of those! I picked it up at a junk sale a while back, and have no idea how to make it drive anything. Any advice?

                                                            1. 2

                                                              There was software for it on OS X that I used to configure it back in the day, but I think it stopped working at some point (32-bit only, maybe?). I don’t know how it presents itself on the USB bus; maybe it’s a HID device that you can listen to events from and map them to actions somehow…

                                                              Edit: seems like it is indeed a HID device and there are various open source projects that interface with it. Should be possible to find something ready to use, or at least enough info to patch something together.

                                                          1. 2

                                                            And then when the type syntax inevitably evolves, we’ll need a babel equivalent for type syntax. This is absurd.

                                                            1. 1

                                                              I used https://learn.cantrill.io/ , and passed my Certified Solutions Architect (Associate) late last year. Even if certifications aren’t your main goal, the SAA-C02 syllabus covers core AWS services to a good depth, and a lot of other services at a “what does this do, and what’s its high-level architecture?” level.

                                                              1. 32

                                                                 That logo is generic and forgettable as hell. The Mozilla lizard was way better. People on the whole don’t have taste. I have nothing important to note; it just feels so commercial.

                                                                1. 3

                                                                  In my opinion, we only get to complain about something becoming commercial if we’ve donated time or effort to the project. Commercial things survive, and absent contributions, important things should do what it takes to survive.

                                                                  1. 11

                                                                    I think people should feel free to constructively criticize regardless of whether they’ve contributed. Imposing a barrier to critique doesn’t really help.

                                                                    1. 1

                                                                      Nah, that part was voted on by the users. We screwed it up, not them. So I’m going to complain.

                                                                      1. 1

                                                                        Unfortunately, commercial things do not necessarily survive. I am not a web developer, so I have little need for MDN. I do use a web browser every day, however, and Firefox is the last serious bulwark against a Chrome monoculture. Mozilla should stop shuffling deckchairs around and let us fund the browser.

                                                                      2. 3

                                                                        If the logo being mediocre is all there is to complain about in it, they’ve done pretty well overall. :)

                                                                        1. 1

                                                                           I can get with that. It was the only thing that struck me as bad in the entire redesign.

                                                                      1. 3

                                                                        The only module names I ever abbreviate to T are Data.Text, and occasionally some of its submodules (Data.Text.Encoding, Data.Text.IO, …).

                                                                        1. 1

                                                                          Haskell is great, except for all the monad transformer stuff. That’s all an absolute nightmare. At this point, I just don’t really see a reason to use it over Rust for writing practical (i.e. non-research) software. Rust has the most important pieces from Haskell.

                                                                          1. 12

                                                                            My experience with monad transformers is that they can offer a lot of practical value. There’s a little bit of a learning curve, but I think that in practice it’s a one-time cost. Once you understand how they work, they don’t tend to add a lot of cognitive burden to understanding the code, and can often make it easier to work with.

                                                                            I do like some of the work people are doing with effects systems to address the down sides of monad transformers, and eventually we might move on, but for a lot of day to day work it’s just very common to end up doing a lot of related things that all need to, e.g. share some common information, might fail in the same way, and need to be able to do some particular set of side effects. A canonical example would be something like accessing a database, where you might have many functions that all need to access a connection pool, talk to the database, and report the same sorts of database related errors. Monad transformers give you a really practically effective way to describe those kinds of things and build tooling to work with them.
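                                                                             To make the database example concrete, here’s a minimal mtl-style sketch. The names (Env, lookupUser, the String standing in for a connection pool) are all invented for illustration; a real application would use a proper pool and connection type:

```haskell
{-# LANGUAGE FlexibleContexts #-}
import Control.Monad.Reader (MonadReader, ReaderT, asks, runReaderT)

-- Hypothetical environment carrying a connection pool
-- (a String stands in for a real pool type).
newtype Env = Env { envPool :: String }

-- Any function needing database access says so in its constraint,
-- so the requirement is visible in the type.
lookupUser :: MonadReader Env m => Int -> m String
lookupUser uid = do
  pool <- asks envPool
  pure ("user " ++ show uid ++ " via " ++ pool)

-- Callers discharge the constraint once, at the edge of the program.
runApp :: ReaderT Env IO a -> IO a
runApp app = runReaderT app (Env "pool-1")
```

Because lookupUser only demands MonadReader Env, a test harness can run it in a different monad against mocked-up data, which is exactly the swap-the-effect trick mentioned elsewhere in this thread.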

                                                                            1. 8

                                                                              What’s wrong with “all the monad transformer stuff”?

                                                                              1. 3

                                                                                Monads are mostly complexity for the sake of being able to imagine that your functions are “pure”. I have not found any benefits for such an ability, besides purely philosophical, at least in the way most functional programming languages are built. There are better ways, that can forgo the need for imagination, but the functional programming crowd doesn’t seem to find them.

                                                                                1. 15

                                                                                  Monads are for code reuse, they’re absolutely, completely, not at all about purity.

                                                                                  1. 3

                                                                                     I have not found them any good for that use case either. The code I’ve seen usually ends up as a recursive monad soup that you need to write even more code to untangle. They can work well in some limited contexts, but in those contexts other programming constructs often work just as well, in my opinion. Limited code reuse in general is a problem with many half-assed solutions that only work in limited contexts: inheritance, DSLs, composition (the OOP kind), and so on. Monads are just another one of them and, honestly, they are just as easy to overuse as the other solutions, if not easier.

                                                                                    1. 9

                                                                                      I do not understand this perspective at all. traverse alone saves me an astonishing amount of work compared to reimplementing it for every data structure/applicative pair.
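                                                                                       As one small illustration of the reuse, the list-walking logic is written once (in traverse) and works for any Applicative; the example functions below are made up for the sake of the sketch:

```haskell
-- Two effectful functions with different effect types.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

halveOrExplain :: Int -> Either String Int
halveOrExplain n =
  if even n then Right (n `div` 2) else Left (show n ++ " is odd")

-- The same traverse threads either one through a list.
halveAll :: [Int] -> Maybe [Int]
halveAll = traverse halve

halveAll' :: [Int] -> Either String [Int]
halveAll' = traverse halveOrExplain
```

halveAll [2,4,6] gives Just [1,2,3], halveAll [2,3] gives Nothing, and halveAll' [2,3] gives Left "3 is odd"; none of the structure-walking code was written twice.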

                                                                                      1. 2

                                                                                        The reason you need traverse at all is monads. It’s all complexity for the sake of complexity in my eyes.

                                                                                        1. 5

                                                                                          Not at all. traverse works for a far wider class of things than just monads. And even if a language didn’t have the notion of monad it would still benefit from a general interface to iterate over a collection. That’s traverse.

                                                                                          1. 3

                                                                                            general interface to iterate over a collection

                                                                                            So, a for loop? A map() in basically any language with first-class functions?

                                                                                             Anyway, my comment about needing traverse at all was in response to needing to reimplement it for many different data structures. The problem I see in that is that you get all of those data structures because of monads. There’s a lot less need for such a function when you don’t have monads.

                                                                                          2. 3

                                                                                            The reason you need traverse at all is monads. It’s all complexity for the sake of complexity in my eyes.

                                                                                            How would you write, say,

                                                                                            traverseMaybeList :: (a -> Maybe b) -> [a] -> Maybe [b]
                                                                                            traverseEitherBoolSet :: (a -> Either Bool b) -> Set a -> Either Bool (Set b)
                                                                                            

                                                                                            in a unified way in your language of choice?
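                                                                                             (For reference, in Haskell both are specializations of the one traverse. Set isn’t Traversable, so the second detours through a list and, as written here, picks up an extra Ord constraint not in the signature above:)

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

traverseMaybeList :: (a -> Maybe b) -> [a] -> Maybe [b]
traverseMaybeList = traverse

-- Set has no Traversable instance (its operations need Ord),
-- so we convert through a list and rebuild the set afterwards.
traverseEitherBoolSet :: Ord b => (a -> Either Bool b) -> Set a -> Either Bool (Set b)
traverseEitherBoolSet f = fmap Set.fromList . traverse f . Set.toList
```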

                                                                                            1. 3

                                                                                               On a good day, I’d avoid the Maybe and Either types that are used for error handling, and just have good old exceptions and no need for any traversal. On a bad day, I’d probably have to use traverse, because Maybe and Either are monads, and create this problem in the first place.

                                                                                              1. 1

                                                                                                I think if you prefer exceptions to Maybe/Either then you’re sort of fundamentally at odds with Haskell. Not saying this in a judgmental way, just that “exceptions are better than optional/error types” is not how Haskell thinks about things. Same with Rust.

                                                                                                Though, even in Python I typically write functions that may return None over functions that throw an exception.

                                                                                                1. 1

                                                                                                  I think if you prefer exceptions to Maybe/Either then you’re sort of fundamentally at odds with Haskell.

                                                                                                  I’m pretty sure by just disliking monads I’m at odds with Haskell as it currently is. But do note, that not all exceptions are crafted equally. Take Zig for example, where errors functionally behave like traditional exceptions, but are really more similar to error types in implementation. A lot nicer than both usual exceptions, and optional/error types in my opinion.

                                                                                                  Though, even in Python I typically write functions that may return None over functions that throw an exception.

                                                                                                   It really depends on whether the function makes sense returning None. If you’re trying to get a key from a cache and the network fails, returning None is fine. If you’re trying to check whether a nonce has already been used and the network fails, returning None is probably the wrong thing to do. Exceptions are a great way to force corrective behavior from the caller. Optional types have none of that.

                                                                                                  1. 1

                                                                                                    I don’t understand why you say Zig error types “behave like traditional exceptions”. My understanding is that if I have a function that returns a !u32, I can’t pass that value into a function that takes a u32.

                                                                                                    Similarly, I don’t understand the idea that exceptions force correctional behavior. If I have a function that throws an exception, then I can just… not handle it. If I have a function that returns an error type, then I have to specify how to handle the error to get the value.

                                                                                                    1. 1

                                                                                                       Yes, but essentially you are either handling each error at the call site or, more often, bubbling the error upwards like an exception. You end up with what I would call forcibly handled exceptions.

                                                                                                      Not correcting some behavior leads to your program dying outright with exceptions. If you handle the exception, I’d say you are immediately encouraged to write code that corrects it, just because of how the handling is written. With functions that return an error type, it’s very easy to just endlessly bubble the error value upwards, without handling it.

                                                                                                      1. 1

                                                                                                        With functions that return an error type, it’s very easy to just endlessly bubble the error value upwards, without handling it.

                                                                                                         If I have an Optional Int and I want to pass it to a function that takes an Int, I have to handle it then and there. If I have an Optional Int and my function signature says I return an Int, I must handle it within that function. The optional type can’t escape, whereas exceptions can and do.

                                                                                              2. 2

                                                                                                 I’d argue that these specific types are actually not very useful. If any error occurs, you don’t get _any_ results? In my experience it’s more likely that we need to partition the successful results and log warnings for the failures. The problem with these rigidly-defined functions is that they don’t account for real-world scenarios, and you just end up writing something by hand.

                                                                                                1. 1

                                                                                                   Haskell’s standard library is anything but rigid in my opinion. Take the specific case of “something that contains a bunch of (Maybe item)”.

                                                                                                   • If you want a list of all items inside Just, but only if there is no Nothing anywhere, you write toList <$> sequenceA l.
                                                                                                  • If you want a list of all items inside Just, you can write fold $ toList <$> l.
                                                                                                  • If you want just the first item, if any, you write getFirst $ fold $ First <$> l
                                                                                                  • If you want the last item, if any, you can write getLast $ fold $ Last <$> l

                                                                                                  These are specific to Maybe, especially the First and Last, I’ll give you that. But functions from the stdlib can be snapped together in a huge number of ways to achieve a LOT of things succinctly and generally.
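                                                                                                   Spelled out as a runnable sketch (writing the all-or-nothing case with sequenceA, and using a small made-up list):

```haskell
import Data.Foldable (fold, toList)
import Data.Monoid (First (..), Last (..))

items :: [Maybe Int]
items = [Just 1, Nothing, Just 3]

-- All items, but only if nothing is missing.
allOrNothing :: Maybe [Int]
allOrNothing = toList <$> sequenceA items      -- Nothing, since one entry fails

-- All items that are present.
presentItems :: [Int]
presentItems = fold (toList <$> items)         -- [1, 3]

-- The first present item, if any.
firstItem :: Maybe Int
firstItem = getFirst (fold (First <$> items))  -- Just 1

-- The last present item, if any.
lastItem :: Maybe Int
lastItem = getLast (fold (Last <$> items))     -- Just 3
```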

                                                                                                  1. 1

                                                                                                    OK, this doesn’t actually answer my question. Say I have a stream of incoming data. What I really want to do is validate the data, log warnings for the ones that fail, and stream out the ones that succeed.

                                                                                                    1. 2

                                                                                                      Then use an API that’s designed for streaming processing of data, for example https://hackage.haskell.org/package/streamly

                                                                                                  2. 1

                                                                                                    I wrote up a few different operations on “collections of Maybe” or “collections of Either” in Haskell. The total number of lines of code required to express these operations using the standard Haskell library was around 12, including some superfluous type signatures. They cover all the cases in my other comment, as well as the “partitioning” you mention in your post. Here’s a gist:

                                                                                                    https://gist.github.com/DanilaFe/71677af85b8d0b712ba2d418259f31dd

                                                                                        2. 9

                                                                                          Monads are mostly complexity for the sake of being able to imagine that your functions are “pure”.

                                                                                           That’s not what they’re for. Monad transformers (well, the transformers in the mtl with the nice typeclasses) in particular let you clearly define what effects each piece of code has. This ends up being pretty useful: if you have some sort of monad for, say, SQL server access, you can then see from a given function’s type whether it does any SQL transactions. If you attempt to do SQL where you’re not supposed to, you get a type error warning you about it. I think that’s pretty convenient. There are lots of examples of this. If you’re using the typeclasses, you can even change the effect! Instead of reading from an actual db, you could hand off mocked-up data if you use one monad and real db info with the other. This is pretty neat stuff, and it’s one of my favorite features of Haskell.

                                                                                          I agree that they might not always be super clear (and monad transformers start to have pretty shit perf), but they’re not just for intellectual pleasure.

                                                                                          1. 1

                                                                                            Monad transformers (well, the transformers in the mtl with the nice typeclasses) in particular let you clearly define what effects each piece of code has.

                                                                                             Slight correction: they usually let you define what classes of effects a piece of code has. This can range in abstraction from a very specific SQLSelect to an overly broad, and not at all descriptive, IO. One problem often seen with this is that methods frequently combine several different effects to achieve their result, which leads to either an obnoxiously large function signature, or merging all the effects under a more generic one: the more useful SQL if you’re lucky and the method only touches SQL, or the frankly useless IO. In both cases you lose a big part of the usefulness.

                                                                                             But the thing is, you don’t need monads to achieve any of that anyway. If you represent external state (which the effects are meant to abstract away) as an input to a function, and the function outputs that same external state along with the commands it wants to run, a runtime can perform the IO and bring you back the results on the next call. This might be somewhat counter-intuitive, as people are used to their main() function running only once, but it leads to another way of thinking, one where you are more aware of what state you carry and which external systems each function can interface with, because it lives right in the function signature, with only the option of hiding it inside a type to group several of them.

                                                                                             This style would also naturally promote IO pipelining, since you easily can (and probably want to) submit more than one IO request at once. You can build the IO runtime on anything you want, be it io_uring or a weird concoction of cloud technologies, as long as you provide your program with the same interface. It also brings the same testing possibilities, even slightly more, as golden-data tests become ridiculously easy. More impressively, it enables relatively easy time-travel debugging: you only need to capture the inputs to the main function on every call to accurately replay the whole computation, and in part to check some fixes without even re-doing the IO. I think this is a better direction for functional programming, but I myself don’t have the time or motivation to push it that way.

                                                                                            1. 2

                                                                                              Classes of effects instead of effects is a distinction without a difference, right? I can define a monad typeclass that only puts things in a state and a monad typeclass that only takes things out instead of using StateT (in fact they exist and are called Reader and Writer), and I can get as granular as I’d like with it. The amount of specificity you want is entirely up to you.

                                                                                              I agree that the IO monad is pretty frustratingly broad. You’re also correct that you don’t need monads to do this sort of thing. I’m having a little bit of trouble understanding your replacement. You mean a function with external state a and pure inputs b with result c should have the type a -> b -> (c, a), right? What would you do when you want to chain this function with another one?

                                                                                              1. 1

                                                                                                No. Your main function’s signature looks like a -> a. A runtime calls it again and again, taking the actions the function specified in the output type that contains the external state objects, performing them, and putting the results back into those same objects. Your other functions grow in a similar manner: for example, a function that takes an external resource a and a pure input b to submit a write request would look like a -> b -> a. An important thing to note is that it only submits a request; it doesn’t perform it yet. The request would only be performed once the main function returns and the runtime takes over. As such, you couldn’t do reading as trivially as a -> b -> (a, c), because you cannot read the data while “your” code is running. This isn’t great for usability, but that can in large part be solved by using continuations.

                                                                                                As a side note, I don’t particularly enjoy chaining. It’s another solution that is only needed because monads make it appear that the function isn’t performing IO, when it’s more useful for you to think that it does. With continuations, you could just make this look like several function calls in a row, with plain old exceptions to handle errors.

                                                                                                1. 2

                                                                                                  This seems far more complex than using monads to me, but different people think in different ways. I don’t know what you mean when you say you don’t enjoy chaining: you don’t like sequencing code?

                                                                                                  1. 1

                                                                                                    I like sequencing code, but I don’t enjoy sequencing code with monads, since monads force the sequencing of code they touch to be different, just because they are monads.

                                                                                                    1. 2

                                                                                                      Can you provide an example of monads changing the way you sequence code? That’s one of the major benefits of do-notation in my mind: you can write code that looks like it is executing sequentially.

                                                                                                      1. 2

                                                                                                        The do-notation is the problem. Why should sequencing functions that do IO be different from sequencing functions that don’t? IO is something normal that a program does, and functional programming just makes it weird, because it likes some concept of ‘purity’, and IO is explicitly removed from that when the chosen solution is monads.

                                                                                                        1. 2

                                                                                                          Because functions that do IO have to have an order in which they execute. The machinery of a monad lets you represent this. I don’t care which side of (2+2) + (2+2) executes first, but I do care that I read a file before I try to display its content on screen.
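                                                                                                          For example, in do-notation the bound result itself carries the dependency:

```haskell
-- The bound result (contents) creates a real data dependency: the read
-- must complete before the print can happen.
displayFile :: FilePath -> IO ()
displayFile path = do
  contents <- readFile path
  putStrLn contents
-- Desugars to: readFile path >>= putStrLn
```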

                                                                                                          1. 1

                                                                                                            In the general case, you don’t care about the order in which the IO executes as long as you don’t have any dependencies between the operations. multiply(add(2, 2), 2) will always perform addition first and multiplication second, just like displayData(readFile(file)) will always read the file first and display the data second. The compiler will understand this without needing to distinguish the functions that do IO from those that don’t. In the few cases where you don’t have any fully direct data dependencies, but still need to perform IO in specific order, you then may use specific barriers. And even with them, it would still feel more natural to me.

                                                                                                            1. 2

                                                                                                              In the general case, it’s impossible to determine which code might depend on which. A contrived counterexample: write to a socket of program A (which itself writes to a socket of program B), and then write directly to a socket of program B. The order here matters, but no compiler would be able to determine that.

                                                                                                              In the few cases where you don’t have any fully direct data dependencies, but still need to perform IO in specific order, you then may use specific barriers.

                                                                                                              Yes, these specific barriers are exactly what monads provide.

                                                                                                              Can you provide an example of a monad restructuring how you want to sequence something? I’m very open to seeing how they fall short, I haven’t written Haskell in a long time (changed jobs) but I miss the structure monads give very often.

                                                                                                              1. 3

                                                                                                                Of course no compiler can determine all dependencies between IO. In other languages you don’t need to worry much about it, because in other languages the evaluation order is well defined. Haskell, though, forgoes such a definition, and along with the benefits that brings, it also brings problems, namely the inability to easily order unrelated function evaluation. There are seq and pseq, but they are frowned upon because they break monads :). So the way the IO monad works is by introducing artificial data dependencies between successive actions. This feels quite hacky to me. But do note that this is mostly a problem with Haskell, and many other functional programming languages that are full of monads could get rid of them without much change in the language semantics.
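                                                                                                                For what it’s worth, a toy model of that threading (with an ordinary Int standing in for GHC’s opaque world token) shows how the dependency fixes the order:

```haskell
-- A toy model of the "artificial dependency": an IO-like type as a
-- function threading a world token, so each action's input depends
-- on the previous action's output. The Int is only here so we can
-- observe the threading; GHC's real token carries no data.
newtype World = World Int
newtype MyIO a = MyIO { runMyIO :: World -> (World, a) }

bindIO :: MyIO a -> (a -> MyIO b) -> MyIO b
bindIO (MyIO f) k = MyIO $ \w ->
  let (w', x) = f w      -- the second action cannot start until it
      MyIO g  = k x      -- receives w', which is what fixes the order
  in  g w'

-- A sample "action": returns the current token value and bumps it.
tick :: MyIO Int
tick = MyIO (\(World n) -> (World (n + 1), n))
```

Running runMyIO (bindIO tick (\_ -> tick)) (World 0) yields (World 2, 1): the token forces the first tick to complete before the second. GHC’s actual IO type is essentially a function State# RealWorld -> (# State# RealWorld, a #) under the hood.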

                                                                                                                Monads don’t change how I sequence something. But they do greatly annoy me, by needing special handling. It’s like mixing async and non-async code in other languages - either you go fully one way, or fully the other. Mixing both does not work well.

                                                                                                                1. 2

                                                                                                                  seq and pseq are definitely not frowned upon!

                                                                                                                  1. 1

                                                                                                                    Monads don’t change how I sequence something.

                                                                                                                    Then why did you say:

                                                                                                                    I like sequencing code, but I don’t enjoy sequencing code with monads, since monads force the sequencing of code they touch to be different, just because they are monads.

                                                                                                                    They also don’t need special handling. Do-notation is syntax sugar, but there’s nothing in Haskell that privileges monads outside of the standard library deciding to use them for certain things. They are just a typeclass, the same as any other.
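                                                                                                                    To illustrate, a user-defined equivalent (names here are made up) needs nothing beyond an ordinary typeclass:

```haskell
-- A hand-rolled monad class and instance: no language support needed
-- beyond typeclasses. (ret and bind play the roles of return and >>=.)
class MyMonad m where
  ret  :: a -> m a
  bind :: m a -> (a -> m b) -> m b

instance MyMonad Maybe where
  ret = Just
  bind Nothing  _ = Nothing
  bind (Just x) k = k x
```

bind (Just 2) (\x -> ret (x + 1)) evaluates to Just 3; do-notation is mechanical sugar over exactly this shape of bind.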

                                                                                                  2. 1

                                                                                                    As a response to your edit: no, reading is still a class of actions. You can read a single byte, or you can read a million bytes, and those two are very different actions in my mind. Trying to represent such granularity in monads is difficult, and honestly, a waste of time, since you don’t need such granular control anyways. But this is mostly disagreements in definition at this point, so no need to discuss this further I think.

                                                                                                  3. 1

                                                                                                    Yeah, linear IO is a major motivator for my work on Dawn.

                                                                                                2. 8

                                                                                                  This does not match my experience using monads at all.

                                                                                                  1. 1

                                                                                                    Monads arise naturally from adjoint functors. Perhaps they are not obvious, but that does not mean that they are artificially complex.

                                                                                                    It sounds like you vaguely disapprove of effects-oriented programming, but you need to offer concrete alternatives. Handwaves are not sufficient here, given that most discussion about monads comes from people who do not grok them.

                                                                                                    1. 3

                                                                                                      Monads arise naturally from adjoint functors. Perhaps they are not obvious, but that does not mean that they are artificially complex.

                                                                                                      Such technobabble explanations are why I try to distance myself from functional programming. While technically correct, they offer no insight for people who do not already understand what monads are.

                                                                                                      It sounds like you vaguely disapprove of effects-oriented programming, but you need to offer concrete alternatives. Handwaves are not sufficient here, given that most discussion about monads comes from people who do not grok them.

                                                                                                      I do, in this comment. It might not be the most understandable, it might not have the strong mathematical foundations, and it definitely is wildly different as to how people usually think about programs. But I still think that it can offer better understanding of the effects your program does, besides giving a bunch of other advantages.

                                                                                                      Also, I don’t disapprove of effects-oriented programming, it’s just that monads are a terrible way of doing it. I feel like there are a lot of better ways of making sure effects are explicit, my suggestion being one of them, effect handlers being the other one about which I learned recently.

                                                                                                      1. 2

                                                                                                        I looked this up, and it seems that the idea that every monad arises from an adjunction only holds if you define a category based on that monad first. Isn’t this totally circular?

                                                                                                        1. 3

                                                                                                          In many cases, the algebras for a monad will be things we already cared about. In fact, that was sort of the original point of monads – a way of abstractly capturing and studying a wide variety of algebraic theories together.

                                                                                                          For example, if you’re familiar with the list monad, its algebras are simply monoids, and so its Eilenberg-Moore category is (at least equivalent to) the category of monoids.

                                                                                                          There are other monads whose algebras would be groups, or rings, or vector spaces over a field K, or many others.

                                                                                                          But I think Corbin was probably not referring to the way in which every monad comes from at least one adjunction (or two, if you also throw in the one involving the Kleisli category), but rather that if you already have adjunctions hanging around, you get a monad (and a comonad) from each of them in a very natural way. If you’re familiar with order theory by any chance, this is a direct generalisation of how you get a closure operator from a Galois connection between partially ordered sets.

                                                                                                          This entire discussion is almost completely irrelevant to someone using monads to get programming tasks done though. As an abstraction of a common pattern that has been showing up in combinator libraries since the early days of functional programming, you can fully understand everything you need to know about it without any of this mathematical backstory.

                                                                                                          Why we recognise the monad structure in programming is mostly not really to be able to apply mathematical results – maybe occasionally there will be a spark of inspiration from that direction, but largely, it’s just to save writing some common code over and over for many libraries that happen to have the same structure. Maybe monad transformers take that an additional step, letting us build the combinator libraries a bit more quickly by composing together some building blocks, but these would all still be very natural ideas to have if you were just sitting down and writing functional programs and thinking about how to clean up some repetitive patterns. It would still be a good idea even if the mathematicians hadn’t got to it first.

                                                                                                    2. 2

                                                                                                      It’s completely opaque to me how to get them to do what I need them to do. I found myself randomly trying things, hoping something would work. And this is for someone who found Rust lifetimes to be quite straightforward, even before NLL.

                                                                                                  1. 2

                                                                                                    In Nix’s case, however, you’re building the package from source.

                                                                                                    So Nix is building everything from source? Like Gentoo?

                                                                                                    1. 8

                                                                                                      Because you know the exact hash of each derivation’s inputs, you can query a cache for binary packages. This is cache.nixos.org by default, but you can add your own.

                                                                                                      1. 6

                                                                                                        So, if the question is about whether ~Nix is building it from source: yes (roughly). If the question is about whether you have to wait for it to build from source: it depends :)

                                                                                                        1. 7

                                                                                                          So, if the question is about whether ~Nix is building it from source:

                                                                                                          It is always possible to build from source.

                                                                                                          If the question is about whether you have to wait for it to build from source: it depends :)

                                                                                                          For almost all cases for most people: no.

                                                                                                          1. 4

                                                                                                            If the question is about whether you have to wait for it to build from source: it depends :)

                                                                                                            For almost all cases for most people: no.

                                                                                                            To expand on this, setting up your own cache using Cachix is actually really easy. Two steps in GitHub Actions (cachix/install-nix-action and cachix/cachix-action) and about four lines in GitLab CI. (I don’t think I’m allowed to post links yet.)

                                                                                                            1. 1

                                                                                                              Cachix, while pretty convenient, is quite pricey for simple use cases, no?

                                                                                                              1. 2

                                                                                                                From their pricing page:

                                                                                                                Users have a free 5 GB limit for open source projects.

                                                                                                                I’m just using that, and it’s plenty for a few small projects. Only the derivations which are not available from nixpkgs are uploaded, after all, so you’d need to be doing something fairly complex (or building a bunch of packages differently from nixpkgs) to run into the limit.

                                                                                                    1. 7

                                                                                                      Better Resource Management

                                                                                                      Certainly better than in many other languages, but things like the bracket function (the “default” version of which is broken due to async exceptions lol oops) are rather “meh” compared to RAII-style ownership. Because nothing forces you to avoid resource leaks… well, now Linear Haskell can do that, but being a newly retrofitted extension it’s not gonna be instantly pervasive.
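                                                                                                      For reference, the bracket pattern under discussion looks like this:

```haskell
import Control.Exception (bracket)
import System.IO (IOMode (ReadMode), hClose, hGetLine, openFile)

-- bracket pairs acquisition with release: hClose runs even if the
-- middle action throws. But nothing in the types stops you from
-- skipping bracket and leaking the handle, which is the complaint above.
firstLine :: FilePath -> IO String
firstLine path =
  bracket (openFile path ReadMode)   -- acquire
          hClose                     -- release
          hGetLine                   -- use
```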


                                                                                                      TBH, Haskell is in kind of an awkward spot these days:

                                                                                                      • if you want to play with type-level magic (and proofs) Idris is a much better fit, all the type-level programming that was done with awkward hacks in Haskell becomes much easier in a fully dependent system;
                                                                                                      • meanwhile if you want to get practical things done quickly and safely, Rust is absolutely kicking ass on all fronts and it’s really hard to pick anything else!
                                                                                                      1. 10

                                                                                                        I love Rust, but I don’t think it’s a clear winner over Haskell personally.

                                                                                                        In Rust, the affine types and lack of garbage collection are really great when I’m working on low-level code. As someone who has written a lot of C and a lot of Haskell, Rust undeniably hits a lot of my requirements. For a lot of day-to-day work though, I still find that I’m much more likely to pick up Haskell. Little things like higher-kinded types and GADTs end up being a big force multiplier for me being able to build the sorts of APIs that work best for me. I also really value laziness and the syntactic niceties like having universal currying when I’m working in Haskell.
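                                                                                                        As one example of the kind of API this enables, the classic GADT-indexed expression type rules out ill-typed terms at compile time:

```haskell
{-# LANGUAGE GADTs #-}

-- The index on Expr makes ill-typed expressions (e.g. adding a Bool)
-- unrepresentable, so no runtime type checks are needed.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

-- The return type refines per constructor, so eval is total and untagged.
eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e
```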

                                                                                                        None of that is anything negative about Rust. I really admire what the Rust community has done. If anything, I think rustaceans are in a great position to leverage all of the things they’ve learned from Rust so that they can more quickly and easily dip a toe into Haskell and see if it might be useful to them sometimes. In the end I don’t think we have to view each other as competitors, so much as two languages that sit in somewhat different spots of the ecosystem that can learn and benefit one another.

                                                                                                        1. 7

                                                                                                          I think rustaceans are in a great position to leverage all of the things they’ve learned from Rust so that they can more quickly and easily dip a toe into Haskell and see if it might be useful to them sometimes.

                                                                                                          This is exactly where I am in my PL journey (outside of work). I’ve been writing Rust for 5 years and it’s a great language for all the reasons you mentioned and more. I also found Rust a nice intro to writing real-world code that is more FP than OO (i.e. structs/records and traits/typeclasses instead of classes and interfaces) while still having static types (I love Lisps but I tend to work better with types). Now I’m getting into Haskell and so far the process has been fairly smooth and very enlightening. The type system is far more expressive, and I can see myself being highly productive in Haskell (far more than in Rust) by not having to worry about memory management and some of the more restrictive aspects of the Rust type system. If the learning process continues I wouldn’t be surprised if Haskell becomes my “go-to” for most problems, but Rust is still there for when I care more about performance and resource usage.

                                                                                                          1. 2

                                                                                                            It will be interesting to see how attitudes towards resource collection shift with the advent of linear types in Haskell.

                                                                                                          2. 1

                                                                                                            Yes, my opinion is that Rust has successfully stolen the best ideas from Haskell and made them more palatable to a mass audience.

                                                                                                          1. 14

                                                                                                            I have often said that Postel’s Law only really makes sense in a slow-moving world of software, where updates are performed by posting tapes around. These days we can and should demand stricter conformance. I’m glad someone has written this up as an internet draft.

                                                                                                            1. 4

                                                                                                              The output of any scientific paper that uses a computer program should be buildable from a Nix expression.

                                                                                                              1. 8

                                                                                                                Nix without root access didn’t seem very user-friendly last time I checked.

                                                                                                                But more importantly, how would Nix have helped with that GPU driver compatibility problem?

                                                                                                                1. 1

                                                                                                                  It’s hard to control for dependencies on super closed platforms like GPUs. You’d think we’d have more control with open-source technologies, but things like dynamic shared objects wrestle away control in a similar way. Web software and online platforms have always provided very durable solutions, but they require surrounding control of a different kind. Solutions like Cosmopolitan Libc or Nix can help you create something as durable as the Super Mario Bros ROM while you remain in full control, but they only target a subset of the software that’s in play. There’s no silver bullet for Ozymandias, because the root challenge is human cooperation, not technology.

                                                                                                                2. 3

                                                                                                                  I feel like a better approach would be to have “archival” languages with defined semantics like Standard ML (or we could probably define something for Oberon or the like). Then we have tooling for translating those into whatever the tool chain du jour is. Write your core logic in the archival form, and then glue it together in whatever.

                                                                                                                  1. 3

                                                                                                                    Here’s a similar technique that was used before efficient and portable high-level languages.

                                                                                                    Some older software systems were implemented in application-specific macro languages. Porting them to a new platform is done by defining macros that map to assembly language or a high-level language.

                                                                                                                  2. 1

                                                                                                                    When the broader IT industry solves this problem, I’m sure academia will follow.

                                                                                                                  1. 3

                                                                                                                    I would love to learn a Scheme but man is it daunting.

                                                                                                                    1. 8

                                                                                                                      R7RS is simultaneously much larger and smaller than R5RS (which is the version I’m most familiar with). The addition of a module system made things more, well, modular, but might increase perceived complexity.

                                                                                                                      Scheme is nice, though. To me, Common Lisp is big in a lot of the wrong places and too small in a lot of the others, and has one too many namespaces. Scheme has the right number of namespaces and hygienic macros, and it is consistently small.

                                                                                                                      If you wanna do Lisp, Scheme might be a good choice. The multiplicity of implementations can be an issue, but there are some really high-quality ones out there. I haven’t played with it in years but I remember Chicken being especially beautiful (and I still remember the paper describing how Chicken does thunks and garbage collection, though it’s probably long out of date by now).

                                                                                                                      1. 9

                                                                                                                        paper describing how Chicken does thunks and garbage collection

                                                                                                                        https://www.more-magic.net/posts/internals-gc.html ?

                                                                                                                        1. 1

                                                                                                                          Yep, that’s it.

                                                                                                                          1. 7

                                                                                                                            I wrote that, and it’s not out of date. The core algorithm hasn’t changed (and probably won’t). Besides, the article abstracts away some technical details that don’t matter to the algorithm and has therefore aged well. We’ve since changed the way procedure calls compile to C in CHICKEN itself, but that doesn’t matter to the algorithm.

                                                                                                                            1. 1

                                                                                                                              It’s a beautiful piece of work.

                                                                                                                        2. 2

                                                                                                                          Is there an implementation you would recommend?

                                                                                                                          1. 4

                                                                                                                            Depending on what you’re trying to do, I’d go with Chicken (for application development) or Guile (for scripting). Guile is the one I’m most familiar with, but honestly it’s been a decade since I’ve written any Scheme.

                                                                                                                            1. 2

                                                                                                                              Thank you!

                                                                                                                          2. 2

                                                                                                                            To me, Common Lisp is […] too small in a lot of [places]

                                                                                                                            Curious, where do you find it lacking? A couple of things are outright missing (like threads), but that is a case of ‘nothing at all’, not ‘too small’.

                                                                                                                            and has one too many namespaces. Scheme has the right number of namespaces

                                                                                                                            That is … an interesting take, considering CL has a multiplicity of namespaces and most schemes I know of have just one :)

                                                                                                                            (S7 scheme is the other lisp-n, sort of.)

                                                                                                                            Personally, my take is that CL has the right number of namespaces for CL, but that that is probably the wrong number of namespaces for not-cl.

                                                                                                                            1. 3

                                                                                                                              Curious, where do you find it lacking? A couple of things are outright missing (like threads), but that is a case of ‘nothing at all’, not ‘too small’.

                                                                                                                              CL has a lot of standardized functionality that would be considered niche in other languages (even if it’s very useful), while completely missing things like threads and networking (I realize it’s a product of its time). It’s “too small” in that it doesn’t specify some things it needs in order to be competitive in the modern world (I realize there are plenty of workarounds, that other language specs are similarly small, etc). I just remember that the last time I seriously considered using CL (many years ago), some of the things we needed were not available for the CL implementation we had chosen (clisp, IIRC). I’m sure there was a way around it, but we were either unaware of it or considered it not worth the effort.

                                                                                                                              That is … an interesting take, considering CL has a multiplicity of namespaces and most schemes I know of have just one :)

                                                                                                                              I was making a joke about the whole Lisp-1 vs Lisp-2 debate. If we’re going to have first-class functions, they should have the same namespace as first-class data! :)
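
                                                                                                                              As a rough illustration of the Lisp-1/Lisp-2 distinction being joked about here (my own snippets, not from the thread): Common Lisp keeps functions and variables in separate namespaces, so the same name can be bound in both at once, and calling a function stored in a variable takes FUNCALL.

                                                                                                                              ```lisp
                                                                                                                              ;; Common Lisp is a Lisp-2: LIST the variable and LIST the
                                                                                                                              ;; function coexist without conflict.
                                                                                                                              (let ((list '(1 2 3)))   ; variable namespace
                                                                                                                                (list list list))      ; function namespace => ((1 2 3) (1 2 3))

                                                                                                                              ;; Calling a function held in a variable requires FUNCALL:
                                                                                                                              (let ((f #'1+))
                                                                                                                                (funcall f 41))        ; => 42
                                                                                                                              ```

                                                                                                                              Scheme, a Lisp-1, has a single namespace, so a variable bound to a procedure is called directly:

                                                                                                                              ```scheme
                                                                                                                              ;; Scheme: one namespace, no FUNCALL needed.
                                                                                                                              (define f (lambda (x) (+ x 1)))
                                                                                                                              (f 41)                   ; => 42
                                                                                                                              ```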

                                                                                                                            2. 1

                                                                                                                              These days, I’m becoming even less sure about having separate namespaces for terms and types…

                                                                                                                            3. 5

                                                                                                                              Just dive into Racket. It’s great fun!

                                                                                                                              1. 2

                                                                                                                                What about it do you find daunting?

                                                                                                                                Sure, all the parentheses are kind of weird compared to other commonly used PLs, but you can get used to them, and paren matching in modern editors helps.

                                                                                                                                The only strange bit is how much of the language is based around the linked list. Like Forth is based around the stack, and Lua is based around the table.

                                                                                                                                Otherwise, it is a garbage-collected language with functions, arguments, return values and such. Macros, which you don’t need to worry about when starting out, can be more useful than, for example, the C preprocessor.

                                                                                                                                1. 2

                                                                                                                                  I find the syntax of Lisps especially weird, and I’m not even talking about the () stuff, but things like assignment, and even just working out what type something is.