Threads for technomancy

  1. 7

    Traditionally, JVM threads were built around OS threads.

    Java started out with virtual/green threads. In fact the term “green threads” comes from the name of the Sun research project that created Java.

    According to that Wikipedia article, they were replaced with native threads in JDK 1.3. Plus ça change…

    1.  

      Interesting; while I was reading this my main thought was “why are these things that seem to clearly be green threads called virtual threads instead?” and now I’m guessing it’s to avoid ambiguity with that ancient 1.2-era construct?

      1.  

        Probably, although the original green threads were replaced with OS threads without much change to the APIs, IIRC

        1.  

          My understanding is that green threads and virtual threads are both user-space constructs that present a thread-style API, but the similarity pretty much ends there. Early JVM versions only ran on a single OS thread, so green threads didn’t have to deal with any parallelism concerns or with what happens when a single user-space thread gets scheduled on different OS threads over time (which has an impact on things like thread-local variables).
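
          For a concrete picture, here’s a minimal sketch against the JDK 21 virtual-thread API (Thread.ofVirtual() is the real entry point; the demo class itself is just my illustration):

              public class VirtualThreadDemo {
                  public static void main(String[] args) throws InterruptedException {
                      ThreadLocal<String> local = new ThreadLocal<>();
                      Thread vt = Thread.ofVirtual().start(() -> {
                          local.set("per-virtual-thread state");
                          try {
                              // Sleeping parks only this virtual thread; its OS "carrier"
                              // thread is freed to run other virtual threads, and this
                              // thread may resume on a *different* carrier afterwards,
                              // which is why thread-locals needed careful treatment.
                              Thread.sleep(100);
                          } catch (InterruptedException e) {
                              Thread.currentThread().interrupt();
                          }
                          // Still visible: thread-locals follow the virtual thread.
                          System.out.println(local.get());
                      });
                      vt.join();
                  }
              }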

      1. 11

        Forges are online collaborative platforms to support the development of distributed open source software

        OK, so like github?

        For example, of the top 10 forges in 2011, only one survives today—SourceForge

        OK, so … not github?

        I don’t get it.

        1. 5

          I was trying to figure out where the dividing line seems to be for them between forges and GitHub. From some context, it seems like they mean forges as places with centralised structure, as opposed to GitHub which explicitly supports distributed workflow and has some organisation features on top.

          For example, SourceForge was the home for your project, with news and releases up front and development on the side. But GitHub is source-first, and even releases are a non-obvious addition.

          But that’s just me trying to read between the lines, maybe the authors had a different view.

          1. 3

            SourceForge was (is? I’m gonna go past tense from here on because I haven’t used it in forever) different from GitHub in some ways:

            • It offered a bunch of things besides source hosting, including things like forums and mailing lists, file hosting and so on. Github eventually got some extra features as well, such as wikis, but the general idea was that it was possible (to some degree) for SourceForge to be the place where a project’s community was, rather than just the place where the source code is hosted and bugs are triaged. Remember that this was before Slack, Discord et al.; if a project got big enough it would maybe also set up an IRC channel, but having a mailing list for developers and a forum for the wider community was pretty much peak communication.
            • Project pages (this was true for some other “forges”, too) were somewhat more user-oriented, and aggregated news, user reviews and a bunch of other things – sort of like a modern app store, except you could also get at the source. A project’s page was almost like a barebones version of a project’s website. You can still kind of see it: https://sourceforge.net/projects/turbovnc/ , I just picked this one at random. This is kind of like the default repo view page, what with the README and stars/forks display and whatever, but not quite as user-oriented – no reviews, for example, no news (there’s a small mention of releases in the upper-right corner but that’s still a lot more “release management”-oriented than “hello, Random J. User, here’s where you can download our software if you like it”-oriented).
            • SourceForge, but also some of the other “forges” (like Savannah) allowed people to browse projects by topic, platform, license and many other criteria. This is honestly the one thing I really think Github et al. are doing far, far worse. This was kinda cool: if you were interested in a particular topic (in my case, emulation) you could very easily discover projects on that topic, so if you wanted to learn something, you could find code from which to learn, or new applications to try, a lot easier than you can today. You can still see that, too: https://sourceforge.net/directory/desktop/windowmanagers/ , for example, except this differs from the 2001 experience kind of the way you expect it to differ (i.e. you could fit a lot more content on a much smaller screen and there weren’t as many ads). This was actually just one component of another idea…
            • …specifically that “forges” were meant to host source code just as much as help users discover new (i.e. your :P) software, find new developers (SourceForge had sort of a “job board” where projects which needed developers could let others know about it – most of these were obviously unpaid volunteer things, but I think there were some “real” jobs there, too) and so on – sort of like being an “interface” with the community.

            All that being said, as far as I recall, SourceForge really did end up mostly used for the code hosting feature. New and better communication channels were developed and it became a lot easier to do most of these things (including source code hosting, after a while) externally. Back in 2002 or so, me and some folks who worked on an extraordinarily bad FPS rip-off of The Matrix were lucky enough to have our own, fancy phpBB forum for it because two of us, erm, commandeered a server at the company they worked for. But this was before cheap VPSs; it was otherwise unlikely that we could’ve afforded running our own forum, website, CVS, FTP server, IRC and so on. Nowadays you can do all that with like $10/mo., which is far closer to what you can afford on a student’s budget, and configure as you please, with a lot fewer hoops to jump through, so “forges” now look way more limited relative to how much cheaper they are.

            Edit: FWIW, though, all that stuff I wrote above should be taken with a small grain of historiographical salt. The term “forge” predates Github by a good five years, if not more. It likely embodies the same thing that Github embodies – our idea of how we write software together – as it stood back then. But, since our idea about how to do it, and our technological means, were pretty different, especially with regards to source code access/version control and how software is distributed to users, the results were also pretty different. That’s why the two seem like they’re very much “the same” thing – they probably really are – but they’re separated by enough of a time difference that they “feel” a lot more different. The difference was a lot more in the “how” than in the “what”, so it’s likely that most of the comparison is somewhat anachronistic.

            1. 2

              But, since our idea about how to do it, and our technological means, were pretty different, especially with regards to source code access/version control and how software is distributed to users, the results were also pretty different

              This is hinted at in the paper when they mention that SourceForge offered hosted CVS in 1999 when it launched. There really wasn’t an alternative to CVS at the time. Even having a public CVS repo was only just starting to become common at that point. A lot of projects had an FTP site with source tarballs and took patches via a mailing list. Even if they used CVS internally, they didn’t give random Internet people even read-only access to it.

              Subversion came out in 2000 and started to gain adoption quite quickly. Most of the hosting platforms supported it fairly quickly, though it took a long time for a lot of projects to move. I seem to remember that it took a long time for both SourceForge and Savannah[1] to gain git repo support and so GitHub was able to grow very rapidly by riding the hype wave that came from Linux’s move to git. This was amplified by the fact that ‘Linux’ has managed to become so conflated with open source that a thread yesterday had several people telling me I was being unreasonable for objecting that an article about two cross-platform desktop environments described itself as discussing the problems of Linux.

              [1] If you’ve never had to use Savannah, consider yourself fortunate.

              1.  

                Subversion came out in 2000 and started to gain adoption quite quickly. Most of the hosting platforms supported it fairly quickly, though it took a long time for a lot of projects to move. I seem to remember that it took a long time for both SourceForge and Savannah[1] to gain git repo support and so GitHub was able to grow very rapidly by riding the hype wave that came from Linux’s move to git.

                I honestly have no idea what happened back then and why, and… I mean I skimmed the article but it doesn’t shed as much light as I’d hoped, either. Many of the “canaries” they mention, like the DevShare kerfuffle, were pretty late; the way I recall it (but, granted, my memory may be a little foggy here), by 2013, SourceForge was already getting “oh, SourceForge, that’s a name I haven’t heard in a while” reactions all over the place.

                Presumably, between the frequent change of ownership and really poor management, the folks at SourceForge really just missed the whole git thing? Maybe some of the “forges” didn’t, but nobody wanted to use them. I mean I don’t think it would’ve helped if Savannah had switched earlier…

                It also helped, I guess, that git (and, early on, Mercurial) was pretty good hype wave material. Around 2012 or so, some universities around here were already teaching students to use git, and using Github for homework. It certainly helped that there was a lot less ceremony involved in setting up a Github repo vs. a SourceForge project, and that git itself was really easy to use locally. Then you had whole generations of students coming out of school knowing git.

                But other than that… Github is one of those things that I admit to never getting. I jumped on the git bandwagon early on because I really did like the distributed VCS idea – my first choice would’ve been Mercurial but we all know how that ended. But Github… I always found the web-based pull request thingies really awkward to use, the issue tracking is pretty awful, and it’s practically useless for discovering new things. I kind of felt like Github was primarily about fixing a lot of things that made git really hard to use for most projects (which, for various reasons, can’t quite adopt the Linux kernel workflow), and there were so many of those that there was never any time to make it as good a platform as SourceForge was.

                1.  

                  Many of the “canaries” they mention, like the DevShare kerfuffle, were pretty late; the way I recall it (but, granted, my memory may be a little foggy here), by 2013, SourceForge was already getting “oh, SourceForge, that’s a name I haven’t heard in a while” reactions all over the place.

                  That matches my recollection. If I found something on SourceForge after about 2010, maybe a couple of years earlier, my reaction was ‘oh, they’re still on SourceForge, that probably means that it’s not maintained anymore’.

                  Presumably, between the frequent change of ownership and really poor management, the folks at SourceForge really just missed the whole git thing? Maybe some of the “forges” didn’t, but nobody wanted to use them. I mean I don’t think it would’ve helped if Savannah had switched earlier…

                  I think there was also a more subtle shift that they missed: the Google-led shift to minimalistic interfaces. The SourceForge and Savannah interfaces looked incredibly cluttered. In a world where Google had built one of the most successful companies in the world out of a UI that had one picture, a text field, and two buttons, SourceForge looked incredibly dated.

                  I kind of felt like Github was primarily about fixing a lot of things that made git really hard to use for most projects (which, for various reasons, can’t quite adopt the Linux kernel workflow), and there were so many of those that there was never any time to make it as good a platform as SourceForge was.

                  I’ve written about this a bit in an internal memo. Disruptive technologies grow out of flawed products. Subversion was better than CVS but was basically fine for what it did. Subversion GUIs were not very common because the command-line interface was not bad (not great, but basically fine). Git was significantly worse in terms of basic usability than either svn or hg. This meant that there was a big demand for tooling. The git GUIs that I’ve used are all far better than their svn or hg equivalents and are better than the command line interfaces for either.

                  FreeBSD jails and Solaris Zones are another example: they both work well and make it trivial to create an environment with shared-kernel virtualisation. As a result, no one bothered building much tooling on top of them. To get the same thing on Linux, you needed to tie together a bunch of unrelated kernel subsystems, and at that point you’ve got a big userspace tool stack that you can add packaging and things to. I don’t believe modern container workflows could have emerged from FreeBSD or Solaris because their shared-kernel virtualisation had reached a local optimum where the cost of innovation was higher than any benefit.

                  At this point, I think Linux is probably at the ‘good enough’ state for a very large number of things and so I expect that it is going to be displaced by a disruptive technology in a number of domains over the next decade. It’s relatively easy to identify products that are at this kind of local optimum; it’s very hard to identify the things that will displace them.

              2. 2

                Sure, I get that they have somewhat different feature sets, but the definition of “forge” provided by the article itself does not encompass any of these distinctions.

                1.  

                  It doesn’t, and it probably can’t, since there were a lot of “forges” and their feature sets differed. Plus the term itself is way older than GitHub. If Github had been a thing back in 1999 it would’ve probably been called a forge, too.

                2. 1

                  I agree on the release management and end user focus. When I wanted some software – codecs, gimp as you mentioned, other things – and I needed it on Windows, that’s where I would look. I’d rather download the exe from SF than from whatever shady website somewhere.

                  I didn’t go there for sources; those I would mostly still look for on FTP sites, if I needed them.

              1. 15

                “HTML Only”

                reads first sentence

                “I use HTML and CSS.”

                waaaaaait a minute

                1. 6

                  I feel certain I’m missing something… I never cared for Heroku. It always seemed slow, made me think I had to jump through weird hoops, and never seemed to work very well for anything that needed more horsepower than github/gitlab’s “pages” type services. And their pricing always had too much uncertainty for me.

                  Granted, I’m old, and I was old before heroku became a thing.

                  But ever since bitbucket and github grew webhooks, I lost interest in figuring out Heroku.

                  What am I missing? Am I just a grouch, or is there some magical thing I don’t see? Am I the jerk shaking my fist at dropbox, saying an FTP script is really just the same? Or am I CmdrTaco saying “No wireless. Less space than a Nomad. Lame.”? Or is it just lame?

                  1. 5

                    By letting, and making, developers care only about the code they develop and nothing else, they boost productivity: you just can’t yak-shave or bikeshed your infra or your deployment process.

                    Am I the jerk shaking my fist at dropbox, saying an FTP script is really just the same?

                    Yes and you’d be very late to it.

                    1. 3

                      Yes and you’d be very late to it.

                      That’s what I was referencing :-)

                      I think I’m missing this, though:

                      By letting, and making, developers care only about the code they develop and nothing else, they boost productivity: you just can’t yak-shave or bikeshed your infra or your deployment process.

                      What was it about Heroku that enabled that in some distinctive way? I think I have that with gitlab pages for my static stuff and with linode for my dynamic stuff. I just push my code, and it deploys. And it’s been that way for a really long time…

                      I’m really not being facetious as I ask what I’m missing. Heroku’s developer experience, for me, has seemed worse than Linode or Digital Ocean. (I remember it being better than Joyent back in the day, but that’s not saying much.)

                      1. 2

                        I just push my code, and it deploys.

                        If you had this set up on your Linode or whatever, it’s probably because someone was inspired by the Heroku development flow and copied it to make it work on Linode. I suppose it’s possible something like this wired into git existed before Heroku, but if so it was pretty obscure given that Heroku is older than GitHub, and most people had never heard of git before GitHub.

                        (disclaimer: former Heroku employee here)

                        1. 3

                          it’s probably because someone

                          me

                          was inspired by the Heroku development flow and copied it to make it work on Linode

                          Only very indirectly, if so. I never had much exposure to Heroku, so I didn’t directly copy it. But push->deploy seemed like good horse sense to me. I started it with mercurial and only “made it so” with git about 4 years ago.

                          Since you’re a former Heroku employee, though… what did you see as your distinctive advantage? Was it just the binding between a release in source control and a deployment into production, or was it something else?

                          1. 3

                            Since you’re a former Heroku employee, though… what did you see as your distinctive advantage? Was it just the binding between a release in source control and a deployment into production, or was it something else?

                            As a frequent customer, it was just kind of the predictability. At any point within the last decade or so, it was about three steps to go from a working rails app locally to a working public rails app on heroku. Create the app, push, migrate the auto-provisioned Postgres. Need to backup your database? Two commands (capture & download). Need Redis? Click some buttons or one command. For a very significant subset of Rails apps even today it’s just that few steps.

                            1. 1

                              I don’t really know anything about the setup you’re referring to, so I can only compare it to what I personally had used prior to Heroku from 2004 to 2008, which was absolutely miserable. For the most part everything I deployed was completely manually provisioned; the closest to working automated deploys I ever got was using capistrano, which constantly broke.

                              Without knowing more about the timeline of the system you’re referring to, I have a strong suspicion it was indirectly inspired by Heroku. It seems obvious in retrospect, but as far as I know in 2008 the only extant push->deploy pipelines were very clunky and fragile buildbot installs that took days or weeks to set up.

                              The whole idea that a single VCS revision should correspond 1:1 with an immutable deployment artifact was probably the most fundamental breakthrough, but nearly everything in https://www.12factor.net/ was first introduced to me via learning about it while deploying to Heroku. (The sole exception being the bit about the process model of concurrency, which is absolutely not a good general principle and only makes sense in the context of certain scripting-language runtimes.)

                              1. 2

                                I was building out what we were using 2011-2013ish. So it seems likely that I was being influenced by people who knew Heroku even though it wasn’t really on my radar.

                                For us, it was an outgrowth of migrating from svn to hg. Prior to that, we had automated builds using tinderbox, but our stuff only got “deployed” by someone running an installer, and there were no internet-facing instances of our software.

                        2. 2

                          By letting, and making, developers care only about the code they develop

                          This was exactly why I never really liked the idea of it, even though the tech powering it always sounded really interesting. I think it’s important to have contextual and environmental understanding of what you’re doing, whatever that may be. Although I don’t like some of the architectural excesses or cultural elements of “DevOps”, I think having people know enough about what’s under the hood/behind the curtain to be aware of the operational implications of what they’re doing is crucial to building efficient systems that don’t throw away resources simply because the developer doesn’t care (and has been encouraged not to care) about anything but the code.

                          I’ve seen plenty of developers do exactly that, not even bothering to try and optimise poorly-performing systems because “let’s just throw another/bigger dyno at it, look how easy it is”, justifying it aggressively with Lean Startup quotes, apparently ignorant of the flip-side of “developer productivity at all costs” being “cloud providers influencing ‘culture’ to maximize their profits at the expense of the environment”. And I’ve seen it more on teams using Heroku than anywhere else, because of the opaque and low-granularity “dyno” resource division.

                          It could be that you can granularize it much more now than you could a few years ago (I haven’t looked at it for a while), and maybe you could even then if you dug really deep into the documentation. But that was how it was, and how developers used (and were encouraged to use) it – and to me it always seemed like it made the inability to squeeze every last drop of performance out of each unit almost a design feature.

                      1. 11

                        Sadly, it’s just the in-kernel bits; the userspace blobs are all still proprietary.

                        1. 6

                          It also somehow doesn’t cover actually using your GPU to display graphics on a screen, so don’t get your hopes up.

                          https://blogs.gnome.org/uraeus/2022/05/11/why-is-the-open-source-driver-release-from-nvidia-so-important-for-linux/

                          1. 1

                            For now. If this is related to the previous hack, it wouldn’t be surprising if at least a subset of userspace bits follow down the line.

                            1. 6

                              I would not expect the userland ever to be released. It is Nvidia’s secret sauce, doing all of the heavy lifting of implementing OpenGL/DirectX/Vulkan and fixing various apps’ mistakes.

                              I also don’t think this release is related to the hack in more than timing coincidence. The grapevine says they’ve wanted to release the kernel bits for a long time, but red tape was in the way.

                              1. 1

                                It’s very much not related. Any leaked materials from them would be legally extremely toxic. AMD would not allow anyone it employs to read them, and anyone trying to do independent development based on them would get sued by NVIDIA. Even ReactOS had a “can’t contribute if you’ve read the leaked Windows source” rule.

                                1. 1

                                  Never claimed it was leaked. It’s obviously not. I’m suggesting that this may be the result of negotiation with the hackers.

                                  1. 2

                                    No, I get it. I’m saying NVIDIA has no business negotiating. Leaks wouldn’t really hurt it and complying wouldn’t guarantee anything.

                            2. 1

                              Was that the catch? If there had to be one, this actually makes me very relieved, happy, and cautiously positive toward Nvidia again. Because in terms of what to fix first, the kernel module was always the problem, as far as I understand.

                              As a desktop user, all I want is to not have driver problems. Most importantly, to not have my desktop replaced with text on a black screen ever again – the single reason I have avoided Nvidia for a decade now. This used to happen every kernel upgrade.

                            1. 10

                              I read this as ‘Any character between ‘,’ and ‘.’ in ASCII, and wondered if the surprise was that ‘.’ comes before ‘,’ and somehow matching now happens between ‘.’ and the last character, and also ASCII 0 and ‘,’

                              To me then, it does what I expected, but the surprise was that the three characters are at those particular positions. The author’s surprise was different. I think this just adds more evidence that regular expressions are surprising.

                              That said, they are still the best tool for some jobs and I will use them, laid out over multiple lines and commented, and not shared with others, because the worst problem with regular expressions is that you can’t trust that everyone will read them the same, including your future self.
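
                              To make the ordering concrete, here’s a small Java check – assuming the pattern under discussion was a range class along the lines of [,-.] (in ASCII, ‘,’ is 44, ‘-’ is 45 and ‘.’ is 46):

                                  import java.util.regex.Pattern;

                                  public class RangeDemo {
                                      public static void main(String[] args) {
                                          // Inside a character class, ",-." is the range 44..46,
                                          // i.e. exactly the three characters ',', '-' and '.'.
                                          Pattern p = Pattern.compile("[,-.]");
                                          System.out.println(p.matcher("-").matches()); // true: 45 lies in 44..46
                                          System.out.println(p.matcher("x").matches()); // false
                                          System.out.println((int) ','); // 44
                                          System.out.println((int) '.'); // 46
                                      }
                                  }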

                              1. 3

                                wondered if the surprise was that ‘.’ comes before ‘,’

                                That was it for me; I saw that ‘,’ comes before ‘.’ on the QWERTY and Dvorak keyboard layouts, so I assumed it was that way in ASCII too.

                              1. 39

                                It’s the standard bearer of the “single process application”: the server that runs on its own, without relying on nine other sidecar servers to function.

                                Now there’s an “SPA” I can get behind.

                                1. 3

                                  the server that runs on its own, without relying on nine other sidecar servers to function.

                                  And what happens when that single server fails?

                                  1. 20

                                    You start a new instance and issue litestream recover s3://…. or whatever and go on your way… That’s the product.

                                    The post also outlines future enhancements that will do read-replicas, so only writes would be unavailable while the replacement server boots.

                                    1. 11

                                      Not every program needs All The Nines.

                                      1. 9

                                        What happens when your single Postgres server goes down? The whole service is down. Same thing.

                                        1. 10

                                          I have never used a single Postgres server in production. Have you?

                                          Postgres failovers have been 100% reliable for me, but that requires tradeoffs in terms of setup complexity and write latency. I am perfectly happy to take a slightly more complex setup involving ‘one more sidecar server’, thank you.

                                          1. 3

                                            Sure. Back in the 1990s, mind you. Losing a box and a Postgres instance is fairly rare and most web apps don’t have really massive uptime requirements.

                                    1. 30

                                      What this article fails to mention is that none of the popups demonstrated are necessary, and hence of dubious legality. Rather than design a clear cookie consent banner, just defer displaying it until it is necessary. And no, your Google AdWords cookie is not necessary.

                                      It is also ironic that the article itself is covered by a floating banner:

                                      To make Medium work, we log user data. By using Medium, you agree to our Privacy Policy, including cookie policy.

                                      1. 23

                                        Plus the idea that the reason these bad dialogs exist because no one’s designed a better one is just … hopelessly misguided. Offering a better alternative won’t make sites switch to it, because they’re doing what they’re doing now because it’s bad, not because it’s good.

                                        1. 2

                                          Yes. It’s basically a form of civil disobedience.

                                          Basically operating on game theory, hoping the other sites don’t break rank

                                          1. 3

                                            It’s basically a form of civil disobedience.

                                            Uhhhh… that’s an odd analogy.

                                            Civil disobedience is what you do when the law is unjust or immoral; this is more like “the law doesn’t allow us to be as profitable as we would like so we are going to ignore it.”

                                            1. 1

                                              Civil disobedience doesn’t have to be ethically or morally just.

                                              civil disobedience, also called passive resistance, the refusal to obey the demands or commands of a government or occupying power, without resorting to violence or active measures of opposition; its usual purpose is to force concessions from the government or occupying power.

                                              1. 1

                                                Hm; never thought about it that way but fair point! It feels a bit off to compare corporate greed with like … Gandhi, but technically it fits.

                                        2. 7

                                          Yeah, I find publishing an article like this through Medium a bit ironic and hypocritical.

                                        1. 6

                                          100 versions later

                                          This seems to be playing a little loose with the facts. At some point Firefox changed their versioning system to match Chrome, I assume so that it wouldn’t sound like Firefox was older or behind Chrome in development. Firefox did not literally travel from 1.0 to 100. So it probably either has fewer or more than 100 versions, depending on how you count. UPDATE: OK I was wrong, and that was sloppy of me, I should have actually checked instead of relying on my flawed memory. There are in fact at least 100 versions of Firefox. Seems like there are probably more than 100, but it’s not misleading to say that there are 100 versions if there are more than 100.

                                          That said, this looks like a great release with useful features. Caption for picture-in-picture video seems helpful, and I’m intrigued by “Users can now choose preferred color schemes for websites.” On Android, they finally have HTTPS-only mode, so I can ditch the HTTPS Everywhere extension.

                                          1. 6

                                            Wikipedia lists 100 major versions from 1 to 100.

                                            https://en.m.wikipedia.org/wiki/Firefox_version_history

                                            What did happen is that Mozilla adopted a 4 week release cycle in 2019 while Chrome was on a 6 week cycle until Q3 2021.

                                            1. 4

                                              They didn’t change their version scheme, they increased their release cadence.

                                              1. 7

                                                They didn’t change their version scheme

                                                Oh, but they did. In the early days they used a more “traditional” way of using the second number, so we had 1.5, and 3.5, and 3.6. After 5.0 (if I’m reading Wikipedia correctly) they switched to increasing the major version for every release regardless of its perceived significance. So there were in fact more than 100 Firefox releases.

                                                https://en.wikipedia.org/wiki/Firefox_early_version_history

                                                1. 3

                                                  I kinda dislike this “bump major version” every release scheme, since it robs me of the ability to visually determine what may have really changed. For example, v2.5 to v2.6 is a “safe” upgrade, while v2.5 to v3.0 potentially has breaking changes. Now moving from v99 to v100 to v101, well, gotta carefully read release notes every single time.

                                                  Oracle did something similar with JDK. We were on JDK 6 for several years, then 7 and then 8, until they ingested steroids and now we are on JDK 18! :-) :-)

                                                  1. 7

                                                    Sure for libraries, languages and APIs, but Firefox is an application. What is a breaking change in an application?

                                                    1. 4

                                                      I got really bummed when Chromium dropped the ability to operate over X forwarding in SSH a few years ago, back before I ditched Chromium.

                                                      1. 1

                                                        Changing the user interface (e.g. keyboard shortcuts) in backwards-incompatible ways, for one.

                                                        And while it’s true that “Firefox is an application”, it’s also effectively a library with an API that’s used by numerous extensions, which has also been broken by new releases sometimes.

                                                        1. 1

                                                          My take is that it is the APIs that should be versioned because applications may expose multiple APIs that change at different rates and the version numbers are typically of interest to the API consumers, but not to human users.

                                                          I don’t think UI changes should be versioned. Just seems like a way to generate arguments.

                                                      2. 6

                                                        It doesn’t apply to consumer software like Firefox, really. It’s not a library for which you care if it’s compatible. I don’t think version numbers even matter for consumer software these days.

                                                        1. 5

                                                          Every release contains important security updates. Can’t really skip a version.

                                                          1. 1

                                                            Those are all backported to the ESR release, right? I’ve just noticed that my distro packages that; perhaps I should switch to it as a way to get the security fixes without the constant stream of CADT UI “improvements”…

                                                            1. 2

                                                              Most. Not all, because different features and such. You can compare the security advisories.

                                                        2. 1

                                                          Oh, yeah, I guess that’s right. I was focused in on when they changed the release cycle and didn’t think about changes earlier than that. Thank you.

                                                    1. 22

                                                      For the uninitiated: Style insensitivity is a unique feature of the Nim programming language that allows a code base to follow a consistent naming style, even if its dependencies use a different style. It works by comparing identifiers in a case-insensitive way, except for the first character, and ignoring underscores.

                                                      Another advantage of style insensitivity is that identifiers such as itemId, itemID or item_ID can never refer to different things, which prevents certain kinds of bad code. An exception is made for the first letter to allow the common convention of having a value foo of type Foo.
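
                                                      A rough sketch of the comparison rule in Java (the normalize helper is my own illustration, not Nim’s actual implementation):

                                                          public class NimIdentEq {
                                                              // First character compared exactly; the rest
                                                              // case-insensitively, with underscores ignored.
                                                              static String normalize(String ident) {
                                                                  StringBuilder sb = new StringBuilder();
                                                                  sb.append(ident.charAt(0));
                                                                  for (int i = 1; i < ident.length(); i++) {
                                                                      char c = ident.charAt(i);
                                                                      if (c != '_') sb.append(Character.toLowerCase(c));
                                                                  }
                                                                  return sb.toString();
                                                              }

                                                              public static void main(String[] args) {
                                                                  // itemId, itemID and item_ID all collapse to "itemid"
                                                                  System.out.println(normalize("itemId").equals(normalize("item_ID"))); // true
                                                                  // foo and Foo stay distinct: the first letter is case-sensitive
                                                                  System.out.println(normalize("foo").equals(normalize("Foo"))); // false
                                                              }
                                                          }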

                                                      There’s a common misconception that this feature causes Nim programmers to mix different styles in a single codebase (which, as mentioned, is precisely the opposite of what it does), and it gets brought up every time Nim is mentioned on Lobsters/HackerNews/etc, diverting the discussion from more valuable topics.

                                                      1. 3

                                                        There’s a common misconception that this feature causes Nim programmers to mix different styles in a single codebase (which, as mentioned, is precisely the opposite of what it does)

                                                        But… isn’t it exactly what the feature does? If my coding habits would make me write itemId and a coworker’s habits would make them write item_id, style insensitivity makes it likely that I would accidentally use a different style than my coworker for the same variable in the same codebase, right? Whereas most languages would make this impossible by making item_id a different name than itemId.

                                                        How is this a misconception?

                                                        To be clear, I’m not saying it’s a huge deal or that it warrants all the attention it’s getting (that’s a different discussion), but since you brought it up…

                                                        1. 3

                                                          Thanks for providing some context. Is this a thing that gets applied by default any time you use any library, or a feature you can specifically invoke at the point during which the library is imported?

                                                          The former seems … real bad. The latter seems … kinda neat? but a bit silly.

                                                          1. 6

                                                            Currently it’s always on, and there’s an opt-in compile flag --styleCheck:error that makes it impossible to use an identifier inconsistently within the same file. The linked issue discusses if and how this behavior should be changed in Nim 2.

                                                            Personally, I wouldn’t mind if it was removed, as long as:

                                                            • --styleCheck:error was on by default
                                                            • there was a mechanism to restyle identifiers when importing a library.
                                                            1. 2

                                                              I agree. People outside the Nim community can add real value to this discussion, since otherwise it’s just speculation about what they really think, based on a few loud complainers.

                                                              1. 4

                                                                I’m someone who looked at Nim, really liked it, then saw the “style insensitivity” and thought “this isn’t for me”. (Not coincidentally, I’ve been involved in a major, CEOs-getting-involved fiasco that was ultimately due to SQL case-insensitivity.)

                                                                Nim occupies a nice space – compiled but relatively “high level” – with really only Go as a competitor (zig/rust/c++ all seem a little too low level). I personally recoil at the idea of “style insensitivity”, but hopefully in a friendly, lobste.rs manner.

                                                                1. 4

                                                                  I’ve been involved in a major, CEOs-getting-involved fiasco that was ultimately due to SQL case-insensitivity.

                                                                  You can’t just say this and leave us hanging 😆 tell us the story! How did that cause a fiasco?

                                                                  1. 4

                                                                    Our software wouldn’t start for one customer (a large bank). The problem was unreproducible and had been going on for weeks. The customer was understandably very unhappy.

                                                                    The ultimate cause was a “does the database version match the code” check. The database had a typical version table that looked like:

                                                                    CREATE TABLE DB_STATE (VERSION INT, ...);
                                                                    

                                                                    Which was checked at startup using something like select version from db_state. This was failing for the customer because in Turkish, the upper case of “version” is “VERSİON” (look closely). Case-insensitivity is language-specific and the customer had installed the Turkish version of SQL Server.

                                                                    Some java to demonstrate:

                                                                        public class Turkish {
                                                                          public static void main(String[] args) {
                                                                            // In the Turkish locale, "version" uppercases to "VERSİON"
                                                                            // (dotted capital İ, U+0130), not "VERSION".
                                                                            System.out.println("version".toUpperCase(java.util.Locale.forLanguageTag("tr")));
                                                                          }
                                                                        }
                                                                    

                                                                    If you look at the Java documentation for toUpperCase, they specifically mention Turkish – others have been bitten by this same issue, I’m sure.

                                                                    Which makes me wonder – how does Nim behave if compiled on a machine in Turkey, or Germany?

                                                                    1. 5

                                                                      This is making me wonder if anyone has ever used this as a stack-smashing attack. Find some C code that uppercases the input in place and send a bunch of “ß”s. Did the programmer ensure the buffer is big enough?
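
                                                                      For what it’s worth, the length change itself is easy to demonstrate (Java shown here; the C overflow scenario above is the hypothetical part):

                                                                          public class Eszett {
                                                                              public static void main(String[] args) {
                                                                                  String s = "ß";
                                                                                  // Uppercasing "ß" yields "SS": one char in, two out.
                                                                                  String up = s.toUpperCase(java.util.Locale.ROOT);
                                                                                  System.out.println(up);                               // SS
                                                                                  System.out.println(s.length() + " -> " + up.length()); // 1 -> 2
                                                                              }
                                                                          }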

                                                                      1. 3

                                                                        I’m pretty sure Nim’s style insensitivity is not locale-specific. That would be very dumb.

                                                                    2. 2

                                                                      Seems plenty friendly to me. Programmers can get awfully passionate about style guides/identifier conventions.

                                                                      I think it tracks individual, personal history a lot - what confusions someone had, what editor/tool support they had for what sort of “what meta-kind of token is this?” questions and so on. In extreme cases it can lead to, e.g. Hungarian notation like moves. It can even be influenced by font choices.

                                                                      Vaguely related to case insensitivity: early on, Nim only used '\l' to denote a newline character. Not sure why it was not '\n' from the start, but because lower case “l” (ell) and the numeral “1” often look so similar, '\L' was the dominant suggestion and all special character literals were case insensitive. (Well, one could argue it was just tracking general insensitivity that made them insensitive…)

                                                                      1. 2

                                                                        Personally I’d say this compares unfavorably to Go, where style arguments are solved by the language shipping a single blessed formatting style.

                                                                        Confusions/disagreements over formatting style are – imo – a waste of the team’s engineering time, so I see the Go approach as inherently better.

                                                                        1. 2

                                                                          Go doesn’t enforce a style for identifiers. Try it online!

                                                                          1. 2

                                                                            Thanks, I hate it.

                                                                            Fair point to nim though!

                                                                      2. 2

                                                                        There’s also Crystal, but it is failing to reach critical mass in my opinion.

                                                                        I think Crystal did a great job providing things people usually want upfront. I want a quick and direct way to make an HTTP request. I want to extract a value from a JSON string with no fuss. I want to expose functionality via CLI or a web interface with minimal effort. Crystal got these right.

                                                                        I agree that the above-mentioned languages are too low level.

                                                                        1. 1

                                                                          Not sure about the others, but I think exposing functionality via CLI is pretty easy in Nim. cligen is not in the Nim stdlib, though.

                                                                          1. 1

                                                                            I was not comparing to Nim directly, just giving examples of the kinds of things I believe are the strongest drivers of a language’s success.

                                                                            But one example of a thing I found lacking in Nim was concurrency primitives. Crystal makes them relatively simple and direct, with a fairly simple and familiar fibre API.

                                                                            A quick way to spin up an HTTP service was another one. It even had support for websockets.

                                                                  2. 4

                                                                    It is always on - even for keywords like f_Or in a loop. I was trying to perhaps help guide the group towards a usage like you articulate.

                                                                    EDIT: The main use case cited is always “use a library with an alien convention according to your own convention”. So, being more precise and explicit about this as an import-time construct would seem to be less feather ruffling (IMO).

                                                                  3. 3

                                                                    Just for the record - the Z shell (Zsh) had style insensitivity for its setopt builtin waaay back in the very early 90s. They did not make the first letter sensitive, though. :-)

                                                                    As this seems to be a very divisive issue, and part of what makes it divisive is not knowing how those outside the community (who do not love/have not made peace with the feature) feel, it might be helpful if Lobster people could weigh in.

                                                                    1. 2

                                                                      As this seems to be a very divisive issue, and part of what makes it divisive is not knowing how those outside the community (who do not love/have not made peace with the feature) feel, it might be helpful if Lobster people could weigh in.

                                                                      I looked into Nim and was at least partially dissuaded by style insensitivity. I don’t think it’s fatal per se, but it did hit me very early in my evaluation. I would liken it to the dot calls in Elixir: something that feels wrong and makes you question other decisions in the language. That said, Elixir is a fabulous language and I powered through. I imagine others feel similarly.

                                                                      1. 2

                                                                        What specifically do you not like about style insensitivity?

                                                                        1. 1

                                                                          Here’s the thing: I haven’t used style insensitivity so I can’t really say I dislike it. However, it struck me as unnecessarily inconsistent. I don’t care about snake case or camel case. I just want code to be consistent. Of course, code I write can be consistent with style insensitivity, but code I read probably won’t be.

                                                                          Additionally, I imagined that working in a team could have issues: repos use the original author’s preferred styling. Of course, having a clear style guide helps, but in small teams sometimes people are intractable and resistant to change. In a way, it triggers a feeling of exasperation: a memory of all the stupid little arguments you have with other developers.

                                                                          So here I am kicking the tires on a new exciting language and I am already thinking about arguing with people. Kind of takes the wind out of your sails. It may be a great feature, but I imagine it’s a barrier to adoption for some neurotic types like myself. (Maybe that’s a blessing?)

                                                                          1. 1

                                                                            You have it the other way around. With style insensitivity, code is much more likely to follow a consistent style — because it can’t be forced into inconsistency by using libraries from different authors.

                                                                            1. 1

                                                                              I can see how code I write is consistent, but code I read is going to be more inconsistent. If it wasn’t then why would we need style insensitivity in the first place?

                                                                              1. 1

                                                                                I can see how code I write is consistent, but code I read is going to be more inconsistent.

                                                                                Can you show me a serious Nim project that uses an inconsistent style?

                                                                                If it wasn’t then why would we need style insensitivity in the first place?

                                                                                Because libraries you’re using may be written in different styles.

                                                                                1. 1

                                                                                  I’m saying it’s inconsistent across projects. Not within projects. Sometimes you have to read other people’s code. Style insensitivity allows/encourages people to pick the nondefault style.

                                                                                  Ultimately, I’m not in the nim ecosystem. I posted my comment about why style insensitivity made me less interested in nim. I can tell you that this is the exact sort of argument I was looking to avoid so you have proven my initial concerns correct.

                                                                                  1. 1

                                                                                    I don’t see what the problem is with reading code written in a different style, as long as it’s consistent. And in practice, most Nim projects follow NEP1.

                                                                    2. 2

                                                                      Doesn’t JRuby have something similar around Java native methods?

                                                                      1. 2

                                                                        I think it’s an interesting comparison, but it’s important to keep in mind that a language based around message passing is fundamentally different from what’s going on here where the compiler itself is collapsing a bunch of different identifiers during compile-time. When you call a method in Ruby, you’re not really supposed to care how it’s resolved, but when you make a call in a language like Nim, you expect it to be resolved at compile-time.

                                                                        1. 3

                                                                          I’d disagree there. The mechanism in JRuby is that the method is made available under multiple names to the application after it’s loaded. That’s not extremely different from what Nim does, except if we go down to the level to say we can’t compare languages with different runtime and module loading models.

                                                                          https://github.com/jruby/jruby/wiki/CallingJavaFromJRuby

                                                                          1. 2

                                                                            I guess what I meant was that even if the implementation works the same way, Rubyists fundamentally have different expectations around what happens when you call foo.bar(); they’ve already given up on greppability for other reasons.

                                                                    1. 5

                                                                      Looks great! Might be worth linking to from the Fennel wiki: https://github.com/bakpakin/Fennel/wiki

                                                                      1. 7

                                                                        I think there’s definitely a bunch of differences in various areas of the world here. The one time I did an interview for a London-based Clojure role it involved a terrible tech test with me writing Clojure by hand on paper.

                                                                        1. 6

                                                                          writing Clojure by hand on paper

                                                                          I’m trying to imagine how this would work and it’s just not coming together in my head. Do you use like … different colored magic markers for syntax highlighting? Asking people to write code without paredit feels almost inhumane, much less not having access to a repl.

                                                                          1. 19

                                                                            I’m picturing a red button on the desk, and every time you write an unbalanced “)” the interviewer hits it and there’s a BZZZZZ

                                                                            1. 6

                                                                              It was just me and a biro. It was excruciating and I kept asking the interviewer if I could take my laptop out of my bag and do it there, and he kept refusing. These days I’d just walk if they suggested I do such a thing.

                                                                              1. 3

                                                                                These days I’d just walk if they suggested I do such a thing.

                                                                                Bingo.

                                                                                There’s a point at which, even if you pass the interview, a process that broken tells you that if you take the job you’ll be stuck working only with co-workers who also passed the same broken process.

                                                                                1. 1

                                                                                  Jeez. I’d have just asked the interviewer how often they wrote their Clojure code on paper.

                                                                                2. 4

                                                                                  You could simply drop the parens altogether. You can get away with saying something like “you are not supposed to see/count parens anyway!”. /me ducks.

                                                                              1. 3

                                                                                Question: is becoming a niche programmer possible for someone with little to no experience? Is it a good nontraditional route to enter the industry?

                                                                                1. 5

                                                                                  I think it depends on the niche? For Clojure, my sense is that you can pick it up with little or no experience, but you have to be intelligent in a particular way.

                                                                                  E.g., in my experience some smart music majors can probably pick it up, but others might have problems. That is, people “just looking for jobs” will likely have issues with Clojure. There is a tendency to do more from first principles and not follow canned patterns. There’s also a smaller number of Stack Overflow answers.

                                                                                  1. 5

                                                                                    Absolutely, my previous job hired lots of interns to work with Clojure who had very little programming experience. The interesting part we found was that people without experience often have an easier time learning Clojure because they don’t have any preconceptions about how code should be written. A few of the students my team hired ended up specializing in Clojure afterwards and haven’t had to work with anything else since.

                                                                                    Since Clojure is niche, companies end up having to train most of their devs, so having familiarity with the language is seen as a big plus. Having some small project on GitHub that you can link to in your resume would go a long way here.

                                                                                    1. 2

                                                                                      I think all this post shows is that it’s possible to do much better than the mainstream in a niche, but it’s also possible to do a lot worse.

                                                                                      The thing about a niche is it’s unique, so how can you generalize about it? It completely depends on what niche it is.

                                                                                    1. 2

                                                                                      Honestly I don’t understand this at all. It seems like the point of using Docker is to have a single image that is your unit of deployment but like … you already have that with an uberjar. Your deploy is one file.

                                                                                      With an uberjar you have to ensure you’ve got a JVM installed on the system before you can deploy but with a docker image you have to ensure you’ve got docker installed, so … what have you gained, exactly? Is it just about uniformity and using tools that are easier to hire for because they also work with non-JVM deployments?

                                                                                      1. 2

                                                                                        Both options are certainly viable. If one works for you, it works.

                                                                                        At 200ok, we deploy our Clojure-based microservices with Docker for relatively easy scaling, logging, and monitoring. “Relatively easy” because it works the same way in development and production, as well as for services written in other languages.

                                                                                        1. 1

                                                                                          Java runtimes require Java on the underlying host. It may just be me, but I think the JVM sees more churn that an app would care about (GC, versions, features, flags) than a Docker env would. So the point of shoving an uberjar into a container is to let you tune your app without redeploying the underlying server.

                                                                                          1. 1

                                                                                            Based on my usage of Docker at work (not my own choice), I see more churn in Docker than in the JVM, but that’s because staying on an older version of the JVM is an option, and as far as I can tell, staying on an older version of Docker isn’t (or at least isn’t an option for me personally, given the decisions other people in my org are making for me). The version of the JVM we use is backwards-compatible with the same stuff I’ve been using since I first started using the JVM in 2008; the newer ones don’t have the same guarantees. (That’s why I don’t use ’em! For now, anyway.)

                                                                                          2. 1

                                                                                            You’re right that an uberjar is effectively a container. However, using Docker has become the standard way to manage containers in production nowadays. Packaging apps the same way you package everything else makes sense in this context.

                                                                                          1. 14

                                                                                            I still think it’s more of an N+M vs. N×M thing, but the critical change LSP brought was a change in thinking about the problem.

                                                                                            Language support used to be assumed to be IDE’s job. We’ve had “Java IDEs” and “Visual Studio for $LANG”, and we were hoping someone would write a “Rust IDE”. Not a language server, not a compiler infrastructure for an IDE, but an actual editor with Rust support built-in.

                                                                                            The problem was that writing an IDE with an integrated analyzer took twice as much effort as writing just a bare editor or just a GUI-less analyzer. On top of that, people good at writing IDEs aren’t necessarily also good at writing analyzers, so we’ve had many good editors with very shallow language support.

                                                                                            1. 4

                                                                                              the critical change LSP brought was a change in thinking about the problem.

                                                                                              I see it as a Betamax/VHS situation. We had language-agnostic, editor-agnostic editor protocols before (like nREPL), but they never saw mainstream adoption because they didn’t have a megacorp backing them.

                                                                                              1. 3

                                                                                                I mean, VHS beat Beta for a variety of reasons, but the big one was that you could record more than 60 minutes of video at a time.

                                                                                                1. 1

                                                                                                  So how did rental video on Betamax work?

                                                                                                  1. 1

                                                                                                    Multiple tapes.

                                                                                                    1. 1

                                                                                                      Are you sure you’re not thinking of Philips/Grundig VCR (with the square cassettes) rather than Betamax?

                                                                                                  2. 1

                                                                                                    I’ll admit I’m too young to have ever owned either so maybe that wasn’t the best analogy!

                                                                                                    1. 1

                                                                                                      It’s cool. It used to be a very common, canonical example of the “better” technology losing to the inferior one; but “better” is a multidimensional comparison, and by certain important measures, VHS was technically superior to Beta. Beta did have superior picture quality, though.

                                                                                                      1. 3

                                                                                                        Also note it’s not cut and dried either - while Beta “lost” the consumer market, it massively won the professional market, and variants of it dominated there.

                                                                                                        I think the biggest takeaway is there’s no binary winner-loser, especially when multiple market segments exist.

                                                                                                        1. 1

                                                                                                          This is a very good point.

                                                                                                  3. 2

                                                                                                    Sure, compatibility with a rapidly growing editor from a big corp was definitely a big motivation that helped LSP.

                                                                                                    But I’m not sure if nREPL was really an alternative. From the docs it seems very REPL-centric, and beyond that quite generic and unopinionated, almost to the point of being a telnet protocol. You can do anything over telnet, but “what methods are available in line 7 col 15 after typing ‘Z’” needs a more specific protocol.

                                                                                                    1. 3

                                                                                                      From the docs it seems very REPL-centric, and beyond that quite generic and unopinionated

                                                                                                      This is a fair criticism; the docs on the reference implementation’s site are not very clear on what nREPL actually is. I think this might be because that site is maintained by the same person who runs Cider, the most widely used client for nREPL, which only works for Clojure; the language-agnostic aspects which were key to the protocol’s original designer appear on that site as a bit of an afterthought because that’s not where his interest is.

                                                                                                      But the protocol is extensible; eval is only one of the supported operations. You can use it to look up documentation with the describe op, and they recently added a complete op as well: https://github.com/nrepl/nrepl/issues/174

                                                                                                      The fact that the docs don’t make this clear is admittedly a big problem, as is the fact that the protocol doesn’t standardize a wider range of declarative queries and operations. I think of these as both being part of the “doesn’t have megacorp resources behind it” problem but that doesn’t help.

                                                                                                    2. 1

                                                                                                      TextMate language definition files didn’t have a megacorp backing them, yet they’re the de facto standard for syntax highlighting.

                                                                                                  1. 1

                                                                                                    I really hope they allow moderators to disable this on a per-channel basis; otherwise this is going to wreck my channels that are bridged to IRC, and bridging to IRC is the only reason I’m interested in Matrix.

                                                                                                    1. 11

                                                                                                      And, when compiler authors start thinking about IDE support, the first thought is “well, IDE is kinda a compiler, and we have a compiler, so problem solved, right?”.

                                                                                                      Yep, and I think most languages were / are not developed with tooling in mind, and this makes a huge difference. LSP provides a means of delivering commonly needed code-analysis features, but debugging support and code evaluation / incremental change at runtime are missing. Smalltalk and Common Lisp are the only ones with low-level language support for this; developing with them is a stellar experience that IMHO no other language has achieved yet, for exactly that reason.

                                                                                                      1. 9

                                                                                                        Right; exactly. The problem here wasn’t the IDE authors, it was the compiler authors. The way to do this is to use the actual compiler, not for the tooling authors to reimplement a bunch of features that are supposed to work the same as the compiler but have a ton of edge cases where they don’t match.

                                                                                                        The reason many languages didn’t do a good job at this was actually that most compilers just suck at exposing an API that provides the functionality needed for tooling, so tooling authors had to reinvent the wheel and write mountains of duplicated code.

                                                                                                        Compilers that don’t have this problem tended to be pretty easy to adapt to new editors even before LSP existed. Common Lisp had the swank server which contained all the smarts; even though it was originally developed for Emacs, you could write new clients for it in other editors without that much trouble even on a shoestring effort budget compared to what LSP has available. Of course in that case it’s not language agnostic, but that’s why the nREPL protocol was developed that actually has a bunch of different implementations across several languages. (and writing an nREPL client is a lot easier than writing an LSP client because it just needs bencode+sockets; bencode takes a couple pages of code vs the complexity of JSON.)
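
                                                                                                        To give a sense of scale, here’s a sketch of a bencode encoder in Lua (the language used for examples elsewhere in this thread; illustrative only, and it assumes integer, string, list, and string-keyed-map values), because the whole format really is about this small:

                                                                                                            -- Minimal bencode encoder sketch.
                                                                                                            local function bencode(x)
                                                                                                              if type(x) == "number" then
                                                                                                                return "i" .. string.format("%d", x) .. "e"   -- integers only
                                                                                                              elseif type(x) == "string" then
                                                                                                                return #x .. ":" .. x                         -- length-prefixed
                                                                                                              elseif type(x) == "table" and x[1] ~= nil then  -- list
                                                                                                                local out = {}
                                                                                                                for _, v in ipairs(x) do out[#out + 1] = bencode(v) end
                                                                                                                return "l" .. table.concat(out) .. "e"
                                                                                                              elseif type(x) == "table" then                  -- dictionary
                                                                                                                local keys = {}
                                                                                                                for k in pairs(x) do keys[#keys + 1] = k end
                                                                                                                table.sort(keys)  -- bencode dictionaries use sorted keys
                                                                                                                local out = {}
                                                                                                                for _, k in ipairs(keys) do
                                                                                                                  out[#out + 1] = bencode(k) .. bencode(x[k])
                                                                                                                end
                                                                                                                return "d" .. table.concat(out) .. "e"
                                                                                                              end
                                                                                                            end

                                                                                                            -- An nREPL-style message on the wire:
                                                                                                            print(bencode({ op = "eval", code = "(+ 1 2)" }))
                                                                                                            --> d4:code7:(+ 1 2)2:op4:evale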

                                                                                                        1. 4

                                                                                                          I feel that nREPL is a different kind of thing. Image-based IDEs and static analysis-based IDEs are very different technologies, even if they can power similar end-user features.

                                                                                                          1. 5

                                                                                                            Well, it’s a different thing in that it requires a specific evaluation model in a language in order to be supported, but for those supported languages, it’s dramatically more capable than what LSP offers: “show me the test results for running this given file inline”, “perform completion on the fields of this specific hashmap”, “toggle tracing for the function at point”. All super useful functionality that LSP can’t provide because it’s hamstrung in its evaluation model. Clojure’s LSP server can’t even perform macroexpansion in a way that makes such a basic feature as “find definition” work reliably.

                                                                                                            I’m just objecting to the idea that “LSP-shaped things” weren’t around before LSP. It wasn’t that LSP introduced a new category of tooling; it just brought these tools, which have existed for decades, into more mainstream languages whose tooling has historically really sucked.

                                                                                                      1. 1

                                                                                                        I find it interesting that the longest comment on there is about 1-based indexing.

                                                                                                        I very much agree with the author about the strength of 0-based indexing for general purpose programming.

                                                                                                        I wonder if Fennel the language, and its compiler, could hypothetically easily allow one to use 0-based indexing and transform it to the Lua-native 1-based index.

                                                                                                        1. 2

                                                                                                          That would run counter to what Fennel, as a language, intends to do. Fennel is intended as a small layer over Lua, not to use Lua only as a compilation target.

                                                                                                          This enables some nice things, like being able to easily use existing Lua libraries without having to write interop layers. A change like you’re suggesting would turn Fennel into something fundamentally different, because it’s not just Fennel that you have to change, it’s also how you interact with the greater Lua ecosystem.

                                                                                                          1. 1

                                                                                                            I did not mean to say that I thought it should use 0-based indexing.

                                                                                                            I totally understand the fact that it is a small layer, aiming to provide easy interoperability. I was mostly wondering about what it would entail, from a PL/compiler design standpoint, to create a language which uses a different basis. I think the rote translation of array[i] to array[i + 1] is no challenge. The problem begins once you need to interact with the rest of the ecosystem: any foreign function call could be using an integer as an index!
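
                                                                                                            To make that concrete, here’s a hypothetical sketch in Lua of both halves (this is compiler output I’m imagining, not anything Fennel actually emits):

                                                                                                                -- The easy half: the user writes 0-based subscripts,
                                                                                                                -- and every subscript the compiler can see gets a +1.
                                                                                                                local colors = { "red", "green", "blue" }
                                                                                                                local i = 0                    -- user-facing, 0-based
                                                                                                                print(colors[i + 1])           --> red

                                                                                                                -- The hard half: a plain Lua function returns an index,
                                                                                                                -- and nothing marks it as 1-based, so the +1 lands one off.
                                                                                                                local function find(t, v)      -- ordinary 1-based Lua code
                                                                                                                  for k, x in ipairs(t) do
                                                                                                                    if x == v then return k end
                                                                                                                  end
                                                                                                                end
                                                                                                                local j = find(colors, "red")  -- j == 1, meaning "first"
                                                                                                                print(colors[j + 1])           --> green, not the "red" we asked for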

                                                                                                            1. 2

                                                                                                              Well, and I think it’d fundamentally be at odds with the rest of the Lua ecosystem in two ways:

                                                                                                              1. Most Lua libraries don’t have the type annotations that it’d take to get this sort of information in place.
                                                                                                              2. Fennel itself doesn’t track types closely enough to enable this, as far as I’m aware.

                                                                                                              Maybe the TypeScriptToLua compiler might be able to annotate enough type info to do that? But I’d expect you’d need a language that both tracks indices and compiles to Lua, and very few languages do that. Rust comes to mind as an example of something that might track types enough for this sort of thing?

                                                                                                              It is an interesting thought experiment.

                                                                                                              1. 1

                                                                                                                It’s a reasonable question, just to consider what it would take to do this.

                                                                                                                You can’t do this fully at compile time, which is the way Fennel works. You would have to change the compiler so that table literals emit zero-indexed Lua data structures and change destructuring to work that way, but you would also need to override the ipairs iterator and the table.unpack function (and maybe a couple more I’m overlooking), which could only be done at runtime. And of course, as you’ve noted, crossing language boundaries now becomes a big headache.
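
                                                                                                                For instance, the ipairs override might look something like this (a sketch, assuming the compiler emits tables whose first element lives at index 0):

                                                                                                                    -- Hypothetical zero-based replacement for ipairs:
                                                                                                                    -- starts the walk at t[0] instead of t[1].
                                                                                                                    local function ipairs0(t)
                                                                                                                      local i = -1
                                                                                                                      return function()
                                                                                                                        i = i + 1
                                                                                                                        if t[i] ~= nil then return i, t[i] end
                                                                                                                      end
                                                                                                                    end

                                                                                                                    local t = { [0] = "a", "b", "c" }  -- "b" and "c" land at 1 and 2
                                                                                                                    for i, v in ipairs0(t) do print(i, v) end
                                                                                                                    --> 0 a / 1 b / 2 c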

                                                                                                                In practice no one actually does this, because by the time you learn enough about Lua to understand how to do it, you also understand that there’s no reason to.

                                                                                                          1. 6

                                                                                                            The 0 vs 1 indexing is just bike shedding at this point IMO. It is very simple to understand what a language uses and go with it. There is absolutely no need to go over the top with rage because a language is not using one’s preferred indexing.

                                                                                                            0-based indexes have their place. Especially when dealing with C, arrays, and pointer addresses. 1-based indexes also have their place, I for one find them easier to reason about. I don’t go on C forums demanding that C25 becomes 1-based…

                                                                                                            Really tired of those arguments. And it is always with Lua. I posted about it in the past: Lua, a misunderstood language.

                                                                                                            1. 5

                                                                                                              0-based indexes have their place. Especially when dealing with C, arrays, and pointer addresses.

                                                                                                              I’ve more or less settled on the opinion that “having to care about what index your data structure starts at” is the mark of a low-level language. High-level code uses iterators and destructuring to access data structures such that the details of where the indexing starts are almost never relevant. For instance, this code:

                                                                                                              (let [color (. colors idx)
                                                                                                                    color-str (string.format "#%02X%02X%02X" (. color 1) (. color 2) (. color 3))]
                                                                                                              

                                                                                                              should be instead

                                                                                                                (let [[r g b] (. colors idx)
                                                                                                                      color-str (string.format "#%02X%02X%02X" r g b)]
                                                                                                              

                                                                                                              The author of the article got confused because they were used to using languages which have been influenced by C, not because of any inherent advantage of one system over another.

                                                                                                              1. 2

                                                                                                                Just for the sake of completeness, I want to mention that Lua has iterators. You can iterate using pairs() and ipairs(), for example:

                                                                                                                    a = {"one", "two", "three"}
                                                                                                                    for i, v in ipairs(a) do
                                                                                                                      print(i, v)
                                                                                                                    end
                                                                                                                
                                                                                                              2. 1

                                                                                                                Cool to see NewtonScript as the metatable example, thank you :). Interesting perhaps to note that NS originated from a Lua-like goal, a simple language used to glue together “native” components to construct an app. It grew to be a little more complex than Lua because of its context (e.g., dual/proto inheritance to save RAM), but still the Newtonian functionality was arguably all library/interop.

                                                                                                                1. 1

                                                                                                                  omg @wrs, I really like your language. I carried a Newton way into 2006, just stopped using the thing because someone sat on it. Still have an eMate 300 here, it is one of my favourite writing machines. NewtonScript deserves way more love than it gets. I remember it very fondly. The dual proto inheritance was genius, I often think about it when developing components for web usage, it was so much simpler.

                                                                                                                  Anyway, thanks for building one of my favourite devices.

                                                                                                                  1. 1

                                                                                                                    I’d like to know more about “dual proto inheritance.” Is it like in Io, which has a list of parents? https://iolanguage.org/guide/guide.html#Objects-Prototypes

                                                                                                                    1. 1

                                                                                                                      It is not exactly like Io. The inheritance in Io is more flexible. In NewtonScript, the dual inheritance has a purpose that makes it easier to write GUI code: you can have something like a form in which the fields and buttons all inherit from their prototype objects, but also from the form container, thus allowing you to handle them in the form container’s script. Err, I think I’m not explaining it well. You might get a better understanding from checking Chapter 5 of https://www.newted.org/download/manuals/NewtonScriptProgramLanguage.pdf
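
                                                                                                                      Since metatables came up elsewhere in the thread, here’s my rough Lua approximation of that two-axis lookup (a sketch, not exact NewtonScript semantics): search the frame and its _proto chain first, then retry from the _parent frame.

                                                                                                                          -- Approximate NewtonScript-style lookup: check the frame
                                                                                                                          -- and its _proto chain, then move to _parent and repeat.
                                                                                                                          local function ns_lookup(frame, key)
                                                                                                                            local f = frame
                                                                                                                            while f do
                                                                                                                              local p = f
                                                                                                                              while p do
                                                                                                                                local v = rawget(p, key)
                                                                                                                                if v ~= nil then return v end
                                                                                                                                p = rawget(p, "_proto")
                                                                                                                              end
                                                                                                                              f = rawget(f, "_parent")
                                                                                                                            end
                                                                                                                          end

                                                                                                                          local mt = { __index = ns_lookup }

                                                                                                                          local form   = setmetatable({ title = "Settings" }, mt)
                                                                                                                          local button = setmetatable({ _proto  = { text = "OK" },
                                                                                                                                                        _parent = form }, mt)
                                                                                                                          print(button.text)   --> OK       (via the prototype)
                                                                                                                          print(button.title)  --> Settings (via the containing form)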

                                                                                                                      1. 1

                                                                                                                        Thanks for replying!

                                                                                                              1. 9

                                                                                                                Finally it means you need two separate sets of keys to do almost the same thing inside or outside of Emacs. Pretty nasty and pointless!

                                                                                                                You can also solve this by going in the other direction and making Emacs your window manager. I moved to EXWM a few years ago and have never looked back; it’s incredible how much better basically all of my workflows have become. It also doesn’t really need any frequent handholding; I seem to touch this config about once per month.

                                                                                                                This post reminds me of another nested version of this problem that some EXWM users have: Applications (especially browsers) with tabs, which they’d prefer to move into Emacs as well.

                                                                                                                1. 1

                                                                                                                  And even if you don’t use EXWM, this problem becomes less significant the more you live inside Emacs. So that’s one motivation to do things in Emacs.

                                                                                                                  1. 1

                                                                                                                    Applications (especially browsers) with tabs, which they’d prefer to move into Emacs as well.

                                                                                                                    surf is a handy little browser for this sort of thing

                                                                                                                    1. 1

                                                                                                                      I tried one-tab-per-window inside EXWM with Firefox, but it was significantly slower to create a new window than a new tab, so I ended up having to ditch it. I hadn’t tried it with one of the WebKit ones; maybe once Wayland forces me to stop using EXWM I’ll give it a shot. Is it easy to configure surf with emacs keys? When I looked at it years ago it didn’t look straightforward.

                                                                                                                      1. 1

                                                                                                                        Is it easy to configure surf with emacs keys?

                                                                                                                        I don’t use surf, but there are similar patches for dwm. I added emacs keys to tabbed (haven’t published the patch yet), and it was pretty easy there too. I’m guessing surf would be similar to these other suckless projects.