1. 14

    Weird coincidence: this release is mostly about the Racket-on-Chez effort and lobste.rs (randomly?) decided this story’s slug should be chezwl. Clearly, we live in a simulation and the PRNG is pretty poor.

    1. 2

      Yeah, and the world is probably running with universe.rkt!

    1. 10

      I switched off of Google products about 6 months ago.

I bought a Fastmail subscription, went through all my online accounts (I use a password manager, so this was relatively easy) and either deleted the ones I didn’t need or switched them to the new e-mail address. Next, I set the @gmail address to forward all incoming mail to my new address and then delete it. Finally, I deleted all my existing mail using a filter. I had been using mbsync for a while prior to this, so all of my historical e-mail was already synced to my machine (and backed up).

      Re. GitHub, for the same reasons you mentioned, I turned my @gmail address into a secondary e-mail address so that my commit history would be preserved.

I still get the occasional newsletter on the old address, but that’s about it. Other than the few hours it took to update all my online accounts when I decided to make the switch, I haven’t been inconvenienced at all.

      1. 4

It’s really exciting to see people migrating away from Gmail, but the frequency with which these posts seem to co-occur with Fastmail is somehow disappointing. Before Gmail we had Hotmail and Yahoo Mail; after Gmail, it would be nicer to avoid yet more centralization.

        One of the many problems with Gmail is their position of privilege with respect to everyone’s communication. There is a high chance that if you send anyone e-mail, Google will know about it. Swapping Google out for Fastmail doesn’t solve that.

Not offering any solution, just a comment :) It’s damned hard to self-host a reputable mail server these days, and although I host one myself, it’s not really a general solution.

        1. 5

          Swapping Google out for Fastmail solves having Google know everything about my email. I’m not nearly as concerned about Fastmail abusing their access to my email, because I’m their customer rather than their product. And with my own domain, I can move to one of their competitors seamlessly if ever that were to change. I have no interest in running my own email server; there are far more interesting frustrations for my spare time.

          1. 2

            I can agree that a feasible way to avoid centralization would be nicer. However, when people talk about FastMail / ProtonMail, they still mean using their own domain name but paying a monthly fee (to a company supposedly more aligned with the customer’s interests) for being spared from having to set up their own infrastructure that: (A) keeps spam away and (B) makes sure your own communication doesn’t end up in people’s Junk folder.

To this end, I think it’s a big leap towards future-proofing your online presence, and not necessarily comparable to moving from Yahoo! to Google.

            1. 3

              for being spared from having to set up their own infrastructure that: (A) keeps spam away and (B) makes sure your own communication doesn’t end up in people’s Junk folder.

I’m by no means against Fastmail or Proton, and I don’t think everyone should set up their own server if they don’t want to, but it’s a bit more nuanced.

SpamAssassin with default settings is very effective at detecting obvious spam. Beyond obvious spam it gets more interesting. Basically, if you never see any spam, it means that either you haven’t told anyone your address, or the filter has false positives.

This is where the “makes sure your own communication doesn’t end up in people’s Junk folder” part comes into play. Sure, you will run into issues if you set up your server incorrectly (e.g. as an open relay) or aren’t using best current practices that help other servers check whether email using your domain in From: is legitimate, and report suspicious activity to the domain owner (SPF, DKIM, DMARC). A correctly configured server SHOULD reject messages that are not legitimate according to the sender’s domain’s stated policy.
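For concreteness, SPF and DMARC are just TXT records published on your domain. A minimal, hypothetical setup for example.com might look something like this (the policy values here are illustrative, not recommendations):

```
example.com.        IN TXT "v=spf1 mx -all"
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

DKIM additionally requires publishing your signing public key under a selector (e.g. something._domainkey.example.com) and configuring your MTA to sign outgoing mail.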

Otherwise, a correctly configured server SHOULD accept messages that a human would never consider spam. The problem is that certain servers reject such messages all the time, and don’t always send DMARC reports back.

              And GMail is the single biggest offender there. If I have a false positive problem with someone, it’s almost invariably GMail, with few if any exceptions.

              Whether it’s a cockup or a conspiracy is debatable, but the point remains.

            2. 2

              We’re not going to kill GMail. Let’s be realistic, here. Hotmail is still alive and healthy, after all.

              Anyone who switches to Fastmail or ProtonMail helps establish one more player in addition to GMail, not instead of it. That, of course, can only be a good thing.

              1. 1

Just to bring in one alternative service (since you are right, most people here seem to advise Fastmail or ProtonMail): I found mailbox.org one day. No experience with them, though.

              2. 1

                I still get the occasional newsletter on the old address, but that’s about it.

Once you’ve moved most things over, consider adding a filter on your new account to move the forwarded e-mails to a separate folder. That way it becomes immediately clear what fell through the cracks.

                1. 1

                  Sorry, I wasn’t clear. E-mails sent to the old address are forwarded to the new one and then deleted from the GMail account. When that happens I just unsubscribe, delete the e-mail and move on. It really does tend to only be newsletters.

                  I suppose one caveat to my approach and the reason this worked so well for me is that I had already been using my non-gmail address for a few years prior to making the change so everyone I need to interact with already knows to contact me using the right address.

              1. 1

If I understand the docs correctly, the Racket struct is always the source of truth, and the db table is defined from it. I would love to see support for the other use case too, where you can use Racket to quickly connect to an existing database and have structs autogenerated for the tables.

                1. 3

                  the racket struct is always the source of truth, and the db table is defined from it.

                  I will update the docs to clarify this; thank you for bringing it up. The table is meant to be the source of truth, with the schema able to represent either the whole table or just a part of it (to support use cases with really wide tables). Creating the table from the schema/struct is convenient for the tutorial but not really the intended use case (in fact, in real projects I use north for migrations and do not use deta’s DDL features at all).

                  I would love to see support for the other use case too, where you can use racket to quickly connect to an existing database and have structs autogenerated for the tables.

racquel does support this but, unfortunately, it is based around the class system rather than structs.

                  1. 1

I filed a bug against racquel which just sat there, which makes me think the project isn’t really active any more :(

                1. 2

                  I watched this live and thought it was pretty cool. I don’t think it’ll replace paredit for me anytime soon, but I’m always happy to see folks experimenting with structured editing.

                  The repo is here and you can find the installation instructions here if you want to play around with it.

                  1. 1

Andrew presented it at our local Clojure meetup a little while back, and we had some discussion around it. The one thing that makes it very appealing is that it’s designed to behave very much like a regular text-based editor. Everything is keyboard driven, and you can just type as you go. It’s the first visual editor I can see myself using, as it doesn’t really compromise efficiency.

I think the big value add on top of something like paredit is that it can be much more semantically aware. For example, in the demo the editor knows that the if form accepts 3 arguments. Another useful aspect is that you can do structural transformations like if -> cond automatically. I really like the idea of working with code on a semantic level as opposed to just manipulating text.

                  1. 0

In the era of a clusterfsck of dozens of different platforms, creating a language which can’t easily produce a dependency-free binary that runs on a customer’s machine without any sort of additional runtime bring-up (even an “installation”) can’t be called “acceptable”.

The same goes for Python, except its interpreter is available by default on all three major desktop platforms as of last month (thanks, Microsoft), at least until the next major macOS release (thanks, Apple!).

                    1. 19

While it’s true that Racket can’t produce a single statically-linked binary for your program (assuming that’s what you mean by a dependency-free binary), it can certainly produce a self-contained distribution of your program by packing your program and the Racket runtime into a single executable along with all of its dependencies. This is how I distribute all of my web applications and it works out nicely.



                      1. 2

                        Do you have any end-to-end examples that you can share of using these tools to build a self-contained web app?

Thanks either way (the links are a good start)!

                        1. 12

                          It really doesn’t take much beyond having an app with a main submodule as its entrypoint and then running

                          raco exe -o app main.rkt

                          followed by

                          raco distribute dist app

Any libraries referenced by any of the modules in the app, as well as any files referenced with define-runtime-path, will automatically be copied into the distribution and their paths updated for the runtime code. There’s no need for any special configuration files and, especially, no need for a MANIFEST.in which, if you’ve ever tried to do this with Python, you might know is a horrible experience.

                          For Linux distributions (since I am on macOS), I run the same process inside Docker (in CI) to produce a distribution. Here are my test-app.sh and build-app.sh scripts for one of my applications:

                          The raco koyo dist line you see in the second file is just a combination of the two commands (exe and distribute). In fact, if you want to give this a go yourself, then you can use my yet-to-be-fully-documented koyo library to create a basic webapp from a template and then create a distribution for it:

                          $ raco pkg install koyo
                          $ raco koyo new example  # think "rails new" or "django startproject"
                          $ cd example
                          $ less README.md  # optional
                          $ raco koyo dist
                          $ ./dist/bin/example  # to run the app

                          Hope that helps!

                          P.S.: Here is the implementation for koyo dist and here is the blueprint (template) that is used when you run raco koyo new.

                          1. 2

                            Thanks, this was just the kind of example I was hoping for.

                      2. 4

                        can’t be called “acceptable”.

To you, of course. Even the Go nerds moved on to Docker for deployments; you should consider it. I use Docker for Python codebases to manage things without needing to remember the exact invocation of venv, pip, etc.

                        However, for me raco exe has been more than enough. Have you tried it?


                        1. 7

                          [edit, spell roost correctly]

I’m not sure which Go nerds you’re referring to. Perhaps I’m the exception that proves the rule, or perhaps the Go community’s more diverse than you’ve seen. I love the fact that terraform, hugo, direnv, and a big handful of my other tools (large and small) are simple single-file executables (ok, terraform is bigger than that, but…). It’s one of the things that attracts me to the language.

                          I’m burnt out on solving a problem at build time and then having to solve it again each time I install an application (PERL5LIB, PYTHONPATH, LD_LIBRARY_PATH, sigh…). Thank goodness for Spack for my work deployments.

I’ve found Docker to be useful in the small (”I use Docker on my laptop to X, Y, and Z.”) and in the big (”We have an IT team that wrangles our Docker hosting servers.”). For the stuff in the middle (”Dammit Jim, I’m a bioinformatician, not a sysadmin!”) it turns into a problem on its own.

If you know how to use venv, pip, etc. to build the Docker image, you could use them to do the deployment (though not for free…). I’ve seen many situations where people didn’t understand the Python tool chain but could hide the mess from review, at least until it came home to roost.

                          1. 5

                            I agree with you. I build lots of tiny, glue-style Go tools (mostly for my coworkers on my ops team), and somebody always ends up contributing a Dockerfile.

I still prefer

./ldap_util --validate user1

over

docker run --rm -it -e "USER_TO_VALIDATE=user1" ldap_util:latest
                            1. 1

                              I just think of docker images as universal server executables, which makes it easier to accept docker as a whole.

                            2. 2

                              I don’t think it’s bad that they have executables, but installing racket is very simple, and most of your complaints are actually places where python & perl are much worse than racket.

This all sounds like you haven’t tried Racket and are ragging on it with a general complaint that, in Racket, isn’t nearly the same problem, without having worked with it or researched it.

                              1. 1

                                I think that you’re replying to my grumpy comments above….

                                Most of that grumpiness was meant to be targeted at Docker and the anti-pattern of using it to hide a mess rather than clean the mess up; I’ve spent a lot of time (though it has earned me a fair bit of income) cleaning up problems downstream of Docker solutions. I made a similar comment somewhere else here on Lobste.rs that sums up my feelings; I’ve seen Docker used effectively in the small (e.g. on individual machines) and in the large (sites with savvy teams and resources to invest in keeping everything Docker-related happy) but the middle seems to run into problems.

Other than the grumpiness above, I really don’t mean to rag on Racket; I’ve seen some neat things built with it (e.g. Pollen).

You’re right that I haven’t spent much time with Racket; a big part of that is burnout from installing things that require finding and installing dependencies at “install time”.

                                I’m excited by @bodgans nicely packaged demo of distribute earlier in the thread.

Are there any examples of tools written in Racket (or its various sub-languages) that have been packaged up for installation by tools like Homebrew or Spack or …?

                        1. 20

I’ve been using Racket as my daily driver for 10 months at this point and I much prefer it to Python, which I had previously been using professionally for a decade. Everything is more well thought out and well integrated. The language is significantly faster, the runtime is evented and the concurrency primitives are based on Concurrent ML, which I find to be a nice model. There is support for real parallelism (in the same manner that Python 3 is going to support real parallelism in the future: one interpreter per system thread, communicating via channels). I could go on and on. Overall, the experience is just much, much nicer.

                          The only real downsides are

                          • the learning curve – there is a lot of documentation and a lot of concepts to learn coming from a language like Python – and
                          • the lack of 3rd party libraries.

                          To be honest, though, I don’t consider the latter that big of a deal. My approach is to simply write whatever library I need that isn’t available. I’ve released 11 such libraries for Racket in the past year, but I only invested about two weeks total working on all of them and the upside is they all behave in exactly the way I want them to. Part of the reason for that is that you can get shit done quickly in Racket (not unlike Python in that regard) and part of it is knowing what I want and exactly how to build it which comes with experience.

                          EDIT: I just ran cloc over all of my racket projects (public and private) and it appears I’ve written over 70k sloc in Racket so I would say I’m way past the honeymoon phase.

                          1. 14

To be honest, though, I don’t consider [the lack of 3rd party libraries] that big of a deal. My approach is to simply write whatever library I need that isn’t available.

                            I use Python almost exclusively for its ecosystem, and I imagine that’s pretty common. As much as I would love to reinvent the world, I don’t think that reimplementing numpy (for example) is ever going to be on my plate. But, for projects with limited needs for external libraries, there are certainly many better languages, and Racket’s a great one.

                            1. 3

                              the learning curve – there is a lot of documentation and a lot of concepts to learn coming from a language like Python

I’ve been meaning to pick up Racket for a few weeks now. Also coming from a language like Python, are there any specific resources you would recommend?

                              1. 12

                                Racket’s documentation is excellent.

                                1. 5

                                  I’ve been using a combination of the official guides, the reference and reading the source code for Racket itself and for whatever library I’m interested in using. I also learned a lot by joining the racket-users mailing list.

If you’re like me, you might be used to skimming the Python documentation for the information that you need. I learned the hard way that it’s not a good idea to do that with Racket. You’ll often save time by taking a few minutes to read and absorb the wall-of-text documentation you hit when you look up a particular thing you’re interested in.

                                  You might also find this blog post by Alex Harsányi useful.

                                  1. 2

They have an official book that teaches both programming and Racket. Might be worth looking at.

                                1. 15

                                  Note that this is from 2010 and a lot has changed since then in Racket-land. I think both Racket and Clojure are fine languages, but Racket has been my daily driver this past year and I prefer it for various reasons.

                                  Unfortunately, Racket provides no built-in lazy list/stream, so you’d need to realize the entire list.

                                  It certainly does now:

                                  There’s even a whole language dedicated to this that interoperates with normal racket code!

                                  But even if that’s what you’d want to do, Racket doesn’t provide a built-in function to give you back the list of keys, values or pairs in a hash table.

                                  It does provide all three:

                                  Instead, you’re encouraged to iterate through the pairs using an idiosyncratic version of its for construct, using a specific deconstructing pattern match style to capture the sequence of key/value pairs that is used nowhere else in Racket. (Speaking of for loops, why on earth did they decide to make the parallel for loop the common behavior, and require a longer name (for*) for the more useful nested loop version?) Put simply, using hash tables in Racket is frequently awkward and filled with idiosyncracies that are hard to remember.

This is more a matter of taste; it seems the author is bothered by Racket’s use of values. The sequence implementation for hash tables returns key/value pairs as values, and all values can be destructured within the for form, so that seems quite natural to me.

                                  There are downloadable libraries that offer an assortment of other data structures, but since these libraries are made by a variety of individuals, and ported from a variety of other Scheme implementations, the interfaces for interacting with those data structures are even more inconsistent than the built-ins, which are already far from ideal.

                                  All of the libraries I’ve worked with in the past year have followed more or less the same conventions.

                                  1. 1

                                    The tracking website (matomo) has an invalid / expired certificate for the analytics.linki.tools domain

                                    1. 1

                                      Thanks! I’ll let Paulo know.

                                      1. 2

                                        Thanks for the heads-up.

                                    1. 3

                                      Interesting, thanks for the share. From the Introduction to Marionette page:

                                      If this sounds similar to Selenium/WebDriver then you’re correct! Marionette shares much of the same ethos and API as Selenium/WebDriver, with additional commands to interact with Gecko’s chrome interface. Its goal is to replicate what Selenium does for web content: to enable the tester to have the ability to send commands to remotely control a user agent.

                                      - https://firefox-source-docs.mozilla.org/testing/marionette/marionette/Intro.html

                                      This is an aside from your submission, but from that description, I’m still not entirely sure what differentiates it from Selenium/Webdriver?

                                      1. 5

                                        I don’t know the full history either, but from what I’ve gathered:

• up until Quantum, WebDriver support was built into a Firefox extension,
• after Quantum, they built the Marionette protocol directly into the browser,
• the Marionette protocol is a custom TCP protocol that’s not compatible with the WebDriver protocol, so they also release geckodriver, which proxies the WebDriver protocol (REST over HTTP) to Marionette.

                                        I opted to go w/ Marionette directly partly because I’d rather run Firefox by itself rather than Firefox + geckodriver and partly because implementing this sort of thing (the Web Driver protocol) over HTTP seems wasteful.

                                        That said, the code is written in such a way that switching protocols down the line (and potentially supporting more browsers) is easy.
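For anyone curious what the custom TCP protocol looks like on the wire: as far as I can tell from the geckodriver sources, it’s length-prefixed JSON. Here’s a rough Python sketch of the framing (the function names are mine, and the message layout is my reading of the sources, so treat it as a sketch rather than a spec):

```python
import json

def encode_command(msg_id, name, params):
    # Commands appear to be a JSON array [type, id, name, params],
    # where type 0 means "command", framed as b"<length>:<json>".
    body = json.dumps([0, msg_id, name, params])
    return f"{len(body)}:{body}".encode("utf-8")

def decode_message(data):
    # Split off the length prefix and parse the JSON payload.
    length, _, body = data.partition(b":")
    assert len(body) == int(length), "truncated message"
    return json.loads(body)

# Against a live browser this would look roughly like:
#   sock = socket.create_connection(("127.0.0.1", 2828))  # default port
#   sock.sendall(encode_command(1, "WebDriver:Navigate",
#                               {"url": "https://racket-lang.org"}))
```

The nice thing is that once you have the framing, the rest is just request/response over JSON, which is why skipping the geckodriver HTTP layer is feasible.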

                                        1. 3

                                          switching protocols down the line … is easy

                                          Except you named the repository marionette :)

                                          1. 2

                                            Ha! Fair enough.

                                      1. 1

Do you have a link to the full protocol specs? The link on the Mozilla page 404s :/

                                        1. 4

                                          I haven’t been able to find them either so I’ve been getting by by reading the sources for geckodriver:

                                          There’s also the marionette implementation itself:

                                          1. 2

Thanks for the links! I wonder if protocol.md ever existed; archive.org doesn’t know about it either. Seems that I have to use the source too :)

                                        1. 11

                                          Summary: author’s expectations of a young language exceed the actual implementation, so they write a Medium article.

                                          If you can’t tell: slightly triggering article for me, and I don’t use/advocate for Elm. I’d much prefer if the author either pitched in and helped, or shrugged and moved on to something else. Somehow, yelling into the void about it is worse to me, I think because there are one or two good points in there sandwiched between non-constructive criticisms.

                                          1. 32

                                            The article provides valuable information for people considering using Elm in production. The official line on whether Elm is production ready or not is not at all clear, and a lot of people suggest using it.

                                            1. 6

                                              I didn’t like that he makes unrelated and unsupported claims in the conclusion (“Elm is not the fastest or safest option”). That’s not helpful.

                                              1. 5

I read “fastest” and “safest” as referring to “how fast I can get work done” and “is this language a safe bet”, not fast and safe in the sense of performance. If that’s the right interpretation, then those conclusions flow naturally from the observations he makes in the article.

                                                1. 1

                                                  Right, the author made the same clarification to me on Twitter, so that’s definitely what he meant. In that sense, the conclusion is fine. Those are very ambiguous words though (I took them to mean “fastest runtime performance” and “least amount of runtime errors”).

                                                  1. 1

                                                    Yeah definitely. I also was confused initially.

                                              2. 3

TBF, I was a little too snarky in my take. I don’t want to shut down legitimate criticism.

                                                The official line on whether Elm is production ready or not is not at all clear, and a lot of people suggest using it.

                                                That ambiguity is a problem. There’s also a chicken/egg problem with regard to marketing when discussing whether something is production ready. I’m not sure what the answer is.

                                                1. 3

                                                  It’s even more ambiguous for Elm. There are dozens of 100K+ line commercial code bases out there. How many should there be before the language is “production ready”? Clearly, for all those companies, it already is.

                                                  Perhaps the question is misguided and has reached “no true Scotsman” territory.

                                                  1. 3

                                                    That’s one reason why this topic is touchy to me: things are never ready until the Medium-esque blogosphere spontaneously decides it is ready, and then, without a single ounce of discontinuity, everyone pretends like they’ve always loved Elm, and they’re excited to pitch in and put forth the blood, sweat, and tears necessary to make a healthy, growing ecosystem. Social coding, indeed.

                                                    In a sense, everyone wants to bet on a winner, be early, and still bet with the crowd. You can’t have all those things.

                                                    1. 2

                                                      I like your last paragraph. When I think about it, I try to reach the same impossible balance when choosing technologies.

                                                      I even wrote a similar post about Cordova once (“is it good? is it bad?”). Hopefully it was a bit more considered as I’d used it for 4 years before posting.

                                                      The thing that bothers me with the developer crowd is somewhat different, I think. It’s the attempt to mix the other two unmixable things. On one hand, there’s the consumerist attitude to choosing technologies (“Does it work for me right now? Is it better, faster, cheaper than the other options?”). On the other hand, there are demands for all the benefits of open source like total transparency, merging your PR, and getting your favourite features implemented. Would anyone demand this of proprietary software vendors?

                                                      I’m not even on the core Elm team, I’m only involved in popularising Elm and expanding the ecosystem a bit, but even for me this attitude is starting to get a bit annoying. I imagine it’s worse for the core team.

                                                      1. 2

Hey, thanks for your work on Elm. I’m much less involved than you, but even I find the “walled garden” complaints a little irritating. I mean, if you don’t like this walled garden, there are plenty of haphazard dumping grounds out there to play in, and even more barren desert. Nobody’s forcing anybody to use Elm! For what it’s worth, I think Evan and the Elm core team are doing great work. I’m looking forward to Elm 1.0, and I hope they take their time and really nail it.

                                                      2. 2

                                                        The author of this article isn’t pretending to be an authority on readiness, and claiming that they’ll bandwagon is unwarranted. This article is from someone who was burned by Elm and is sharing their pain in the hopes that other people don’t get in over their heads.

                                                        Being tribal, vilifying the “Medium-esque blogosphere” for acts that the author didn’t even commit, and undermining their legitimate criticisms with “well, some people sure do love to complain!” is harmful.

                                                  2. 3

                                                    I’d like to push back on this. What is “production ready”, exactly? Like I said in another comment, there are dozens of 100K+ line commercial Elm code bases out there. Clearly, for all those companies, it already is.

                                                    I’ve used a lot of other technologies in production which could easily be considered “not production ready”: CoffeeScript, Cordova, jQuery Mobile, Mapbox. The list goes on. They all had shortcomings, and sometimes I even had to make compromises in terms of requirements because I just couldn’t make particular things work.

                                                    The point is, it either works in your particular situation, or it doesn’t. The question is meaningless.

                                                    1. 5

                                                      I disagree that the question is meaningless just because it has a subjective aspect to it. A technology stack is a long-term investment, and it’s important to have an idea of how volatile it’s going to be. For example, changes like the removal of the ability to do JS interop even in your own projects clearly came as a surprise to a lot of users. To me, a language being production ready means that it’s at the point where things have mostly settled down, and there won’t be frequent breaking changes going forward.

                                                      1. 1

                                                        By this definition, Python wasn’t production ready long after the release of Python 3. What is “frequent” for breaking changes? For some people it’s 3 months, for others it’s 10 years. It’s not a practical criterion.

                                                        Even more interestingly, Elm has been a lot less volatile than most projects, so it’s production ready by your definition. Most people complain that it’s changing too slowly!

                                                        (Also, many people have a different perspective about the interop issue; it wasn’t a surprise. I don’t want to rehash all that though.)

                                                        1. 4

                                                          Python wasn’t production ready long after the release of Python 3.

                                                          Python 3 was indeed not production-ready by many people’s standards (including mine and the core team’s based on the changes made around 3.2 and 3.3) after its release up until about version 3.4.

                                                          Even more interestingly, Elm has been a lot less volatile than most projects, so it’s production ready by your definition. Most people complain that it’s changing too slowly!

                                                          “it’s improving too slowly” is not the same as “it’s changing too slowly”.

                                                          1. 1

                                                            Sorry, this doesn’t make any sense.

                                                            By @Yogthos’s definition, neither Python 2 nor Python 3 were “production ready”. But if we’re going to write off a hugely popular language like that, we might as well write off the whole tech industry (granted, on many days that’s exactly how I feel).

                                                            Re Elm: again, by @Yogthos’s definition it’s perfectly production ready because it doesn’t make “frequent breaking changes”.

                                                            1. 4

                                                              By @Yogthos’s definition, neither Python 2 nor Python 3 were “production ready”.

                                                              Python 2 and 3 became different languages at the split as evidenced by the fact that they were developed in parallel. Python 2 was production ready. Python 3 was not. The fact that we’re using numbers to qualify which language we’re talking about proves my point.

                                                              It took five years for Django to get ported to Python 3. (1 2)

                                                              Re Elm: again, by @Yogthos’s definition it’s perfectly production ready because it doesn’t make “frequent breaking changes”.

                                                              You’re getting hung up on the wording here; “frequent” is not as important to Yogthos’ argument as “breaking changes” is.

                                                              1. 1

                                                                I don’t think we’re going to get anywhere with this discussion by shifting goalposts.

                                                          2. 1

                                                            I think most people agree that Python 3 was quite problematic. Your whole argument seems to be that just because other languages have problems, you should just accept random breaking changes as a fact of life. I strongly disagree with that.

                                                            The changes around ecosystem access are a HUGE breaking change. Basically any company that invested in Elm and was doing JS interop is now in a really bad position. They either have to stay on 0.18, re-implement everything they’re using in Elm, or move to a different stack.

                                                            Again, as I noted there is subjectivity involved here. My standards for what constitutes something being production ready are different than yours apparently. That’s fine, but the information the article provides is precisely what I’d want to know about when making a decision of whether I’d want to invest into a particular piece of technology or not.

                                                            1. 1

                                                              I don’t think you are really aware of the changes to Elm because you’re seriously overstating how bad they were (“re-implement everything” was never the case).

                                                              I agree that there is useful information in the article – in fact, I try to read critical articles first and foremost when choosing technologies so it’s useful to have them. I never said that we should accept “random breaking changes” either (and it isn’t fair to apply that to Elm).

                                                              I still don’t see that you have a working definition of “production ready” – your definition seems to consist of a set with a single occupant (Clojure).

                                                              As an aside, this is the first time I’ve had an extended discussion in the comments here on Lobsters, and it hasn’t been very useful. These things somehow always end up looking like everyone’s defending their entrenched position. I don’t even have an entrenched position – and I suspect you may not either. Yet here we are.

                                                              1. 2

                                                                Perhaps I misunderstand the situation here. If a company has an Elm project in production that uses JS interop, what is the upgrade path to 0.19? Would you not have to rewrite any libraries from the NPM ecosystem in Elm?

                                                                I worked with Java for around a decade before Clojure, and it’s always been rock solid. The biggest change that’s happened was the introduction of modules in Java 9. I think that’s a pretty good track record. Erlang is another great example of a stack that’s rock solid, and I can name plenty of others. Frankly, it really surprises me how cavalier some developer communities are regarding breaking changes and regressions.

                                                                Forum discussions are always tricky because we tend to use the same words, but we assign different meanings to them in our heads. A lot of the discussion tends to be around figuring out what each person understands when they say something.

                                                                In this case it sounds like we have different expectations for what production-ready technology should look like. I’m used to working with technologies where regressions are rare, and this necessarily colors my expectations. My views on technology adoption are likely more conservative than those of the majority of developers.

                                                                1. 2

                                                                  Prior to the 0.19 release, there was a way to directly call JS functions from Elm by relying on a purely internal mechanism. Naturally, some people started doing this, despite repeated warnings that they really shouldn’t. It wasn’t widespread, to my knowledge.

                                                                  All the way back in 2017, a full 17 months before the 0.19 release, it was announced that this mechanism would be removed. It was announced again 5 months before the release.

                                                                  Of course, a few people got upset and, instead of finding a migration path, complained everywhere they could. I think one guy wrote a whole UI framework based on the hack, so predictably he stomped out of the community.

                                                                  There is an actual JS interop mechanism in Elm called ports. Anybody who used this in 0.18 (as they should have) could continue using it unchanged in 0.19. You can use ports to integrate the vast majority of JS libraries with Elm. There is no need to rewrite all JavaScript in Elm. However, ports are asynchronous and require marshalling data, which is why some people chose to use the internal shortcut (aka hack) instead.
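To make the asynchrony concrete, here is a rough JavaScript sketch of the request/response round-trip that ports impose on even a synchronous library call. The port names and the `makePorts` mock are made up for illustration – this is not Elm’s runtime – though the `subscribe`/`send` shape mirrors the real ports API:

```javascript
// Minimal stand-in for the JS side of an Elm app's ports object.
// Each port is a tiny pub/sub channel: Elm sends values out through
// one port, and JS sends results back in through another.
function makePorts() {
  const mkPort = () => {
    const subs = [];
    return {
      subscribe: (fn) => subs.push(fn),
      send: (v) => subs.forEach((fn) => fn(v)),
    };
  };
  return { sqrtRequest: mkPort(), sqrtResponse: mkPort() };
}

const ports = makePorts();

// JS side: listen for requests, call the synchronous library function,
// and push the result back through the response port.
ports.sqrtRequest.subscribe((n) => {
  ports.sqrtResponse.send(Math.sqrt(n)); // sync call, async-style plumbing
});

// "Elm side" (simulated): the result arrives via a subscription/message,
// not as a return value – which is why port calls don't compose like Tasks.
let result = null;
ports.sqrtResponse.subscribe((r) => { result = r; });
ports.sqrtRequest.send(16);
console.log(result); // 4
```

The point of the sketch is the shape, not the mechanics: a one-line synchronous call becomes a request port, a response port, and a message handler, which is the overhead people were avoiding with the internal hack.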

                                                                  So, if a company was using ports to interop with JS, there would be no change with 0.19. If it was using the hack, it would have to rewrite that portion of the code to use ports, or custom elements or whatever – but the rework would be limited to bindings, not whole JS libraries.

                                                                  There were a few other breaking changes, like removing custom operators. However, Elm has a tool called elm-upgrade which helps to identify these and automatically update code where possible.

                                                                  There were also fairly significant changes to the standard library, but I don’t think they were any more onerous than some of the Rails releases, for example.

                                                                  Here are the full details, including links to previous warnings not to use this mechanism, if you’re interested: https://discourse.elm-lang.org/t/native-code-in-0-19/826

                                                                  I hope this clarifies things for you.

                                                                  Now, regarding your “rock solid” examples, by which I think you mean no breaking changes. If it’s achievable, that’s good – I’m all for it. However, as a counterexample, I’ll bring up C++, which tied itself into knots by never breaking backward compatibility. It’s a mess.

                                                                  I place less value on backward compatibility than you do. I generally think that backward compatibility ultimately brings software projects down. Therefore, de-prioritising it is a safer bet for ensuring the longevity of the technology.

                                                                  Is it possible that there are technologies which start out on such a solid foundation that they don’t get bogged down? Perhaps – you bring up Clojure and Erlang. I think Elm’s core team is also trying to find that kind of foundation.

                                                                  But whether Elm is still building up towards maturity or its core team simply has a different philosophy regarding backward compatibility, I think it’s at least very clear that that’s how it is if you spend any time researching it. So my view is that anybody who complains about it now has failed to do their research before putting it into production.

                                                                  1. 1

                                                                    I feel like you’re glossing over the changes from native modules to ports. For example, native modules allowed exposing external functions as Tasks, allowing them to be composed. Creating Tasks also allows for making synchronous calls that return a Task Never a, which is obviously useful.

                                                                    On the other hand, ports can’t be composed like Tasks, and as you note can’t be used to call synchronous code which is quite the limitation in my opinion. If you’re working with a math library then having to convert the API to async pub/sub calls is just a mess even if it is technically possible to do.

                                                                    To sum up, people weren’t using native modules because they were just completely irresponsible and looking to shoot themselves in the foot, as you seem to be implying. Being able to easily leverage the existing ecosystem obviously saves development time, so it’s not exactly surprising that people started using native modules. Once you have a big project in production, it’s not trivial to go and rewrite all your interop in 5 months, because you have actual business requirements to work on. I’ve certainly never been in a situation where I could just stop all development and go refactor my code for as long as I wanted.

                                                                    This is precisely the kind of thing I mean when I talk about languages being production ready. How much time can I expect to spend chasing changes in the language as opposed to solving business problems? The more breaking changes there are, the bigger the cost to the business is.

                                                                    I’m also really struggling to follow your argument regarding things like Rails or C++ to be honest. I don’t see these as justifying unreliable tools, but rather as examples of languages with high maintenance overhead. These are technologies that I would not personally work with.

                                                                    I strongly disagree with the notion that backwards compatibility is something that is not desirable in tooling that’s meant to be used in production, and I’ve certainly never seen it bring any software projects down. I have however seen plenty of projects being brought down by brittle tooling and regressions.

                                                                    I view such tools as being high risk because you end up spending time chasing changes in the tooling as opposed to solving business problems. I think that there needs to be a very strong justification for using these kinds of tools over ones that are stable.

                                                                    1. 3

                                                                      I think we’re talking past each other again, so I’m going to wrap this up. Thank you for the discussion.

                                                        2. 5

                                                          Here are my somewhat disjoint thoughts on the topic before the coffee has had a chance to kick in.

                                                          What is “production ready”, exactly?

                                                          At a minimum, the language shouldn’t make major changes between releases that require libraries and codebases to be reworked. If it’s not at a point where it can guarantee such a thing, then it should state that fact up front. Instead, its creator and its community heavily promote it as being the best thing since sliced bread (“a delightful language for reliable webapps”) without any mention of the problems described in this post. New folks take this to be true and start investing time into the language, often quite a lot of time since the time span between releases is so large. By the time a new release comes out and changes major parts of the language, some of those people will have invested so much time and effort into the language that the notion of upgrading (100K+ line codebases, as you put it) becomes downright depressing. Not to mention that most of those large codebases will have dependencies that themselves will need upgrading or, in some cases, will have to be deprecated (as elm-community has done for most of my libraries with the release of 0.19, for example).

                                                          By promoting the language without mentioning how unstable it really is, I think you are all doing it a disservice. Something that should be perceived as good, like a new release that improves the language, ends up being perceived as a bad thing by a large number of the community and so they leave with a bad taste in their mouth – OP made a blog post about it, but I would bet the vast majority of people just leave silently. You rarely see this effect in communities surrounding other young programming languages and I would posit that it’s exactly because of how they market themselves compared to Elm.

                                                          Of course, in some cases it can’t be helped. Some folks are incentivized to keep promoting the language. For instance, you have written a book titled “Practical Elm”, so you are incentivized to promote the language as such. The more new people who are interested in the language, the more potential buyers you have or the more famous you become. I believe your motivation for writing that book was pure, and no one’s going to get rich off of a book on Elm. But my point is that you are more bought into the language than others normally are.

                                                          sometimes I even had to make compromises in terms of requirements because I just couldn’t make particular things work.

                                                          That is the very definition of not-production-ready, isn’t it?

                                                          Disclaimer: I quit Elm around the release of 0.18 (or was it 0.17??) due to a distaste for Evan’s leadership style. I wrote a lot of Elm code (1 2 3 4 and others) and put some of it in production. The latter was a mistake and I regret having put that burden on my team at the time.

                                                          1. 1

                                                            From what I’ve seen, many people reported good experiences with upgrading to Elm 0.19. Elm goes further than many languages by automating some of the upgrades with elm-upgrade.

                                                            FWIW, I would also prefer more transparency about Elm development. I had to scramble to update my book when Elm 0.19 came out. However, not for a second am I going to believe that I’m entitled to transparency, or that it was somehow promised to me.

                                                            To your other point about marketing, if people are making decisions about putting Elm into production based on its tagline, well… that’s just bizarre. For example, I remember looking at React Native in its early stages, and I don’t recall any extensive disclaimers about its capabilities or lack thereof. It was my responsibility to do that research - again, because limitations for one project are a complete non-issue for another project. There’s just no one-size-fits-all.

                                                            Finally, calling Elm “unstable” is simply baseless and just as unhelpful as the misleading marketing you allege. I get that you’re upset by how things turned out, but can’t we all have a discussion without exaggerated rhetoric?

                                                            That is the very definition of not-production-ready, isn’t it?

                                                            Exactly my point: there is no such definition. All those technologies I mentioned were widely used at the time. I put them into production too, and it was a good choice despite the limitations.

                                                            1. 4

                                                              From what I’ve seen, many people reported good experiences with upgrading to Elm 0.19. Elm goes further than many languages by automating some of the upgrades with elm-upgrade.

                                                              And that’s great! The issue is the things that cannot be upgraded. Let’s take elm-combine (or parser-combinators as it was renamed to), for example. If you depended on the library in 0.18 then, barring the invention of AGI, there’s no automated tool that can help you upgrade: your code will have to be rewritten against a different library, because elm-combine cannot be ported to 0.19 (not strictly true, because it can be ported, but only by the core team – my point still stands because it won’t be). Language churn causes ecosystem churn which, in turn, causes pain for application developers, so I don’t think it’s a surprise that folks get angry and leave the community when this happens, given that they may not have had any prior warning before they invested their time and effort.

                                                              Finally, calling Elm “unstable” is simply baseless and just as unhelpful as the misleading marketing you allege. I get that you’re upset by how things turned out, but can’t we all have a discussion without exaggerated rhetoric?

                                                              I don’t think it’s an exaggeration to call a language with breaking changes between releases unstable. To be completely honest, I can’t think of a better word to use in this case. Fluctuating? In flux? Under development? Subject to change? All of those fit and are basically synonymous to “unstable”. None of them are highlighted anywhere the language markets itself, nor by its proponents. I’m not making a judgement on the quality of the language when I say this. I’m making a judgement on how likely it is to be a good choice in a production environment, which brings me to…

                                                              Exactly my point: there is no such definition. All those technologies I mentioned were widely used at the time. I put them into production too, and it was a good choice despite the limitations.

                                                              They were not good choices, because, by your own admission, you were unable to meet your requirements by using them. Hence, they were not production-ready. Had you been able to meet your requirements and then been forced to make changes to keep up with them, then that would also mean they were not production-ready. From this we have a pretty good definition: production-readiness is inversely proportional to the likelihood that you will “have a bad time” after putting the thing into production. The more that likelihood approaches 0, the more production-ready a thing is. Being forced to spend time to keep up with changes to the language and its ecosystem is “having a bad time” in my book.

                                                              I understand that our line of work essentially entails us constantly fighting entropy and that, as things progress, it becomes harder and harder for them to maintain backwards compatibility, but that doesn’t mean that nothing means anything anymore or that we can’t reason about the likelihood that something is going to bite us in the butt later on. From a business perspective, the more likely something is to change after you use it, the larger risk it poses. The more risks you take on, the more likely you are to fail.

                                                              1. 1

                                                                I think your definition is totally unworkable. You’re claiming that technologies used in thousands upon thousands of projects were not production ready. Good luck with finding anything production ready then!

                                                                1. 5

                                                                  I’ve been working with Clojure for almost a decade now, and I’ve never had to rewrite a line of my code in production when upgrading to newer versions because Cognitect takes backwards compatibility seriously. I worked with Java for about a decade before that, and it’s exact same story. There are plenty of languages that provide a stable foundation that’s not going to keep changing from under you.

                                                                  1. 3

                                                                    I am stating that being able to put something in production is different from said thing being production ready. You claim that there is no such thing as “production ready” because you can deploy anything which is a reduction to absurdity of the situation. Putting something into production and being successful with it does not necessarily make it production ready. It’s how repeatable that success is that does.

                                                                    It doesn’t look like we’re going to get anywhere past this point so I’m going to leave it at that. Thank you for engaging and discussing this with me!

                                                                    1. 1

                                                                      Thank you as well. As I said in another comment, this is the first time I tried having an extended discussion in the comments in here, and it hasn’t been very useful. Somehow we all end up talking past each other. It’s unfortunate. In a weird way, maybe it’s because we can’t interrupt each other mid-sentence and go “Hang on, but what about?…”. I don’t know.

                                                                    2. 3

                                                                      This doesn’t respond to bogdan’s definition in good faith.

                                                                      production-readiness is inversely proportional to the likelihood that you will “have a bad time” after putting the thing into production. The more that likelihood approaches 0, the more production-ready a thing is.

                                                                      In response to your criticisms, bogdan proposed a scale of production-readiness. This means that there is no binary distinction between “production-ready” and not “production-ready”. Elm is lower on this scale than most advocates imply, and the article in question provides supporting evidence for Elm being fairly low on this scale.

                                                                      1. 1

                                                                        What kind of discussion do you expect to have when the first thing you say to me is that I’m responding in bad faith? Way to go, my friend.

                                                                        1. 3

                                                                          Frankly, I don’t really want to have a discussion with you. I’m calling you out because you were responding in bad faith. You didn’t address any of his actual points, and you dismissed his argument condescendingly. The one point you did address is one that wasn’t made, and wasn’t even consistent with bogdan’s stance.

                                                                          1. 1

                                                                            In my experience, the crusader for truth and justice is one of the worst types of participants in a forum.

                                                                            We may not have agreed, but bogdan departed from the discussion without histrionics, and we thanked each other.

                                                                            But you still feel you have to defend his honour? Or are you trying to prove that I defiled the Truth? A little disproportionate, don’t you think?

                                                                            (Also: don’t assign tone to three-sentence comments.)

                                                              2. 3

                                                                The question isn’t even close to meaningless… Classifying something as “production ready” means that it is either stable enough to rely on, or is easily swapped out in the event of breakage or deprecation. The article does a good enough job of covering aspects of elm that preclude it from satisfying those conditions, and it rightly warns people who may have been swept up by the hype around elm.

                                                                Elm has poor interop, and is (intentionally) a distinct ecosystem from JS. This means that if Elm removes features you use, you’re screwed. So, for a technology like Elm (which is a replacement for JS rather than an enhancement) to be “production ready”, it has to have a very high degree of stability, or at least long-term support for deprecated features. Elm clearly doesn’t have this, which is fine, but early adopters should be warned of the risks and drawbacks in great detail.

                                                                1. 0

                                                                  What is “production ready”, exactly?

                                                                  Let’s keep it really simple: to me, ‘production-ready’ is when the project version gets bumped to 1.0+. This is a pretty established norm in the software industry and usually a pretty good rule of thumb to judge by. In fact, Elm packages enforce semantic versioning, so if you extrapolate that to Elm itself, you inevitably come to the conclusion that it hasn’t reached production readiness yet.

                                                                2. 3

                                                                  The term “production ready” is itself not at all clear. Some Elm projects are doing just fine in production and have been for years now. Some others flounder or fail. Like many things, it’s a good fit for some devs and some projects, and not for some others – sometimes for reasons that have little to do with the language or its ecosystem per se. In my (quite enjoyable!) experience with Elm, both official and unofficial marketing/docs/advocates have been pretty clear on that; but developers who can’t or won’t perceive nuance and make their own assessments for their own needs are likely to be frustrated, and not just with Elm.

                                                                  I agree that there’s valuable information in this article. I just wish it was a bit less FUDdy and had more technical detail.

                                                                3. 9

                                                                  I think there’s an angle to Elm’s marketing that justifies these kinds of responses: Those “author’s expectations” are very much encouraged by the way the Elm team presents their language.

                                                                  Which criticisms do you find unfair, which are the good points?

                                                                  1. 5

                                                                    I think there’s an angle to Elm’s marketing that justifies these kinds of responses

                                                                    I’m sympathetic to both Elm and the author here. I understand Elm’s marketing stance because they ask devs to give up freely mixing pure/impure code everywhere in their codebase on top of a new language and ecosystem. (In general, OSS’s perceived need for marketing is pretty out of hand at this point and a bit antithetical to what attracts me to it in the first place). OTOH it shouldn’t be possible to cause a runtime error in the way the author described, so that’s a problem. I’d have wanted to see more technical details on how that occurred, because it sounded like something that type safety should have protected him from.

                                                                    Fair criticisms:

                                                                    • Centralized ecosystem (though this is by design right now as I understand)
                                                                    • Centralized package repo
                                                                    • Official docs out of date and incomplete

                                                                    Unfair criticisms:

                                                                    • PRs being open after 2 years: one example alone is not compelling
                                                                    • Tutorials being out of date: unfortunate, but the “Cambrian explosion” meme from JS-land was an implicit acknowledgement that bitrot was okay as long as it was fueled by megacorps shiny new OSS libs, so this point is incongruous to me (even if he agrees with me on this)
                                                                    • “Less-popular thing isn’t popular, therefore it’s not as good”: I understand this but also get triggered by this; if you want safe, established platforms that have a big ecosystem then a pre-1.0 language is probably not the place to be investing time

                                                                    The conclusion gets a little too emotional for my taste.

                                                                    1. 2

                                                                      Thanks for the detailed reply; the criticism of the article seems valid.

                                                                      (As a minor point, the “PRs being open” criticism didn’t strike me as unsubstantiated because I’ve had enough similar experiences myself, but I can see how the article doesn’t argue that well. Certainly I’ve felt that it would be more honest/helpful for elm to not accept github issues/prs, or put a heavy disclaimer there that they’re unlikely to react promptly, and usually prefer to fix things their own way eventually.)

                                                                  2. 6

                                                                    A lot of the things listed in the article are the result of choices that were explicitly made to make contributions harder, and not in a merely incidental way.

                                                                    This isn’t “the language is young” (well, except for the debug point); a lot of this is “the language’s values go against things useful for people deploying to production”.

                                                                    1. 2

                                                                      I don’t know; other than the point about the inability to write native modules and the longstanding open PRs, all of the rest of the issues seem very much symptomatic of a young language.

                                                                      The native module point sounds very concerning, but I don’t think I understand enough about elm or the ecosystem to know how concerning it is.

                                                                      1. 4

                                                                        I’ve been vaguely following along with Elm, and the thing that makes me err on the side of agreeing with this article is that the native module thing used to not be the case! It was removed! There was a semi-elegant way to handle interactions with existing code, and it was removed.

                                                                        There are “reasons”, but as someone who relies on a couple of ugly hacks to keep a hybrid frontend + backend stack running nicely, I believe having those kinds of tricks available is essential for bringing a technology into existing code bases. So seeing it get removed is a bit of a red flag for me.

                                                                        Elm still has a lot of cool stuff, of course

                                                                        1. 2

                                                                          I never relied on native modules, so I didn’t really miss them. But we now have ports, which I think is a much more principled (and interesting) solution. I felt that they worked pretty well for my own JS interop needs.

                                                                          Stepping back a bit, if you require the ability to do ugly hacks, Elm is probably not the right tool for the job. There are plenty of other options out there! I don’t expect Elm to be the best choice for every web front-end, but I do appreciate its thoughtful and coherent design. I’m happy to trade backward compatibility for that.

                                                                        2. 2

                                                                          If you spend any amount of time in the Elm community you will find that contributions to the core projects are implicitly and explicitly discouraged in lots of different ways. Even criticisms of the core language and paradigms or core team decisions are heavily moderated on the official forums and subreddit.

                                                                          Also how are we using the term “young”? In terms of calendar years and attention Elm is roughly on par with a language like Elixir. It’s probably younger in terms of developer time invested, but again this is a direct result of turning away eager contributors.

                                                                          I think it’s fine for Elm to be a small project not intended for general production usage, but Evan and the core team have continually failed to communicate that intent.

                                                                    1. 6

                                                                      I’ve known about the dangers of this for a long time, but how exactly do you exploit this? Where do I read my coworkers’ private keys on our bastion?

                                                                      1. 18

                                                                        I don’t think you can steal keys using this method (and the man page agrees).

                                                                        That said, agent forwarding creates a unix domain socket on the bastion at /tmp/ssh-$(RANDOM_STRING)/agent.$(RANDOM_INT) so you can just:

                                                                        $ find /tmp -type d -name "ssh*"

                                                                        find the socket within it:

                                                                        $ ls /tmp/ssh-zRgUMt1ARg

                                                                        piggyback on the connection:

                                                                        $ env SSH_AUTH_SOCK=/tmp/ssh-zRgUMt1ARg/agent.12461 ssh jordi@example.internal

                                                                        This shouldn’t have to be stated, but I feel the need to cover my ass: only do this on machines you own.

                                                                        1. 3

                                                                          $(RANDOM_STRING) comes from mkdtemp(3), but $(RANDOM_INT) is in fact the SSH agent’s parent PID.


                                                                          1. 1

                                                                            I would be surprised if it were that simple. Doesn’t the agent put permissions like 0400 on the socket, or some similar protection? OFC root could still use it, but that would at least cover your ass on shared remote boxes.

                                                                            edit: answering my own question, from ssh-agent(1):

                                                                            A UNIX-domain socket is created and the name of this socket is stored in the SSH_AUTH_SOCK environment variable. The socket is made accessible only to the current user. This method is easily abused by root or another instance of the same user.

                                                                            1. 2

                                                                              The premise is that the jump box is compromised.

                                                                        1. 12

                                                                          I was speaking with a client this week who was struggling to test the membership of an IP in a large list of IP ranges within a sub-millisecond deadline. The root cause, I felt, is that most developers only ever reach for two data structures, lists and hashes, and when those fail them they change the tooling to something like SQL, Spark, … or maybe decide “PHP is slow, I need C”.

                                                                          It made me recall a library I wrote and use with a few clients that solves just this problem. I thought I would share it, not so much for “shiny shiny” as to remind everyone that even in a slow language, if you pick and use your data structures correctly (e.g. gb_trees[1]), it rarely matters what programming language you use.

                                                                          I would be really interested to hear stories where swapping out for a more appropriate data structure helped save the day for others.

                                                                          [1] though I really wanted a (radix) trie, but you work with what you have baked into the core language offering
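
                                                                          For the curious, the trie mentioned in the footnote can be sketched in a few lines. This is a toy binary trie over IPv4 prefixes in Python (a sketch for illustration, not the library being shared; the example networks are hypothetical):

```python
import ipaddress

class PrefixTrie:
    """Toy binary trie: walk an address bit by bit, matching stored CIDR prefixes."""

    def __init__(self):
        self.root = {}

    def add(self, cidr: str) -> None:
        net = ipaddress.ip_network(cidr)
        # Only the first prefixlen bits of the network address matter.
        bits = format(int(net.network_address), "032b")[: net.prefixlen]
        node = self.root
        for b in bits:
            node = node.setdefault(b, {})
        node["end"] = True  # marks a stored prefix

    def contains(self, ip: str) -> bool:
        bits = format(int(ipaddress.ip_address(ip)), "032b")
        node = self.root
        for b in bits:
            if node.get("end"):
                return True  # a shorter stored prefix already covers this address
            node = node.get(b)
            if node is None:
                return False
        return node.get("end", False)
```

Lookups touch at most 32 nodes regardless of how many prefixes are stored, which is what makes the structure attractive for this problem.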

                                                                          1. 3

                                                                            I was speaking with a client this week who was struggling to test the membership of an IP in a large list of IP ranges within a sub-millisecond deadline.

                                                                            I would be really interested to hear stories where swapping out for a more appropriate data structure helped save the day for others

                                                                            I had very good results using a simple bisection method. In my case, I used the bisect module in Python.

                                                                            I used a fairly big list of IPv4 ranges: all the public ranges assigned by the five RIRs. I wrote a blog post about it.

                                                                            Hope it’ll be useful!
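
                                                                            A minimal sketch of that bisect approach, assuming sorted, non-overlapping ranges (the example networks here are hypothetical, not the RIR data from the post):

```python
import bisect
import ipaddress

# Sorted, non-overlapping ranges stored as (start, end) integer pairs.
nets = ["10.0.0.0/8", "192.168.0.0/16"]
ranges = sorted(
    (int(n.network_address), int(n.broadcast_address))
    for n in map(ipaddress.ip_network, nets)
)
starts = [lo for lo, _ in ranges]

def in_ranges(ip: str) -> bool:
    """Find the last range starting at or before ip, then bounds-check it."""
    addr = int(ipaddress.ip_address(ip))
    i = bisect.bisect_right(starts, addr) - 1
    return i >= 0 and ranges[i][0] <= addr <= ranges[i][1]
```

Each lookup is O(log n), which comfortably fits a sub-millisecond budget even for very large range lists.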

                                                                            1. 1

                                                                              q/k has a bin operator which implements binary search.

                                                                              q)a:asc update b:floor a+2 xexp 32-b from flip `a`b!"II"$flip "/"vs'1_ read0 hsym `$"fullbogons-ipv4.txt"
                                                                              q){x within value a a.a bin x} "I"$""
                                                                              q){x within value a a.a bin x} "I"$""

                                                                              To see why this works, note my table a is pre-sorted, so a.a bin x will find the index of the row which has a value greater-or-equal to the target. Because application and indexing are the same, I can then get the row as a pair with value and do a quick check to make sure the input is within that range.

                                                                              How fast is it?

                                                                              q)\t:1000 {x within value a a.a bin x} each a.a

                                                                              That’s 5 milliseconds for all of the input IP addresses.

                                                                              q)\t:1000 {x within value a a.a bin x} each 1000?0i

                                                                              That’s 1.5 microseconds per lookup! Wow!

                                                                              I tried to do the same thing with Erlang, but building a vector in erlang is tricky – there are no standard library calls for it, so I went for tuples and rolling bin myself:

                                                                              bin(A, Key) -> bin(A, Key, 1, tuple_size(A)).

                                                                              bin(_, _, X, Y) when X > Y -> Y;
                                                                              bin(A, K, X, Y) ->
                                                                                  I = (Y + X) div 2,
                                                                                  V = element(I, A),
                                                                                  if
                                                                                      K > V -> bin(A, K, I + 1, Y);
                                                                                      K < V -> bin(A, K, X, I - 1);
                                                                                      true -> I
                                                                                  end.

                                                                              This can probably be improved, but timing it:

                                                                              23> timer:tc(fun () -> lists:map(fun (X) -> a:check(X,D) end, tuple_to_list(hd(D))) end).

                                                                              Wow! Am I reading that right?

                                                                              26> timer:tc(fun () -> a:check(rand:uniform(4294967296), D) end).                        

                                                                              Checking with some random values:

                                                                              31> M = lists:map(fun (_) -> rand:uniform(4294967296) end, lists:seq(1,1000)).
                                                                              32> timer:tc(fun () -> lists:map(fun(I) -> a:check(I, D) end, M) end).               

                                                                              Looks like we’re only 2x-ish slower than kdb+! Not bad for a “slow” language :)

                                                                              1. 1

                                                                                Can you share the file you used to test the performance of ranges?

                                                                                I’d like to try it against my own IP address library (for Racket).

                                                                                1. 3

                                                                                  I have updated the performance section to use publicly available larger lists.

                                                                                  1. 2

                                                                                    Thanks! Out of curiosity, I tried a binary search. That gets me these results

                                                                                    min: 0.057861328125 max: 2.095947265625 avg: 0.063962158203125

                                                                                    on my i7-6820HQ. It’s pretty fast once the JIT has a chance to warm up! I might give the trie-based approach a try (pun not intended) if I have time next week. It would be nice to see how much of a difference it makes.

                                                                                    1. 3

                                                                                      Compare it to a list iteration implementation and then pour yourself a drink to celebrate how substantially better you pushed the envelope in two hours :)

                                                                              1. 4

                                                                                Similar to JordiGH I know about the dangers of agent forwarding, but how exactly am I supposed to do the work I need to do on remote machines without it? Namely, I need to be able to e.g. git clone private repos using the creds on my laptop when I’m on a remote host. What other solution is there?

                                                                                1. 12

                                                                                  Use more different keys. The key to run git clone should not be the key to become root on the database server.
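
                                                                                  A sketch of how that separation might look in ~/.ssh/config (host names and key paths here are hypothetical):

```
# Low-privilege key used only for fetching repos
Host github.com
    IdentityFile ~/.ssh/id_ed25519_git
    IdentitiesOnly yes

# Separate key for sensitive infrastructure
Host db.example.internal
    IdentityFile ~/.ssh/id_ed25519_infra
    IdentitiesOnly yes
```

With IdentitiesOnly set, a hijacked agent on a bastion can only authenticate with whichever key that host was ever going to see.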

                                                                                  1. 4

                                                                                    That’s a use case I haven’t hit, personally. You could invoke ssh-add with the -c flag so that ssh asks for confirmation before the key is used. On macOS, you’ll probably need something like theseal/ssh-askpass for the prompting to work.

                                                                                    1. 2

                                                                                      Or use gpg-agent with a yubikey. With tap confirmation enabled on the yubikey :)

                                                                                      1. 4

                                                                                        This is actually only of limited use. The attacker waits for you to attempt a connection, then uses that opportunity to have your agent auth their connection. Then, oops, network interruption and you disconnect.

                                                                                        1. 2

                                                                                          I always found the GPG setup too much of a faff. Using PKCS#11 with OpenSSH directly is much easier. I wrote up a howto (mostly for my own sanity): https://github.com/jamesog/yubikey-ssh

                                                                                      2. 3

                                                                                        You can’t do that without SSH agent forwarding, but there are other options and remediations:

                                                                                        You can clone locally and scp it to the remote host, or you can turn on SSH agent forwarding only for a very brief period of time while the clone is happening, then log out immediately after it completes and reconnect without forwarding. You should not do forwarding by default.

                                                                                        Edit: upon further reflection, turning on agent forwarding for a short amount of time isn’t that much better than having it on regularly since the compromised host can be set up to watch for incoming connections and automatically hijack your agent as soon as you connect so I don’t recommend that.

                                                                                        1. 2

                                                                                          In some cases it can be worthwhile to generate key material on the remote host itself, then authorise that material to pull the repo you need. GitHub can do this with, I think, “deploy keys” – and if you have gitosis, or whatever, you can obviously do whatever you like.

                                                                                          1. 2

                                                                                            is ProxyJump enough?

                                                                                            1. 6

                                                                                              No. That lets you login to a third machine via a second machine, but not from the second machine.

                                                                                              You are on apple. You log in to banana, a public host. You want to access carrot, a private host reachable from banana. ProxyJump lets apple log in to and access carrot. But a shell on banana can’t connect to carrot. Forwarding lets banana access carrot.

                                                                                              If, for example, you want to transfer a file via sftp from carrot to banana, your choices are agent forwarding and a direct transfer, or a proxy and downloading to apple and uploading back to banana.
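
                                                                                              For the apple-to-carrot case that ProxyJump does cover, the config on apple is a one-liner (host names follow the example above and are hypothetical):

```
# In ~/.ssh/config on apple
Host carrot
    HostName carrot.example.internal
    ProxyJump banana
```

The connection is tunneled through banana end to end, so your keys never need to be visible to banana at all.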

                                                                                              1. 1

                                                                                                You could also use SSH TCP forwarding on the apple-banana connection to make it so that you could use apple as a ProxyJump host from banana. So your connection would go banana->apple->banana->carrot. But that’s pretty convoluted, and is basically just an optimization of the “download to apple and upload back to banana” plan.

                                                                                            2. 1

                                                                                              Separate keys for git authentication & login?

                                                                                            1. 3

                                                                                              This is a very cool idea and I think the idea of incorporating security tokens improves on the suggestions given by Queinnec in section 7 of his paper.

                                                                                              To combat this and to prevent servers’ memory usage from growing indefinitely, the web-server library has a robust implementation of an LRU continuation manager that expires continuations quicker the more memory pressure there is.

                                                                                              Does this run the risk of expiring live sessions?

                                                                                              1. 3

                                                                                                Yes, it does, though you do get control over what happens when someone tries to execute a continuation that has been expired via the instance-expiration-handler. What I do when that happens is I redirect the user back to the calling page (stripping out the continuation parameter from the current URL) and add a flash message letting them know they should try again: https://github.com/Bogdanp/racket-webapp-template/blob/master/app-name-here/components/page/common.rkt#L21-L29 .

                                                                                                Thank you for linking that paper, btw. I hadn’t seen it before.

                                                                                              1. 11

                                                                                                This is a very old idea and it has some merit (aside from being extremely cool, technically speaking), but if you look at it critically there are several issues:

                                                                                                • Deploying a new version of the code may make old sessions stale and completely invalid because they refer to code that doesn’t exist anymore.
                                                                                                • Serializing a continuation requires an indeterminate amount of memory (and it takes care to avoid accidentally serializing too much state).
                                                                                                • In general, I’ve learned the hard way (via lots of bad PHP programs) that saving too much state in the session totally breaks the back button, multi-tabbed browsing and so on.
                                                                                                • If you work around the previous issue by not serializing it in the session but in the browser itself, it will become easier to inspect and manipulate the state.
                                                                                                1. 2

                                                                                                  Though it doesn’t come across very clearly in the article, I argue for a mixed approach. Keep the session separate from continuations (via a session id cookie and some form of backend session storage), use stateless, RESTful URLs for the parts of the application that ought to be shareable (the vast majority of pages in an application), and only sprinkle continuations in places where you have to manipulate objects local to a page. I think this approach has its merits.

                                                                                                1. 5

                                                                                                  The Radicle IPFS daemon is the process that runs an IPFS daemon, bootstrapping to our own separate Radicle IPFS network.

                                                                                                  I know next to nothing about IPFS so this is a genuine question: why is the Radicle network separate from the main IPFS network? Does this mean that if I want to use both IPFS and Radicle then I need two daemons?

                                                                                                  1. 4

                                                                                                    For your second question: yes - they run independently of each other.

                                                                                                    They are separated because we don’t replicate general files; you can’t really make use of them outside the context of Radicle. A better way to think of it is that IPFS is the technology we picked for replication, not the IPFS network one joins when starting the default IPFS daemon.

                                                                                                    1. 3

                                                                                                      The main reason is that currently IPNS name resolutions take a much longer time in a larger network. It seems like we’ll soon have fixes for that, so in theory we could move back to using the main network.

                                                                                                      1. 2

                                                                                                        But can’t you use IPFS proper while just ignoring IPNS for now? I thought the latter was just an extra layer over the former?

                                                                                                        1. 3

                                                                                                          We use both - IPFS for storing the data, and IPNS to point to the head of the list/log of expressions to a machine. Thus, when you update a machine (e.g., add a new issue), you’re adding an item to IPFS that has the new data plus a pointer to the old data, and updating the IPNS to point to that. When someone reads, they resolve the IPNS, and then read the corresponding data.

                                                                                                          In theory we could implement such a pointer system ourselves, or use something other than IPNS. But it seems like improvements to IPNS are happening faster than we could come up with an alternative, so we settled on IPNS.
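
                                                                                                          The pointer scheme described above can be sketched without IPFS at all. Here is a toy content-addressed log in Python, where a mutable name (standing in for IPNS) points at the hash of the newest entry and each entry links to the previous one (all names are hypothetical):

```python
import hashlib
import json

store = {}  # content-addressed store: hash -> entry (stands in for IPFS)
names = {}  # mutable name -> head hash (stands in for IPNS)

def put(entry: dict) -> str:
    """Store an entry under the hash of its serialized form."""
    data = json.dumps(entry, sort_keys=True)
    h = hashlib.sha256(data.encode()).hexdigest()
    store[h] = entry
    return h

def append(name: str, item) -> None:
    """Add an item that points at the previous head, then move the name."""
    head = names.get(name)
    names[name] = put({"item": item, "prev": head})

def read(name: str) -> list:
    """Resolve the name, then walk the prev pointers back to the start."""
    items, h = [], names.get(name)
    while h is not None:
        entry = store[h]
        items.append(entry["item"])
        h = entry["prev"]
    return list(reversed(items))
```

Updating a machine is an `append`; reading it is a name resolution followed by a walk of the log, which mirrors the IPNS-resolve-then-read flow described above.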

                                                                                                          1. 1

                                                                                                            Ah, so you’re using a separate IPFS network because the normal one is too big for IPNS now, and yours is still at an early stage, thus small enough that IPNS is fast enough? If yes, then please do move to normal IPFS as soon as you find that the improvements make it feasible. And, for the sake of future generations, please don’t use an “It’s too late” argument to avoid such a move. It won’t be.

                                                                                                    1. 8

                                                                                                      Interesting, I’d also like to be less reliant on Google, but apparently my use case is near 100% different.

                                                                                                      • Google App Engine - nope
                                                                                                      • Google Analytics - nope
                                                                                                      • Google Fonts - nope
                                                                                                      • YouTube - nope
                                                                                                      • GMail as main account - nope (I have one, but only check it every few weeks)

                                                                                                      But here’s the kicker, I am an Android user and I don’t see myself switching to Apple in the near future - I usually pay ~300 EUR for a decent new mobile phone, and I refuse to pay 600-1000.

                                                                                                      So while I’d usually say I’m kind of not reliant on Google, giving up its services on mobile is too much to ask of me. But apart from photos (which get backed up to Google Photos) and location history (which I really, really love), my online life is decoupled from Google quite a bit.

                                                                                                      I’ve also tried using DuckDuckGo at times, but the search results always make me cry. Maybe I’m holding it wrong.

                                                                                                      TLDR: Depending on how exactly you use a service and how good the alternatives are, it’s either easy or hard to change. Bah :)

                                                                                                      1. 13

                                                                                                        I’ve also tried using DuckDuckGo at times, but the search results always make me cry. Maybe I’m holding it wrong.

                                                                                                        My experience is that Google search got a lot worse, so DDG doesn’t seem so bad nowadays. Until very recently I used !g with DDG a lot, but no longer.

                                                                                                        1. 11

                                                                                                          Unasked-for pro tip: g! is the same thing as !g, which is super nice for those of us using DDG on mobile with the “insert space after punctuation marks” setting turned on (typing !g becomes “! g”, and then you have to find that space and backspace over it…)

                                                                                                          1. 2

                                                                                                            Also note, you can put the g! or !g anywhere in the search; it doesn’t need to be at the front. It’s nice to just add g! to have a quick look at what Google finds if you’re not finding anything of note.

                                                                                                            And to the thread parent’s point: personally, I find Google search less and less useful every year for finding technical things. I rarely have to use the bang operator for Google in DDG lately.

                                                                                                          2. 2


                                                                                                            It’s almost as if Google has been actively lowering the bar for competitors the last few years.

                                                                                                          3. 10

                                                                                                            I’ve gone from all-in Google fanboi to using almost none of their services, including on Android. I’m running LineageOS without Play Services. Things are surprisingly good, except:

                                                                                                            • OSM is OK but has nothing on Google Maps, and other location-based apps (e.g. Uber) seem not to work
                                                                                                            • Push notifications don’t work for a lot of apps
                                                                                                            • I still use GCal, and setting it up (with DavDroid) is possible but frustrating, especially when you have a lot of calendars (I have about 8)

                                                                                                            Despite that I’m happy with the move. It’s a bit like the early Android days - not exactly polished but usable and a bit of a challenge.

                                                                                                            1. 5

                                                                                                              Have you installed MicroG? That ought to solve your problems with other location-based apps by letting them use another location provider (like Mozilla’s). I think it also includes a push notification shim of some kind.

                                                                                                              DavDroid works really, really well with NextCloud calendars, and while setup is not super-easy, it doesn’t get any harder with lots of calendars than with one.

                                                                                                              1. 3

                                                                                                                I’ve been meaning to try MicroG, but haven’t yet. Thanks for the reminder.

                                                                                                                I’m somewhat tied to GCal until I migrate my wife off it (and G Suite in general).

                                                                                                            2. 7

                                                                                                              and I refuse to pay 600-1000

                                                                                                              Used to be in the same boat, but I see it differently now. Either you buy a $300 phone from a Chinese company with pretty flagship-like specs, but it mines all your data, or you pay $600-1000 and get a phone with flagship-like specs that doesn’t mine your data. That’s Apple’s biggest selling point to me: you get privacy, but it doesn’t make everything unpolished or near-unusable.

                                                                                                              You’re paying the 600-1000 price tag, just not upfront.

                                                                                                              1. 1

                                                                                                                I love LG from S. Korea… (I have both the G5 and the V20). I changed the battery recently ($9 for a new one from eBay) and added an SD card. Still getting updates. I love the video/sound/photo capabilities (maybe not as ‘flashy’ as Samsung, but the core quality is very good).

                                                                                                                Having a very thin phone that slides into the pocket of my tight-fitting jeans is not something I value (not that age group, or body type :-) )

                                                                                                                1. 1

                                                                                                                  I might be misunderstanding your hint about the Chinese company. I use a Nexus 5X now (sure, it might be manufactured in China), but in this case I see Google as the only company involved, no other one. I’m also not sure about my two HTC phones before that. They were all under 300 EUR.

                                                                                                                  1. 1

                                                                                                                    It was just the worst-case scenario. A Nexus 5X is a bit better, but using Google services on an Android phone is still pretty bad from a privacy perspective. Maybe you should reconsider and ask yourself if the extra money isn’t worth the privacy. Like I said, you’re just paying with your data right now instead of with your money. That’s Google’s MO.

                                                                                                                    1. 4

                                                                                                                      The extra cost for the Apple route is not just monetary; you’re also paying by giving up the ability to run the software of your choice on it.

                                                                                                                      1. 1

                                                                                                                        I got an Android to develop software on.

                                                                                                                        Have not done so a single time.

                                                                                                                2. 5

                                                                                                                  DuckDuckGo finds what I’m looking for most of the time, but I agree that Google is far better. If you care about privacy, consider installing the Tor Browser. Google searches via Tor are more private, and this also lets you do Google searches outside of your personal Google filter bubble. Sometimes Google’s filter bubble prevents me from getting the results I need. The differences in search results can sometimes be astonishing and revelatory, so I recommend trying it.

                                                                                                                  1. 3

                                                                                                                    Just use StartPage if you want Google’s quality but (more) private queries.

                                                                                                                    1. 2

                                                                                                                      Yeah, I absolutely agree, but Google has learned my interests (programming and games) well enough that it gives me good results. With DDG it’s usually clicking through 3 pages, then going to Google anyway - every time I try it :(

                                                                                                                    2. 7

                                                                                                                      Google App Engine - nope


                                                                                                                      Google Analytics - nope


                                                                                                                      Google Fonts - nope

                                                                                                                      Just, uh, host the fonts?

                                                                                                                      YouTube - nope

                                                                                                                      For consuming video, yep; for uploading you might try Vimeo.

                                                                                                                      GMail as main account - nope (I have one, but only check it every few weeks)

                                                                                                                      I have just been slowly moving each and every service off Gmail to my own domain.

                                                                                                                      1. 11

                                                                                                                        Not sure wink was asking for alternatives to the five services they don’t use…

                                                                                                                      2. 4

                                                                                                                        I’ve found DDG frustrating at times, too. I’m slowly learning how to better leverage it, though; e.g. I’ve automatically started changing my search queries in ways that help it figure out what I mean more easily: quoting certain words or including additional words that I wouldn’t normally include when doing a Google search.

                                                                                                                        1. 2

                                                                                                                          The quotes, added words, and using - to remove results improves any search engine. Including Google.

                                                                                                                          1. 6

                                                                                                                            I realize that. My point was that although the results from Google were better w/o those additions, you can work around DDG’s inadequacies by doing the things I listed.

                                                                                                                            1. 2

                                                                                                                              With Google you can’t be sure anymore. Sometimes it works, sometimes it doesn’t, it seems.

                                                                                                                          2. 3

                                                                                                                            I’ve also tried using DuckDuckGo at times, but the search results always make me cry. Maybe I’m holding it wrong.

                                                                                                                            DDG results range from better than Google to absolute garbage, but on average I find them workable.

                                                                                                                            However, I realized I’d started to depend on all kinds of Google search features that aren’t available in DDG. For example, when I type the name of an establishment into Google, I automatically get the opening hours and a real-time graph of waiting times.

                                                                                                                            I wouldn’t have thought that trivial things like that would be important for me, but apparently they are, and I switched back to Google search.

                                                                                                                            1. 3

                                                                                                                              I’ve found mainly that DDG is ok in general but really terrible at certain specific kinds of searches. If you’re searching for an error message, specific line of code, bug report for a program, that sort of thing… Google is far better. For most other things DDG does just fine, and the ! shortcuts are real handy.

                                                                                                                              1. 2

                                                                                                                                It’s interesting that you love Location History so much; what do you like about it?

                                                                                                                                1. 3

                                                                                                                                  Well, for example, looking up trip routes from a vacation. “When did I work from home two weeks ago? Monday or Tuesday?” “How long was this bike ride?”

                                                                                                                                  Nothing critical, just stuff I like to know and look up.

                                                                                                                                  1. 3

                                                                                                                                    Interesting, I also love the idea of having that data, but I hate the idea of other people also having my data, especially of a theoretically sensitive nature.

                                                                                                                                2. 1

                                                                                                                                  You can get a used, but perfect condition, iPhone 8 for less than 400 GBP, so maybe ~300 EUR is possible. The resale value is much higher, so I think the long term cost is comparable.

                                                                                                                                  1. 3

                                                                                                                                    But you’re still locked into Apple’s walled garden that way.

                                                                                                                                    1. 2

                                                                                                                                      I wish people would not parrot out thought terminating cliches like this. Locked into what exactly?

                                                                                                                                      1. 6

                                                                                                                                        I’m not the GP but I wouldn’t buy an iPhone because the only way to install apps on the phone is from the App Store, which makes something like F-Droid impossible. AFAIK the only way to install ‘non-official’ apps is to buy a Mac, sign up for a developer account, and then compile and self-sign apps.

                                                                                                                                        1. 2

                                                                                                                                          So you can install whatever you want; it just costs $300 (a Mac) + $99/yr (in true Apple fashion).

                                                                                                                                          Alternatively you could write a script that refreshes your certificates every night and do it for free!! (+the cost of a used Mac)

                                                                                                                                          Alternatively alternatively you could buy one of those sketchy “signing services” that force you to install a VPN so that they don’t get caught and use that.

                                                                                                                                          (Observation: Closed source software on non apple platforms is often worse than their open source counterparts. Practically no open source software exists for apple platforms, but the software quality is generally higher with some notable exceptions. I don’t know where I’m going with this, so it’s just an observation.)

                                                                                                                                        2. 4

                                                                                                                                          Locked into what exactly?

                                                                                                                                          Locked into not being able to run your own OS, and not being able to run your own programs without paying more (I understand you can install your own apps for 30 or 90 days, but you still have to pay for a developer license, IIRC).

                                                                                                                                        3. 1

                                                                                                                                          I use Dropbox for file storage, Gmail and Office365 for mail, OmniFocus for TODO, OneNote and Keep for notes, WhatsApp, Slack and Teams for chats, CrashPlan for backup, Google, Amazon and Apple for books, feed.ly for news, 1Password for secure info. where am I locked in?

                                                                                                                                          1. 4

                                                                                                                                            Yes, you are quite free to choose from any of the flowers Apple permits in the garden — but you are not free to choose something Apple does not permit.

                                                                                                                                      2. 1

                                                                                                                                        You don’t need to use Google services with Android. I’ve flashed LineageOS on my phone with F-Droid software and I’m pretty happy with it. Edit: I hadn’t noticed this already got mentioned.

                                                                                                                                      1. 2

                                                                                                                                        Does anyone here use Racket? What’s the typical distribution method for a Racket program? I mean, does it compile to a single binary, or is it like trying to distribute Python?

                                                                                                                                        1. 5

                                                                                                                                          I do!

                                                                                                                                          You can create stand-alone executables using raco exe from the base distribution. This packs the runtime, your code, and any necessary libraries into a single executable. In that sense it’s somewhat like Python, which has similar tools (like py2exe and py2app), but, having used both, Racket’s is a significantly nicer experience. One reason (aside from being well supported, since it’s essentially part of the language) is that Racket lets you define runtime paths to files your application uses, in such a way that the packaging system can keep track of those files and distribute them for you. Anyone who’s had to deal with setup.py and MANIFEST files and the various ways that exist to package data files with an application can probably imagine how nice it is to be able to say

                                                                                                                                          ;; Assuming the file lives at ../resources/icon.png relative to the current module.
                                                                                                                                          (define-runtime-path an-icon
                                                                                                                                            (build-path 'up "resources" "icon.png"))

                                                                                                                                          and let the system figure out that it needs to include that file for you.
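
                                                                                                                                          If it helps, the command-line side of this (the module and output names below are made up for illustration; raco exe and raco distribute are the actual commands) looks roughly like:

                                                                                                                                          ```shell
                                                                                                                                          # Sketch: package app.rkt (hypothetical module) as a stand-alone executable.
                                                                                                                                          # `raco exe` embeds the runtime plus any define-runtime-path files; `raco
                                                                                                                                          # distribute` copies in shared libraries so the result runs on other machines.
                                                                                                                                          if command -v raco >/dev/null 2>&1; then
                                                                                                                                            raco exe -o my-app app.rkt
                                                                                                                                            raco distribute my-app-dist my-app
                                                                                                                                            built=yes
                                                                                                                                          else
                                                                                                                                            built=skipped   # Racket not installed here
                                                                                                                                          fi
                                                                                                                                          ```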

                                                                                                                                          1. 1

                                                                                                                                            Nice. Thank you for the info. Maybe I’ll check racket out then.

                                                                                                                                        1. 5

                                                                                                                                          I have the same question as peter. When I fork a project I just add the original project as a remote and then occasionally rebase on top of it when I have to.

                                                                                                                                          Personally, I organize my stuff in two folders:

                                                                                                                                          • ~/work, where anything for-profit goes, and
                                                                                                                                          • ~/sandbox, where everything else goes.

                                                                                                                                          I don’t use autojump, but I came up with something similar (in purpose) to make jumping into particular projects easier. It’s a fish script I call workon, and all it does is find the first occurrence of a project name within either ~/sandbox or ~/work and jump into it. If the project happens to be a Python project and there’s a virtualenv for it, it activates that as well. I’ve been using this in some shape or form for years (even before I started using fish) and it’s probably my most used command (excluding vcs commands).
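
                                                                                                                                          The original is a fish script, but a rough shell sketch of the same idea (the .venv location is an assumption about project layout) might look like:

                                                                                                                                          ```shell
                                                                                                                                          # Hypothetical re-creation of the workon command described above: cd into the
                                                                                                                                          # first directory matching the given name under ~/sandbox or ~/work, then
                                                                                                                                          # activate a Python virtualenv if the project has one.
                                                                                                                                          workon() {
                                                                                                                                            dir=$(find "$HOME/sandbox" "$HOME/work" -maxdepth 3 -type d -name "$1" 2>/dev/null | head -n 1)
                                                                                                                                            if [ -z "$dir" ]; then
                                                                                                                                              echo "no project named $1" >&2
                                                                                                                                              return 1
                                                                                                                                            fi
                                                                                                                                            cd "$dir" || return 1
                                                                                                                                            if [ -f .venv/bin/activate ]; then
                                                                                                                                              . .venv/bin/activate
                                                                                                                                            fi
                                                                                                                                          }
                                                                                                                                          ```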

                                                                                                                                          1. 2

                                                                                                                                            While I don’t use fish and we obviously work on different projects, your “workon” script is a great idea! I’m totally going to write my own version of that!

                                                                                                                                            1. 1

                                                                                                                                              If you don’t use fish (and maybe if you do) then you can set CDPATH to get this automatically. :)

                                                                                                                                              More info if you search for CDPATH here: https://www.gnu.org/software/bash/manual/html_node/Bourne-Shell-Builtins.html
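
                                                                                                                                              A minimal bash illustration (directory names invented for the example):

                                                                                                                                              ```shell
                                                                                                                                              # With CDPATH set to your project roots, `cd name` works from anywhere, and
                                                                                                                                              # the shell prints the resolved path when it finds the target via CDPATH.
                                                                                                                                              mkdir -p /tmp/cdpath-demo/work /tmp/cdpath-demo/sandbox/myproject
                                                                                                                                              export CDPATH=/tmp/cdpath-demo/work:/tmp/cdpath-demo/sandbox
                                                                                                                                              cd myproject   # prints /tmp/cdpath-demo/sandbox/myproject
                                                                                                                                              ```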