1. 4

    to research programming languages and tools

    This series is democratizing access to tools one-step removed

    I’m sorry but right now this is a list of stuff curated by one? person who thinks they might be doing things better than others. But this is phrased in such a vague way that I don’t even know where to start.

    Maybe just rephrasing it to sound less canonical might help, but right now my only question is: where are the roughly 200 tools and languages I’ve already used in the past?

    1. 2

      I agree that is one weird sentence, but I’m confused by your criticisms. This is a catalog of reviews for programming environments the author has deemed interesting / representative for their category. The name is a riff on The Whole Earth Catalog, a magazine/catalog that included reviews of things, and which, at least for a certain group of people, was seminal.

      1. 1

        Maybe it’s my perception of the word “catalog”. It says “expansive” or “big”, maybe even “close to complete”. My bad probably, but YMMV.

    1. 6

      That React Hooks can be confusing and frustrating is something even their creators share. However, once they click (no pun intended), I find them useful and not impossible to reason about. The author does a good job outlining some common head-scratchers, but the examples chosen don’t build a compelling case against their design.

      Take this for instance:

      What if the data we have has pagination links and we want to re-run the effect when the user clicks a link?

      The solution is already embedded in the definition of the effect — whenever the state (currentPage) changes, fetch its corresponding data. Therefore, on click … change the currentPage. (Or am I missing something glaringly obvious?)
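
      A hypothetical plain-JS analogy, just to make the dependency flow concrete (no React here; createPager and goToPage are made-up names, not the author’s code):

      ```javascript
      // The “effect” closes over currentPage, so a pagination click only
      // needs to change currentPage and re-run it.
      function createPager(fetchPage) {
        let currentPage = 1;
        const runEffect = () => fetchPage(currentPage); // ~ useEffect(..., [currentPage])
        runEffect(); // initial run
        return {
          goToPage(page) {
            currentPage = page; // the click handler only updates the state,
            runEffect();        // and the dependency change re-runs the effect
          },
        };
      }
      ```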

      1. 2

        I think you’re right. I think it’s just that most developers (unless they have a lot of experience with reactive frameworks, such as RxJS) are not used to thinking of their data model as a dependency flow—or at least, not on a variable-by-variable level. For what it’s worth, I understand the author’s complaints; I just think Hooks are worth it, and my read is that, in the end, the author does, too.

      1. 12

        Okay, I know it’s not what you’re asking, but let me be that guy in the thread:

        You might be better off writing your own styles :-)

        What is the nature of the content you’re putting on your website, and what are your goals with it? (Unfortunately, the website doesn’t load for me at all, so I can’t get an idea). If it’s just your “corner of the internet” where you post your thoughts, you can get away with very little CSS and things will look good. Web design in 4 minutes is a wonderful explanation of how it only takes a few lines of CSS to improve an article page.

        If you’re out of ideas on how a personal site should / could look, feel free to use the Lobste.rs front page — most of the links are from people’s personal websites — start with a design you like and make it your own. I find that “imagining layouts” and “implementing layouts in CSS” are two separate modes of working for me; the former gives me impostor syndrome vibes, while I’m 100% at ease executing the latter, and I kind of intuit that “I’m horrible at CSS” is more of an “I can’t imagine how it should look” kind of thing? (Please correct me if I’m reading this wrong)

        Back to CSS — and the newest whatever.css thing. Discourse around CSS features goes like this: here’s a one-liner to do thing. But ah, the thing is not supported in browser X, so you also need to do Y, et cetera — and next thing you know you’re knee-deep into arcana, which makes people uneasy about starting out. But right now we’re living in a world where 90%+ of people are on evergreen browsers, and they tend to adhere to the specs pretty well. And, in the case of Flexbox and Grid, the consequences of lack of support from browsers are minimal — the occasional visitor might see the layout collapsed to a vertical stack, but the content’s still there.

        By taking the specs at face value, and flat-out ignoring the edge cases, you cut out a huge swathe of complexity. You only really need flex / grid when you want to change from the normal top-to-bottom flow. Here’s a good guide for Flex, and I found the Every Layout book a good investment for a clear-eyed intro to the good parts of CSS. Jeremy Keith has recently posted a set of resources for starting out in web development; there are some great things in there as well.


        This is not to say paying someone to do it for you is not a legitimate approach. I’m just building the case for DIY, which can be very rewarding — knowing how / why the thing works, and changing things with confidence.

        1. 4

          You might be better off writing your own styles :-)

          I would have been hesitant to recommend this in the past, but in our new world of CSS Grid this might actually be pretty reasonable advice. I would say don’t even bother with learning Flex (yet), just start with Grid. Most personal sites (most sites!) don’t need a layout more complicated than what you can achieve with a few lines of grid.
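
          To give a concrete sense of “a few lines”, here’s a minimal centered-column sketch (the 65ch measure and 1rem gutters are arbitrary picks, not a recommendation):

          ```css
          body {
            display: grid;
            grid-template-columns: minmax(1rem, 1fr) minmax(auto, 65ch) minmax(1rem, 1fr);
          }
          body > * {
            grid-column: 2; /* everything flows into the middle column */
          }
          ```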

          I think there are two things that need to be separated though:

          1. The design/layout and palette
          2. The actual CSS to implement it

          I’m a lousy designer, so I always just steal a layout from a site I like, and either use a palette generator or steal the palette too. But I would roll my own CSS these days, which makes it easy to tweak things a bit if I want to.

          Having said that, I do a lot of FE work so I may be overstating how easy it is to do the CSS work for someone who is really unfamiliar.

        1. 3

          But do we really need to store conversion from every node to every node? What if we just stored the conversion rates from one node to every other node?

          I have to admit I chuckled at this part, where the ultimate solution is to use the metric system :-)

          1. 2

            Yeah, I’ve gotta say I was puzzled at the amount of unnecessary complexity this person is asking for. Graph search? Linear algebra?

            Define one of your units as canonical (eg ‘meter = 1.0’). Define all others in terms of previously-defined units (eg centimeter = 0.1 meter; hand = 13 centimeters), and convert the definitions to be in terms of your canonical unit as you go. Then ‘1 light year in hands’ becomes 1 * (meters in a light year) / (hands in a meter).
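
            A sketch of that scheme in JS, using the illustrative factors above (inMeters, define and convert are made-up names, and the factors are the ones from my example, not exact definitions):

            ```javascript
            // Resolve every unit to the canonical one (meter) at definition time.
            const inMeters = { meter: 1.0 };
            function define(unit, factor, base) {
              inMeters[unit] = factor * inMeters[base];
            }
            define('centimeter', 0.01, 'meter');
            define('hand', 13, 'centimeter'); // per the example above
            define('lightyear', 9.4607e15, 'meter');
            // ‘1 light year in hands’ = 1 * (meters in a light year) / (meters in a hand)
            function convert(value, from, to) {
              return (value * inMeters[from]) / inMeters[to];
            }
            ```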

            1. 1

              Nonetheless I feel it’s an accurate representation of working through a (hitherto unknown) problem, and a lesson on the importance of framing it.

              1. 1

                Two problems with this approach:

                1. This assumes you have control over the input list. What if it’s not ordered like that?
                2. One of his requirements was to minimize the number of floating point multiplications. Consider what happens when I take meter as the base unit, then sequentially define light year, furlong, inch, cubit, parsec, foot, centimeter. We’d probably not end up with a centimeter being 0.01 meters :) If you allow redundant definitions, then you’re back to the BFS.

                One fun problem he doesn’t bring up: how do you deal with prefixes? A kiloinch is a perfectly valid measurement, as is a centisecond. But that’s a string parsing problem.

                1. 1

                  One of his requirements was to minimize the number of floating point multiplications.

                  The stated approach has just two multiplications: to and from the reference unit. If you can’t get from a unit to the reference unit, you can never get there.

                  I don’t know why else this solution would be wrong. Maybe because it doesn’t require data structures more complicated than a hash table per type of unit?

                  1. 2

                    The floating point operations at initialisation are the problem. They’re a source of cumulative error if your input unit definitions are pathological. I’d argue that a perfectly good solution to this is to require all definitions to be in terms of the canonical unit, though.

                    1. 1

                      It’s not wrong, but the input is not ordered so you still need to build a graph, then reduce it to a hash.
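
                      A hedged sketch of that step (buildTable is a made-up name): BFS over the definition graph from the canonical unit, then throw the graph away and keep only the hash.

                      ```javascript
                      // defs: [{ unit, factor, base }] meaning 1 unit = factor × base,
                      // supplied in arbitrary order.
                      function buildTable(defs, canonical) {
                        const edges = {};
                        for (const { unit, factor, base } of defs) {
                          (edges[base] = edges[base] || []).push([unit, factor]);
                          (edges[unit] = edges[unit] || []).push([base, 1 / factor]);
                        }
                        // BFS from the canonical unit; table[u] = value of 1 u in canonical units.
                        const table = { [canonical]: 1.0 };
                        const queue = [canonical];
                        while (queue.length) {
                          const from = queue.shift();
                          for (const [to, factor] of edges[from] || []) {
                            if (!(to in table)) {
                              table[to] = factor * table[from];
                              queue.push(to);
                            }
                          }
                        }
                        return table;
                      }
                      ```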

              1. 2

                Copy and paste the following command into your Terminal. Before you pipe any script into your computer, always view the source code and make sure you understand what it does.

                This is terrible advice. There are ways to detect a script being piped from wget/curl, so the source you read in a browser is not necessarily the source that will be piped into sh.

                Tell people to download the script, and then read it before executing the already downloaded file.

                For example: https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/

                1. 1

                  Oh, interesting! Is there any generally-accepted alternative to this method? (That is, other than piping to a file, reviewing it, and executing it afterwards)

                  1. 3

                    Offering signed installer files for Mac and Windows, and supporting existing signed package infrastructures for Linux would be a helluva lot nicer.

                1. 4

                  This just has to be some sort of stunt / cultural commentary. It cannot be someone’s earnest intention. Can it?

                  1. 5

                    I don’t know specifically, but there’s prior art for Opera and the Amazon Fire browser using cloud assist.

                    1. 1

                      I spoke to one of the creators the other day and my first impression was that this was really dumb, but I knew that it must have been my fault because this particular person builds amazing things.

                      And then, after talking for a few minutes, I started to get it. Here’s how I understand how this could be the future (Note: this is my interpretation, not theirs):

                      Data science is one domain that has stubbornly refused to move to the browser. JupyterLab is moving the UI to the browser, but the computation is still happening back on servers. I am building a data science studio that runs entirely in the browser (computation and all). But to do many types of data science in the medical field this is impractical, as you need hundreds of GB of RAM, at least, and the computers you run Chrome on typically don’t have those specs. In comes Mighty. Suddenly I can run Chrome on a machine with 1TB of RAM and run complex data science as a web app. As a data science tool developer, this makes my life much easier. As a user, this makes my life much easier.

                      So I think this is a pretty cool idea…

                    1. 3

                      I like the thoughtful documentation, congrats on that! (As a minor nitpick, the link to the API Reference was a bit hard to find on the first skim).

                      I generally use Node for my scripting needs because JS is the language I’m most comfortable with, so I can quickly whip up whatever I need, and I think this project is onto something. I like that it provides straightforward helpers for the cli, fs, and http parts — I usually make do with commander.js, fs-extra + fast-glob and got, respectively, but they still need some boilerplate.

                      I’m curious about the rationale behind some aspects in the project, which to me seem to make the scripts less portable:

                      1. injecting the API in the script’s global namespace vs. explicit require() / a single lemon scope
                      2. the npm loading mechanism; did you consider allowing the normal require syntax and monkey-patching the internals?

                      P.S. Thanks for putting your ideas out there! :-)

                      1. 3

                        Oh, my, thank you for the kind words. I really needed these.

                        I’m glad you like the idea :)

                        You’re definitely right about the link to the reference being buried; the care that went into the docs is a big part of what makes Tasklemon good, so I should make them way more prominent.

                        I wonder what you mean exactly by “portable”? TL was really thought up as a complete thing, so I might have had some blind spots in the design when it comes to integrating with other things. Despite this, since it’s built on top of Node.js, you can still do Node things: require() will work as expected, for instance, so you can still make use of neighboring JS files.

                        Of note, initially TL was never intended to be used as an API, as an imported package in a bigger project, and was designed accordingly; I’m starting to think it might be possible, after all.

                        1. 1

                          I wonder what you mean exactly by “portable”?

                          I guess I meant code that looks like normal JS as much as possible, and could potentially work without TL. I might have projected my own desire to have a way to quickly play around with npm modules in a way similar to Runkit (or that Mac app whose name I can’t remember right now) and so usage as an API somehow made more sense to me.

                          1. 1

                            Well, that might come eventually! For now though, it’s only the full-on approach.

                        2. 2

                          And some feedback from a toy script:

                          cli.accept({ color: '-c' });
                          let { formatter, lab, rgb } = npm.culori;
                          formatter('hex')(lab(rgb(cli.args.color)));
                          
                          1. Is there a way to accept/read positional arguments from the cli?
                          2. It would be cool to auto-generate a --help out of argument descriptions
                          3. The package resolution seems to fail (lemon@0.2)
                          1. 1
                            1. Indeed: you can specify a position (say, '#0' for the first positional argument) instead of, or in addition to, a named argument.
                              You could very well write something like this: cli.accept({ color: '#0 -c --color' }), which would mean the script accepts the color argument in these three different ways.
                            2. I agree! I’d like to work on that at some point. If you look at the cli.accept() documentation, I actually mention you can add descriptive text for arguments, even though it’s not used yet.
                            3. The script seems to work for me, although it doesn’t output anything; I can see it correctly parses the color if I add a cli.tell call. What arguments are you passing to the script when running it?
                            1. 1

                              Weird… I’ve added an issue on your GitHub repo with some details.

                              1. 1

                                Perfect, thanks! I’ll try to increase the logging of the installation process; right now it outputs messages, but doesn’t persist them.

                        1. 2

                          Unsolicited suggestion :-), but for image optimization there are options that don’t require you to upload the files to some server.

                          1. 1

                            Thanks! I will test these tools and update the article.

                          1. 2

                            Buy a domain name and then do everything that people are saying here. Fastmail allows custom domains. For each new registration, create a new email name. For example: for Hilton.com, when registering, provide hilton@yourdomain.com. In Fastmail you can catch all the names. That will allow you to understand who sold your email later.

                            1. 6

                              I think that it is valuable to own your own domain, and I endorse this advice, but I do have a caveat to add to it.

                              Generally speaking, using a custom domain for your email adds attack surface to any account linked to your email address. See this story about the Twitter handle @N. You very likely want to have a non-custom-domain email which you use for account recovery purposes.

                              For me, the biggest thing about moving away from Google products is not actually that it loses the features or the network effects (Google is prone to shutting down everything I like, anyway), but that it loses the resilience against social engineering. Nothing like that is ever perfect, but at least you should try to minimize how many separate companies’ customer service processes are part of your attack surface, and pick them carefully.

                              I’m using my Google Employee hat here not to give these words greater weight, but to disclose my bias.

                              1. 4

                                In addition, ICANN recommends that registrants use an external address for administrative domain contacts: https://www.icann.org/en/system/files/files/sac-044-en.pdf

                                1. 1

                                  This is exactly the kind of thing I was looking for, thank you for posting!

                                2. 1

                                  Thank you for pointing this out. Unless I’m missing some piece in the chain, that would be: 1. The registrar 2. The DNS provider and 3. the email service, right? (I’m under the impression that with FastMail, they can provide both 2 and 3, but I’d have to check)

                                  1. 1

                                    That agrees with my analysis, yes.

                                3. 3

                                  That will allow to understand who sold your email later.

                                  More importantly, it’ll make your identities across different websites unlinkable. Or at least harder to link.

                                  1. 1

                                    Would + tags (like you+hilton@example.net) help with that or are spammers getting smart and stripping them out?
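
                                    (For context: the tag is trivial to remove for anyone who cares to. A sketch of what a list-cleaning script might do — stripPlusTag is a made-up name:)

                                    ```javascript
                                    // Drop everything from the first “+” in the local part.
                                    function stripPlusTag(address) {
                                      const [local, domain] = address.split('@');
                                      return local.split('+')[0] + '@' + domain;
                                    }
                                    ```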

                                    1. 2

                                      Some services [mistakenly] consider a plus character not valid for use in an email address.

                                      1. 1

                                        I’ve seen spammers that know about catch-all domains and are stripping even the unique part. Oh well.

                                    1. 1

                                      If anyone is interested in contributing, or just bookmarking for reference, I’ve started taking some notes here: https://github.com/danburzo/au-revoir-gmail

                                      1. 1

                                        People are suggesting keeping your Gmail account “alive” for a while, but in the case of that account being bound to something that you own, like your Git commits somewhere, it means that you’ll have to keep that account safe, forever.

                                        I have two questions:

                                        • Is there a way of changing your commit history to reflect a new email address, one that does not belong to a centralized corporation but to you, in the form of a domain you own?
                                        • Is it possible to use another identification mechanism, a signature that is not bound to an email address? An email address requires infrastructure to work, and that infrastructure could eventually belong to someone else, like the domain your email is part of.
                                        1. 2

                                          Is there a way of changing your commit history to reflect a new email address, one that does not belong to a centralized corporation but to you, in the form of a domain you own?

                                          Yes in theory, however that changes all the hashes so no in practice.

                                          1. 2

                                            in my experience, just start committing with the new address and update any mailmap and authors files. can’t do anything about published history…

                                            1. 1

                                              You could use git filter-branch to rewrite the entire git repository to replace your old e-mail address with your new one, but that will change the hash of every commit so it will be a terrible experience for anyone who has an existing clone of your repository. I think it’s not worth it.

                                              1. 1

                                                Is it possible to use another identification mechanism, a signature that is not bound to an email address? An email address requires infrastructure to work, and that infrastructure could eventually belong to someone else, like the domain your email is part of.

                                                In GitHub, you can choose to keep your email private and use something to the tune of username@users.noreply.github.com. See the details here

                                              1. 10

                                                I switched off of Google products about 6 months ago.

                                                What I did was I bought a Fastmail subscription, went through all my online accounts (I use a password manager so this was relatively easy) and either deleted the ones I didn’t need or switched them to the new e-mail address. Next, I made the @gmail address forward and then delete all mail to my new address. Finally, I deleted all my mail using a filter. I had been using mbsync for a while prior to this so all of my historical e-mail was already synced to my machine (and backed up).

                                                Re. GitHub, for the same reasons you mentioned, I turned my @gmail address into a secondary e-mail address so that my commit history would be preserved.

                                                I still get the occasional newsletter on the old address, but that’s about it. Other than having had to take a few hours to update all my online accounts back when I decided to make the switch, I haven’t been inconvenienced by the switch at all.

                                                1. 4

                                                  It’s really exciting to see people migrating away from Gmail, but the frequency with which these posts seem to co-occur with Fastmail is somewhat disappointing. Before Gmail we had Hotmail and Yahoo Mail, and after Gmail, perhaps it would be nicer to avoid more centralization.

                                                  One of the many problems with Gmail is their position of privilege with respect to everyone’s communication. There is a high chance that if you send anyone e-mail, Google will know about it. Swapping Google out for Fastmail doesn’t solve that.

                                                  Not offering any solution, just a comment :) It’s damned hard to self-host a reputable mail server in recent times, and although I host one myself, it’s not really a general solution.

                                                  1. 5

                                                    Swapping Google out for Fastmail solves having Google know everything about my email. I’m not nearly as concerned about Fastmail abusing their access to my email, because I’m their customer rather than their product. And with my own domain, I can move to one of their competitors seamlessly if ever that were to change. I have no interest in running my own email server; there are far more interesting frustrations for my spare time.

                                                    1. 2

                                                      I can agree that a feasible way to avoid centralization would be nicer. However, when people talk about FastMail / ProtonMail, they still mean using their own domain name but paying a monthly fee (to a company supposedly more aligned with the customer’s interests) for being spared from having to set up their own infrastructure that: (A) keeps spam away and (B) makes sure your own communication doesn’t end up in people’s Junk folder.

                                                      To this end, I think it’s a big leap towards future-proofing your online presence, and not necessarily something comparable to moving from Yahoo! to Google.

                                                      1. 3

                                                        for being spared from having to set up their own infrastructure that: (A) keeps spam away and (B) makes sure your own communication doesn’t end up in people’s Junk folder.

                                                        I’m by no means against Fastmail or Proton, and I don’t think everyone should set up their own server if they don’t want to, but it’s a bit more nuanced.

                                                        Spamassassin with default settings is very effective at detecting obvious spam. Beyond obvious spam it gets more interesting. Basically, if you never see any spam, it means that either you haven’t told anyone your address, or the filter has false positives.

                                                        This is where the “makes sure your own communication doesn’t end up in people’s Junk folder” part comes into play. Sure, you will run into issues if you set up your server incorrectly (e.g. open relay) or aren’t using best current practices that are meant to help other servers see if email that uses your domain for From: is legitimate and report suspicious activity to the domain owner (SPF, DKIM, DMARC). A correctly configured server SHOULD reject messages that are not legitimate according to the sender’s domain’s stated policy.
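
                                                        For illustration, the DNS side of those acronyms is just a few records — these values are made up for example.com, and real selector names and policies vary by provider:

                                                        ```
                                                        example.com.                       TXT  "v=spf1 mx -all"
                                                        selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key>"
                                                        _dmarc.example.com.                TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
                                                        ```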

                                                        Otherwise, a correctly configured server SHOULD accept messages that a human would never consider spam. The problem is that certain servers are doing it all the time, and are not always sending DMARC reports back.

                                                        And GMail is the single biggest offender there. If I have a false positive problem with someone, it’s almost invariably GMail, with few if any exceptions.

                                                        Whether it’s a cockup or a conspiracy is debatable, but the point remains.

                                                      2. 2

                                                        We’re not going to kill GMail. Let’s be realistic, here. Hotmail is still alive and healthy, after all.

                                                        Anyone who switches to Fastmail or ProtonMail helps establish one more player in addition to GMail, not instead of it. That, of course, can only be a good thing.

                                                        1. 1

                                                          Just to bring in one alternative service (since you are right, most people here seem to advise Fastmail or ProtonMail): I found mailbox.org one day. No experience with them though.

                                                        2. 1

                                                          I still get the occasional newsletter on the old address, but that’s about it.

                                                          Once you’ve moved most things over, consider adding a filter on your new account to move the forwarded mails to a separate folder. That way it becomes immediately clear what fell through the cracks.

                                                          1. 1

                                                            Sorry, I wasn’t clear. E-mails sent to the old address are forwarded to the new one and then deleted from the GMail account. When that happens I just unsubscribe, delete the e-mail and move on. It really does tend to only be newsletters.

                                                            I suppose one caveat to my approach and the reason this worked so well for me is that I had already been using my non-gmail address for a few years prior to making the change so everyone I need to interact with already knows to contact me using the right address.

                                                        1. 4

                                                          Project Fluent from Mozilla has some ideas on its wiki… but wow, I hadn’t seen the gettext manual before, it’s huge!

                                                          1. 2

                                                            While I usually shy away from WordPress for my own projects, I have built a few for friends and WP is still a strong contender in the CMS space. This starter theme is a distillation of the approaches I’ve taken with these projects.

                                                            I hope it can give you a 10-to-100-hour head start on a new project.

                                                            P.S. I would appreciate your feedback, since I’ve only looked into PHP and the WP API to the extent required by the things I needed to build.

                                                            1. 1

                                                              I’ve only read the abstract, but it seems like an equally important question would be the time-to-fix for post-release bugs with code coverage vs without. My own experience is that bugs are easier to fix when the associated code has test coverage.

                                                              1. 1

                                                                Intuitively: along with bug fixing, refactoring and behavior-altering rewrites are also facilitated by good test coverage. To my mind, preventing bugs is not necessarily the main purpose. Nonetheless, the results in this study are very interesting.

                                                                (See also the effect of JavaScript static typing on preventing bugs.)

                                                              1. 3

                                                                Skeuomorphism for palettes! Cool idea.

                                                                1. 3

                                                                  Ha! I never thought about it this way, but it makes total sense. Also a bit of Robin Sloan’s flip-flop.

                                                                1. 2

                                                                  if this process was repeated a second time, would it be largely idempotent?

                                                                  1. 3

                                                                    In this instance I believe it would. The Relative colorimetric rendering intent only alters colors that are out of gamut for CMYK, snapping them to the closest printable color, all the while keeping printable colors largely unchanged. Conversely, the Perceptual rendering intent will “shrink” the sRGB gamut to fit in the CMYK gamut, so all colors are shifted towards less saturated versions, and repeating the process would cause them to converge to… grey?

                                                                    (Based on my limited understanding of the process, and without having tested my assumptions)

                                                                    1. 1

                                                                      In theory it should be idempotent. In practice, the roundtrip is going to induce some errors. Doing it repeatedly is a way to gain more insight into the structure of those errors. Could be a fun project to explore!

                                                                      1. 1

                                                                        In general exploring repeated function application for fixed points is nearly always a good idea.
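
                                                                          As a toy illustration (nothing like a real color pipeline — compress is a made-up stand-in for the Perceptual intent), a contraction toward neutral gray converges to its fixed point under iteration:

                                                                          ```javascript
                                                                          // Each pass pulls every channel 20% of the way toward gray (128).
                                                                          function compress(rgb, gray = 128, k = 0.8) {
                                                                            return rgb.map((c) => gray + k * (c - gray));
                                                                          }
                                                                          let color = [255, 0, 64];
                                                                          for (let i = 0; i < 100; i++) color = compress(color);
                                                                          // color is now within ~1e-7 of [128, 128, 128], the fixed point
                                                                          ```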

                                                                    2. 1

                                                                      I would think it would monotonically decrease in saturation and contrast but who knows.

                                                                    1. 1

                                                                      I was seeing lots of snark on Twitter the past week re: microfrontends and couldn’t tell what they’re on about. Is this the article that started it all?

                                                                      1. 1

                                                                        I would be interested in subscribing to your bookmarks for a topic like this, since you know the subject :-)

                                                                          Not saying that your ask is not valid; just that, if there is the option, I prefer to leverage a list curated by an expert in the subject.

                                                                          Another question: would things like Discourse, Disqus, Matrix.org, or ActivityPub client/servers qualify as CMS?

                                                                        [1] https://howlingpixel.com/i-en/Content_management_system

                                                                        1. 2

                                                                            Another question: would things like Discourse, Disqus, Matrix.org, or ActivityPub client/servers qualify as CMS?

                                                                          I’d say for Matrix.org and ActivityPub the distributed tag works pretty well already.

                                                                        1. 3

                                                                          Don’t miss the About page, it has me nodding throughout.

                                                                          1. 1

                                                                              That’s a very good About page. I like the diagram at the bottom: the way it explains what data + artefacts you need, which modules & intermediate artefacts it uses/creates, and what you get out at the end.

                                                                            1. 2

                                                                                Indeed, I was writing on Twitter a few days ago about how I couldn’t seem to find an example of a diagram to illustrate the typical pipeline in a static site generator.