1. 65

    What should people use instead?

    Real secure messaging software. The standard and best answer here is Signal,

    Oh please. They aren’t even close to sharing the same level of functionality. If I want to use Signal, I have to commit to depending on essentially one person (moxie) who is hostile towards anyone who wants to fork his project, and who completely controls the server/infrastructure. And I’d have to severely limit the options I have for interfacing with this service (1 android app, 1 ios app, 1 electron [lol!] desktop app). None of those are problems/restrictions with email.

    I don’t know what the federated, encrypted ‘new’ email thing looks like, but it’s definitely not Signal. Signal is more a replacement for XMPP, if perhaps you wanted to restrict your freedom, give away a phone number, and rely on moxie.

    1. 12

      I think Matrix is getting closer to being a technically plausible email and IM replacement.

      The clients don’t do anything like html mail, but I don’t think I’d miss that much, and the message format doesn’t forbid it either.

      1. 27

        If you can’t send patches to mailing lists with them then they’re not alternatives to email. Email isn’t just IM-with-lag.

        1. 5

          Email can be exported as text and re-parsed by Perl or a different email client.

          Until that functionality is available, I won’t consider something a replacement for email.

          1. 4

            In all fairness: cmcaine says “Matrix is getting closer”.

            1. 3

              Matrix is a federated messaging platform, like XMPP or email. You could definitely support email-style use of the system it’s just that the current clients don’t support that. The protocol itself would be fine for email, mailing lists and git-send-email.

              The protocol also gives you the benefits of good end-to-end encryption support without faff, which is exactly what general email use and PGP don’t give you.

              1. 2

                Adding patch workflow to Matrix is no different to adding it to XMPP or any other messaging solution. Yes, it is possible but why?

                I can understand that you like Matrix, but it’s not clear how Matrix is getting closer to an e-mail replacement with just one almost-stable server implementation and a spec that’s not an IETF standard. I’d say Matrix is more similar to an “open Signal” than to e-mail.

                1. 2

                  “Getting closer” is a statement about the future, yet all of your counterarguments are about the current state.

                  1. 2

                    If I knew the future I’d counter that, but given that the future is unknown, I can only extrapolate from the present and the past. Otherwise Matrix may be “getting closer” to anything.

                    Do you have any signs that Matrix is getting e-mail patch workflow?

              2. 2

                Mailing lists could move to federated chatrooms. They moved from Usenet before, and in some communities they moved to forums, before the now-common use of Slack.

                I’m not saying it would be the best solution, but it’s our most likely trajectory.

                1. 6

                  Mailing lists existed in parallel with Usenet.

                  1. 5

                    Both still exist :)

                    I do think, actually, that converting most public mailing lists to newsgroups would have a few benefits:

                    1. It’d make their nature explicit.
                    2. It’d let us stop derailing designs for end-to-end encryption with concerns that really apply only to public mailing lists.
                    3. I could go back to reading them using tin.

                    Snark aside, I do think the newsgroup model is a better fit for most asynchronous group messaging than email is, and dramatically better than chat apps, whether you read that to mean Slack or any of the myriad superior alternatives to Slack. But that ship sailed a long time ago.

                    1. 3

                      Mailing lists are more useful than Usenet. If nothing else, you have access control to the list.

                      1. 2

                        Correct, and the younger generation unfamiliar with Usenet gravitated towards mailing lists. The cycle repeats.

                      2. 4

                        Mailing lists don’t use Slack, and Slack isn’t a mailing list. Slack is an instant messaging service. It has almost nothing in common with mailing lists.

                        It’s really important to drive this point home. People critical of email have a lot of good points. Anyone who has set up a mail server in the last few years knows what a pain it is. But you will not succeed in replacing something you don’t understand.

                        1. 4

                          The world has moved on from asynchronous communication for organizing around free software projects. It sucks, I know.

                          1. 3

                            Yeah. Not everyone, though.

                            Personally I think that GitHub’s culture is incredibly toxic. Only recently have there been tools added to allow repository owners to control discussions in their own issues and pull requests. Before that, if your issue got deep linked from Reddit you’d get hundreds of drive by comments saying all sorts of horrible and misinformed things.

                            I think we’re starting to see a push back from this GitHub/Slack culture at last back to open, federated protocols like SMTP and plain git. Time will tell. Certainly there’s nothing stopping a project from moving to {git,lists}.sr.ht, mirroring their repo on GitHub, and accepting patches via mailing list. Eventually people will realise that this means a lower volume of contributions but with a much higher signal to noise ratio, which is a trade-off some will be happy to make.

                            1. 2

                              Only recently have there been tools added to allow repository owners to control discussions in their own issues and pull requests. Before that, if your issue got deep linked from Reddit you’d get hundreds of drive by comments saying all sorts of horrible and misinformed things.

                              It’s not like you used to have levers for mailing lists, though, that would stop marc.org from archiving them or stop people from linking those marc.org (or kernel.org) threads. And drive-bys happened from that, too. I don’t think I’m disputing your larger point. Just saying that it’s really not related to the message transfer medium, at least as regards toxicity.

                              1. 3

                                Sure, I totally agree with you! Drive-bys happen on any platform. The difference is that (at least until recently) on GitHub you had basically zero control. Most people aren’t going to sign up to a mailing list to send an email. The barrier to sending an email to a mailing list is higher than the barrier to leaving a comment on GitHub. That has advantages and disadvantages. Drive-by contributions and drive-by toxicity are both lessened. It’s a trade-off I think.

                                1. 3

                                  I guess I wasn’t considering a mailing list subscription as being meaningfully different than registering for a github account. But if you’ve already got a github account, that makes sense as a lower barrier.

                    2. 5

                      Matrix allows sending in the clear, so I suppose this has the “eventually it will leak” property that the OP discussed?

                      (A separate issue: I gave up on Matrix because its e2e functionality was too hard to use with multiple clients)

                      1. 5

                        (A separate issue: I gave up on Matrix because its e2e functionality was too hard to use with multiple clients)

                        and across UA versions. When I still used it, I got bitten when I realized it derived the key from the browser user agent, so when OpenBSD changed how the browser presented itself I was suddenly unable to read old conversations :)

                        1. 2

                          Oh! I didn’t know that!

                    3. 5

                      Functionality is literally irrelevant, because the premise is that we’re talking about secure communications, in cases where the secrecy actually matters.

                      Of course, if security doesn’t matter, then Signal is a limited tool; at that point you can communicate in Slack, a shared Google Doc, or a public Markdown document hosted on Cloudflare.

                      Signal is the state of the art in secure communications, because even though the project is heavily driven by Moxie, you don’t actually need to trust him. The Signal protocol is open and it’s basically the only one on the planet that goes out of its way to minimize server-side information storage and metadata. The phone number requirement is also explicitly a good design choice in this case: as a consequence, Signal does not store your contact graph - that is kept on your phone, in your contact store. The alternative would be that either users can’t find each other (defeating the point of a secure messaging tool) or that Signal would have to store the contact graph of every user - which is a far more invasive step than learning your phone number.

                      1. 9

                        even though the project is heavily driven by Moxie, you don’t actually need to trust him

                        Of course you must trust Moxie. A lot of Signal’s privacy rests on trusting them not to store certain data that they have access to. The protocol allows for the data not to be stored, but it gives no guarantees. Moxie also makes the only client you can use to communicate with his servers, and you can’t build it yourself, at least not without jumping through hoops.

                        The phone number issue is what’s keeping me away from Signal. It’s viral, in that everyone who has Signal will start using Signal to communicate with me, since the app indicates that they can. That makes it difficult to get out of Signal when it becomes too popular. I know many people that cannot get rid of WhatsApp anymore, since they still need it for a small group, but cannot get rid of the larger group because their phone number is their ID, and you’re either on WhatsApp completely or you’re not. Signal is no different.

                        And how can you see that a phone number is able to receive your Signal messages? You have to ask the Signal server somehow, which means that Signal then is able to make the contact graph you’re telling me Signal doesn’t have. They can also add your non-Signal friends to the graph, since you ask about their numbers too. Maybe you’re right and Moxie does indeed not store this information, but you cannot know for sure.

                        What happens when Moxie ends up under a bus, and Signal is bought by Facebook/Google/Microsoft/Apple and they suddenly start storing all this metadata?

                        1. 5

                          Signal is a 501(c)(3) non-profit foundation in the US; Moxie does not control it, nor is he able to sell it. In theory every organization can turn evil, but there is still a big difference between non-profits, which are legally not allowed to do certain things, and corporations, which are legally required to serve their shareholders, mostly by seeking to turn a profit.

                          And how can you see that a phone number is able to receive your Signal messages? You have to ask the Signal server somehow, which means that Signal then is able to make the contact graph you’re telling me Signal doesn’t have.

                          There are two points here that I’d like to make, one broader and one specific. In a general sense, Signal does not implement a feature until they can figure out how to do it securely, leaking as little information as possible. This has been the pattern for basically every feature that Signal has. Phone numbers are no different: the Signal app just sends a cryptographically hashed, truncated version of the phone numbers in your address book to the server, and the server responds with the list of hashes that belong to Signal users. This means that Signal on the server side knows whether any one person is a Signal user, but not their contact graph.
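                          The hashed contact-discovery idea can be sketched in a few lines. This is a hedged illustration only: SHA-256 and the 8-hex-character truncation are assumptions made for the example, not Signal’s actual parameters.

```python
# Hedged sketch of truncated-hash contact discovery. SHA-256 and the
# 8-hex-character truncation are illustrative assumptions, not Signal's
# actual parameters.
import hashlib

def truncated_hash(phone: str) -> str:
    """Hash a phone number and keep only a short prefix."""
    return hashlib.sha256(phone.encode()).hexdigest()[:8]

# The client sends only these short hashes; the server answers with the
# subset belonging to registered users, without ever seeing the numbers.
contacts = ["+15551234567", "+15559876543"]
hashes = [truncated_hash(c) for c in contacts]
print(hashes)
```

                          The truncation is the interesting design choice: the server cannot be sure which number produced a given hash, at the cost of occasional false matches it must tolerate.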

                          1. 3

                            In theory every organization can turn evil

                            Every organization can also be bought by an evil one. Facebook bought WhatsApp, remember?

                            The Signal app just sends a cryptographically hashed, truncated version of phone numbers in your address book

                            These truncated hashes can still be stored server-side and used to build graphs. With enough collected data, a lot of these truncated hashes can be reversed. Now, I don’t think Signal currently stores this data, let alone does data analysis on it. But Facebook probably would, given the chance.
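                            A toy illustration of why that reversal is cheap: phone numbers occupy a tiny input space, so anyone holding the hashes can simply enumerate candidates. (The hash function and truncation length below are assumptions for the example, not Signal’s actual parameters.)

```python
# Brute-forcing a truncated phone-number hash: a 10-digit national number
# space has only 10**10 candidates, well within reach of commodity hardware.
# SHA-256 and the 8-character truncation are illustrative assumptions.
import hashlib

def truncated_hash(phone: str) -> str:
    return hashlib.sha256(phone.encode()).hexdigest()[:8]

# Pretend this hash was collected server-side.
leaked = truncated_hash("+15550000042")

# Enumerate a (deliberately tiny) slice of the number space.
recovered = next(
    (f"+1555000{n:04d}" for n in range(10_000)
     if truncated_hash(f"+1555000{n:04d}") == leaked),
    None,
)
print("recovered:", recovered)
```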

                            1. 6

                              Every organization can also be bought by an evil one. Facebook bought WhatsApp, remember?

                              WhatsApp was a for-profit company; 501(c)(3)s work under quite different conditions. Not saying they can’t be taken over, but this argument doesn’t cut it.

                        2. 3

                          The phone number requirement is also explicitly a good design choice

                          No, it’s an absolutely terrible choice, just like it is a terrible choice for “two-factor authentication”.

                          Oh, but Signal users can always meet in person to re-verify keys, which would prevent any SIM-swap attack from working? No, this (overwhelmingly) doesn’t happen. In an era where lots of people change phones every ~1–2 years, it’s super easy to ignore the warning, because 99% of the time it’s a false positive.

                          The alternative would be that either users can’t find each other (defeating the point of a secure messaging tool)

                          This is a solved problem. I mean, how do you think you got the phone numbers for your contacts in the first place? You probably asked them, and they probably gave it to you. Done.

                        3. -8

                          Careful there… you can’t say bad things about electron in here….

                        1. 1

                          I think a good option is to buy an X220 that has been retrofitted with a better screen. These go for ~£500 on eBay and can decode 1080p YouTube vids.

                          Won’t meet Drew’s standards because it has blobs and maybe they want a 16:10 screen.

                          1. 2

                            Contrasting: Elementary OS is an example of an open source community led by designers

                            1. 15

                              Complaints about interviewing seem rather similar to complaints about open-plan offices. Pretty much everyone agrees it sucks, but it also never seems to change 🤷‍♂️ Indeed, things seem to be getting worse, not better.

                              My biggest objection is that you’re often expected to spend many hours or even days on some task before you even know if you have a 1%, 10%, or 90% chance of getting hired, and then you get rejected with something like “code style didn’t conform to our standards”, which suggests they just didn’t like some minor details like variable naming or whatnot.

                              I usually just ignore companies that treat my time as some sort of infinitely expendable resource, which doesn’t make job searching easier, but does free up a whole lot of time for more fulfilling activities.

                              1. 2

                                Getting rid of open-plan offices seems doable. Yet we do not know how to screen for good engineers reliably, with or without wasting anyone’s time.

                                1. 1

                                  Hire them for an extremely short contract? Bootcamp does that. They pay travel and $1000 to work for them for a week, which doesn’t seem too bad.

                                  1. 5

                                    How would that work for anybody who’s currently employed?

                                    1. 2

                                      The last time I interviewed, I was talking to about 10 companies. If the hiring process involved a short contract, they’d have been off my list.

                                      1. 2

                                        “Hire a candidate for a week” doesn’t strike me as a solution to, “Screen people without wasting anyone’s time.” At the very least, it forces the candidate to spend a week in what amounts to an extended interview. It seems to me like it’d be far more time-consuming for the existing team as well, what with coming up with a steady stream of short projects that require little or no onboarding, working with the candidate for the week instead of whatever else they’d otherwise be working on, evaluating the candidate’s work, and so on.

                                        I’m not arguing that it’s a bad idea in general, just that I suspect it isn’t a good way to reduce wasted time.

                                        1. 2

                                          The responses show that it is just hard to come up with a process that works for everyone. Maybe one needs to offer choices but that has its own complications and reduces comparability.

                                    1. 3

                                      Depending on how complex it has to be you can get pretty far with Racket’s racket/gui. It starts becoming kind of a pain when you need non-standard controls (DrRacket probably already has what you need but you’ll have to search for it), but especially for distributing it’s great, because it’s all in the standard library so you just need to install DrRacket to be able to use it, and you can easily generate a self-contained executable. And it works cross-platform!

                                      1. 2

                                        How is the performance? I have heard Racket is slow, but that may be out of date.

                                        1. 3

                                          It’s probably out of date; Racket has been doing JIT compiling in their VM for a while now. They’re in the middle of porting to the Chez Scheme runtime which can have significant performance improvements: https://docs.racket-lang.org/guide/performance.html#%28part._virtual-machines%29

                                          As far as the API goes, Racket is by far the most pleasant way to code a conventional desktop GUI that I’ve ever seen.

                                          1. 3

                                            Is it compatible with accessibility tools?

                                      1. 1

                                        Here’s most of mine. (Note these are git aliases, not shell aliases; I use git aliases in my git aliases. I also removed the work-related ones >.<) Feel free to ask about any of these; they’re a bit esoteric.

                                        b = branch
                                        ba = branch --all
                                        bcs = log --pretty=%H --first-parent --no-merges
                                        begin = !git init && git commit --allow-empty -m 'Initial empty commit'
                                        # get the branch name
                                        bn = rev-parse --abbrev-ref HEAD
                                        # branch with a new worktree uses the nwt alias later
                                        bwt = !git branch $1 ${2:-HEAD} && git nwt $1 #
                                        # what files are getting changed in a repo, really stupid
                                        churn = !git log --all -M -C --name-only --format='format:' $@ | sort | grep -v '^$' | uniq -c | sort -r | awk 'BEGIN {print \"count\",\"file\"} {print $1, $2}' | grep -Ev '^\s+$'
                                        ci = commit
                                        cia = commit --all
                                        # cherry-mark diff, useful for finding out if a commit is present in between branches
                                        cmd = log --no-merges --left-right --graph --cherry-mark --oneline
                                        co = checkout
                                        cp = log --no-merges --cherry --graph --oneline
                                        # similar to ^ but should report if a cherry pick was done
                                        cpd = log --no-merges --left-right --graph --cherry-pick --oneline
                                        # default remote name, aka origin, probably a better way 
                                        defremote = !git branch -rvv | grep -E 'HEAD' | awk '{print $1}' | sed -e 's|/HEAD||g'
                                        ds = diff --stat
                                        # abandon all hope!
                                        effit = reset --hard
                                        fa = fetch --all
                                        # find a common merge point between a commit and branch
                                        find-merge = !sh -c 'commit=$0 && branch=${1:-HEAD} && (git rev-list $commit..$branch --ancestry-path | cat -n; git rev-list $commit..$branch --first-parent | cat -n) | sort -k2 | uniq -f1 -d | sort -n | tail -1 | cut -f2'
                                        # try to fix HEAD to point to origin/master or whatever remote/branch you want
                                        fixhead = !sh -c 'rem=${0:-origin} && branch=${1:-master} && git symbolic-ref refs/remotes/$rem/HEAD refs/remotes/$rem/${branch}'
                                        fixup = commit --fixup
                                        # force push branch up
                                        fpbr = !git push --set-upstream --force $(git defremote) $(git bn)
                                        # used for worktree aliases, gets the GIT_DIR of the primary place also works in worktrees
                                        gr = !git rev-parse --absolute-git-dir | sed -e 's|/[.]git.*||' #
                                        hist = log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative
                                        # safety first! don't allow yourself to push to origin on this checkout
                                        nopush = remote set-url --push origin no_push
                                        # new work tree, uses the gr alias to get the directory, then create a new work tree with DIR@branch 
                                        nwt = !git worktree add $(git gr)@$1 $1 #
                                        pbr = !git push --set-upstream $(git defremote) $(git bn)
                                        ra = rebase --abort
                                        ri = rebase --interactive --autosquash
                                        s = status --short --branch --untracked-files=no
                                        sc = !git clone --recursive $1
                                        short = rev-parse --short
                                        # use find-merge to show the common merge commit
                                        show-merge = !sh -c 'merge=$(git find-merge $0 $1) && [ -n \"$merge\" ] && git show $merge'
                                        slog = log --oneline --decorate
                                        squash = commit --squash
                                        st = status --short --branch
                                        # submodules are annoying af to keep up to date
                                        sup = !git pull --rebase && git submodule update --init --recursive
                                        tlog = log --graph --color=always --abbrev-commit --date=relative --pretty=oneline
                                        trim = !git reflog expire --expire=now --all && git gc --prune=now
                                        # get the full history of a git --depth checkout/clone
                                        unshallow = pull --unshallow
                                        unstage = reset HEAD
                                        up = !git pull --rebase && git push
                                        wd = diff --color-words
                                        wsd = diff --color-words --ignore-space-at-eol --ignore-space-change --ignore-all-space
                                        wta = worktree add
                                        wtl = worktree list
                                        wtp = worktree prune
                                        # supposed to be worktree root, not sure its better than the other alias
                                        wtr = !git worktree list --porcelain | grep -B2 \"branch refs/heads/$1\" | head -n1 | sed -e 's|worktree ||' #
                                        
                                        1. 1

                                          git effit is fun ;)

                                        1. 17

                                          Spitballing a couple ideas here.

                                          On the topic of the 70-day new-user timer: I think this will just make the spammers much more difficult to notice, as they will create dormant accounts waiting for the timer to expire, maybe posting a low-effort comment here and there. They could even automate it by copying or generating content based on previous comments on lobste.rs. I don’t see a direct solution to this, but it’s worth keeping in mind how their approach will change with this.

                                          Regarding the new-user limitations in general: how about having two different types of invites you can send out? One regular invite, as it is now, and one for people you trust. Sending the trusted invite would mean you personally take responsibility for actions taken by the person you invite (for a reasonable time frame, that is), and their limitations are relaxed somewhat. This means you can still extend invites to people you don’t trust all that much, and they would end up having to demonstrate their trustworthiness, or you can shortcut that mechanism and allow someone you already trust onto the platform.

                                          1. 12

                                            Why would you invite somebody you don’t trust? Two levels of that and we’re back to where we started.

                                            I’m not totally sold on a time-based “newness” metric. Something more informed by usage would be good–are they submitting new content, are they actively commenting, are they helping suggest tags and whatnot. And sure, those are all gameable, but if we can trick the growth droids into performing vital community service why not?

                                            1. 30

                                              I was invited by @flyingfisch because I asked on IRC. He invited me in good faith but doesn’t trust me as a personal friend. If I was banned he might get told “hey be more careful” but that’s about it. There are a couple of people I invited like that. I also invited @rwhaling because he’s someone I trust. If he got banned, I’d feel obligated to apologize to the lobsters community, because I personally “vouched” for him and was wrong.

                                              That’s how I see this. I’d come in as a “standard” newbie and would have all the restrictions at first. @rwhaling would come in as a “vouched-for” user and would have fewer restrictions. But if he got banned, I’d be suspended or put on probation or something.

                                              1. 6

                                                Nice explanation. As a newbie who’s not entirely sure how the whole community “breathes”, I’m constantly afraid of posting something so bad that my inviter has to feel bad about me. And they only gave me an invite because I saw on Mastodon that they are a user here; we have little interaction otherwise.

                                                1. 5

                                                  Thanks for your explanation!

                                                  But if he got banned, I’d be suspended or put on probation or something.

                                                  I like this approach for taking responsibility for downtree users.

                                                  1. 5

                                                    Suspend future invitations for some time.

                                                  2. 1

                                                    I think leaning into the social network/web of trust is a good idea. It may also be useful to express beliefs and confidences about users you did not invite.

                                                    What reward you get to balance the risk ventured, I don’t know. Maybe just the knowledge that you’re helping the anti-spam network.

                                                    1. 1

                                                      This is exactly the mechanics I was thinking about. The punishments and rules of it would need to be hashed out, but you are spot on with the idea.

                                                  3. 9

                                                    I think this will just make the spammers much more difficult to notice as they will create dormant accounts waiting for the timer to expire while maybe posting a low-effort comment here and there.

                                                    This is certainly possible, but it is not what I have actually seen in my experience. One kind of abuse I deal with is fraudulent activity in my billing system. One surprising aspect of these behaviors is how impatient the people engaging in them are. Instead of waiting for the ‘right’ opportunity, they go where there is an immediate ‘return’ on their effort. One explanation is that this reduces the evidence of the behavior, lowering legal risk. I think part of it is simply high time preference, however.

                                                    It is true that there are innovations in parasitism (e.g., spamming), just as there are innovations in productive and positive behaviors. What happened here was “innovative” in the sense that we had not seen it before. Since it provided an unearned benefit, the behavior was repeated until discovered and suppressed. That will happen again, perhaps even in the manner you articulate but probably in a more novel way–but that is going to be true every time we suppress an unwelcome behavior. Incremental suppression increases the cost of imposing on us–requiring more effort for the same reward is a feature, regardless of whether it fully eliminates the unproductive behavior or not.

                                                    1. 7

                                                      This also happens with SMTP servers, interestingly. One of Postfix’s most basic spam-prevention settings is just waiting a small amount of time at the beginning of each SMTP session and canning it if the client talks first. The server is meant to talk first, but most spambots are so impatient (because they have to spam as much as possible before they get blacklisted) that they send their HELO before the server has sent them anything.
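                                                      If I have the knob right, this is postscreen’s “pregreet” test; a minimal main.cf fragment might look like the following (values illustrative, and it assumes postscreen is enabled in master.cf):

```
# main.cf: penalize clients that talk before the server's greeting
postscreen_greet_wait = 6s
postscreen_greet_action = enforce
```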

                                                    2. 4

                                                      I wonder if it would be possible to use whichever comes first: 70 days, or some karma threshold (maybe the median user karma level?). New accounts that genuinely want to join the community and actively participate shouldn’t be tagged as potential spammers for over 2 months.

                                                      When I was invited 5 years ago, the person sending the invite was responsible for the new users that they invited and could lose their account/run into trouble if they abused the invite feature. It is my understanding that that was the main reason for showing the invite tree back then. Does anyone know if that policy has changed or if I just misremember the “good old times”?

                                                      1. 5

                                                        In practice “upstream” users have only been banned for downstream users’ indiscretions once or twice. It basically doesn’t happen.

                                                        1. 1

                                                          Off the top of my head, I only know of one user banned because of behavior by someone they invited, and they wrote the code to disable invites, so I don’t think there have been any since then.

                                                      2. 2

                                                        Expect owners of older accounts that aren’t used that much to start getting cash offers for them…

                                                      1. 7

                                                        Whenever I see a github bugtracker with half a dozen labels on each issue, I can’t help but think that they’ve reinvented Bugzilla’s Components, Bug states and resolutions, Flags, …

                                                        1. 2

                                                          There are a bunch of obvious useful features missing from github’s issues. GitLab is a bit better and supports sorting by priority at least.

                                                          It also allows you to define labels at the organisation rather than project level, which is potentially useful.

                                                          1. 3

                                                            I have said it before, but for what it is worth: I recommend using actual bug tracking software. Github is a source code repository hosting site. Separation of concerns and all that.

                                                            1. 3

                                                              Github issues are fine for small projects with simple and informal workflows, but I think if you need even a single label, then you need a real bugtracker.

                                                              Generally, if you find yourself implementing an ad hoc, unenforceable process, it may be time to look for tools that already implement it or allow implementing it in a machine-enforceable fashion.

                                                              “Beginner-friendly” and similar may be an exception, since they have no effect on the process and are only there to help new contributors pick things to work on.

                                                        1. 10

                                                          I’ve been thinking more and more of “leveling up to BSD”. I’m not bothered by this a lot, but JSON? JavaScript and JSON are my tools, I love them, but please not in a Linux system service. I might just be too old, but unless we move a lot of other tools to read and output JSON, I think we’ll have a mess where those system tools don’t work with each other.

                                                          Another thing for me is the desktop use case, as mentioned in other comments. My media files are too big to live on a stick, and even if the stick is the cloud, I’d probably still have to use the cloud anyway; since I’m a Linux user, it’s half expected of me to just mount the cloud-stick in /mnt and link to it. My work is all in git repos; when migrating systems I basically just need the dotfiles, once per system.

                                                          But on the other hand, the possibility to “hand off work” from desktop to laptop, to have my IDE plugins be totally personalized, to just resume browsing from phone to PC… Why stop the future? In some not-so-distant utopia, I’m walking around with the said stick and just renting nearby computation as I move, my work never pausing for such a peasant triviality as changing to a different computer.

                                                          In any case, I hope they don’t make a mess :)

                                                          1. 6

                                                            I’m not bothered by this a lot, but JSON? JavaScript and JSON are my tools, I love them, but please not in a Linux system service. I might just be too old, but unless we move a lot of other tools to read and output JSON, I think we’ll have a mess where those system tools don’t work with each other.

                                                            So you would prefer a bespoke text format where you need to write a parser out of sed, cut and awk instead of a structured text format that already is widely adopted and has tools like jq to deal with it?

                                                            Plenty of bad things can be said about JSON, for sure, but what would be a sensible alternative for storing structured information? TOML? Or even, gasp, YAML? Or maybe you’d be happier with GNOME2-style XML?

                                                            1. 9

                                                              No, I just meant that the traditionalist in me (the “I might just be too old” part hints at that) thinks we shouldn’t need jq available for a system service. You need this at boot time, and at boot time you don’t want to pull in tools beyond what is already there. Or at least that is how I thought of it. So this was not about JSON itself, just about a system service depending on yet another format.

                                                              1. 8

                                                                So you would prefer a bespoke text format where you need to write a parser out of sed, cut and awk instead of a structured text format that already is widely adopted and has tools like jq to deal with it?

                                                                conversely: it’s quite amazing that one can build a parser just out of a shell pipeline, isn’t it? :)

                                                                Or maybe you’d be happier with GNOME2-style XML?

                                                                as much as i dislike it, xml has nice properties given the right tools. i’ve used tdom recently and was pleasantly surprised. json for more complex usage is still full of hacks and afterthoughts, like $ref, json-schemas, etc. (xml was over designed from the start, though).

                                                                1. 5

                                                                  conversely: it’s quite amazing that one can build a parser just out of a shell pipeline, isn’t it?

                                                                  Those parsers tend to suck. What happens when your file format is tab-delimited, and someone injects a tab as contents into one of the fields? (the answer is that a lot of stuff breaks)
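
A tiny sketch of that failure mode (the field values here are made up):

```python
import json

# Two fields, one of which happens to contain a literal tab.
record = {"name": "Ada\tLovelace", "role": "admin"}

# Naive tab-separated serialization, as a cut/awk pipeline would consume it:
line = "\t".join(record.values())
fields = line.split("\t")

# The embedded tab silently splits one field into two.
assert len(record) == 2
assert len(fields) == 3

# A structured format keeps the field boundaries intact.
assert json.loads(json.dumps(record)) == record
```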

                                                                  1. 2

                                                                    what happens when i have an extra/missing delimiter in json? if one is lucky the parser tells you roughly where the position of the error is, ironically most of the time as line number :)

                                                                    still, this isn’t my point. the fact that you can create a makeshift parser by putting together some otherwise unrelated tools is still amazing and helpful, especially for manual tasks which are not going to be repeated often.

                                                              2. 6

                                                                Both FreeBSD and OpenBSD are a breath of fresh air with regards to documentation and simplicity compared to Linux, so they’re definitely worth checking out.

                                                                On the other hand, you might lose some convenience compared to Linux if you use proprietary software that has Linux support (e.g. Dropbox, Steam). It’s a bit like using Linux 20 years ago :-D

                                                                1. 3

                                                                  I’ve interacted with JSON in a hobby project and I’m seeing it as a format for external data feeds at work. In neither case is JS involved.

                                                                  JSON is basically structured text now, filling the same niche as XML.

                                                                  1. 3

                                                                    Yes, it’s super easy to use, I get that. That was not my point.

                                                                    1. 2

                                                                      OK. What form of structured text would be more appropriate, in your opinion?

                                                                      1. 3

                                                                        I’d prefer a format that supported comments. Probably toml, but there’s also some JSON variants that do.

                                                                        1. 1

                                                                          That’s a good point. I’d like to see an example of the proposed file; if it’s only a dozen entries, the key names may be enough to document the format.

                                                                1. 1

                                                                  I don’t think I would use any of these patterns :/

                                                                  All of these would be made clearer without the walrus and with a defaultdict and the syntax sugar for get and has. But in the general case where you do have to call a method, I think I’d still do something else.

                                                                  For the first few, I think I would have separate calls to has and get.

                                                                  I don’t know what I’d do for the case statement, but I don’t like it ;)

                                                                  In Julia I rely on constant propagation and just repeat my function calls knowing that they will really only be called once. If my function calls are too verbose I just add a lambda with a short name.

                                                                  1. 1

                                                                    The point of this post isn’t to promote a pattern. It’s to explain how the walrus operator works.

                                                                    Yes, defaultdict is a nicer solution. That’s beside the point. The author needed a way to show how the walrus operator can be used to gather some information and act on it at the same time. Sure, you could have made the code prettier with a defaultdict, but the dict isn’t the point of the exercise.

                                                                    A case statement is antithetical to Python’s design, but if you need one then this isn’t too bad a solution. I’d solve it differently myself, but again, the point of the article is to showcase how the walrus operator works so you can use and abuse it in your own code. It’s not a style guide.
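
For anyone who hasn’t seen it, a minimal sketch of the operator itself (the dict and keys here are made up, not from the article):

```python
data = {"config": "/etc/app.conf"}
found = []

# := binds a name and yields the value in one expression,
# so the lookup and the test share a single call.
if (path := data.get("config")) is not None:
    found.append(path)

# A missing key: the walrus still binds the name, the test just fails.
if (missing := data.get("cache")) is not None:
    found.append(missing)

assert found == ["/etc/app.conf"]
assert missing is None
```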

                                                                    1. 1

                                                                      the dict isn’t the point of the exercise.

                                                                      I tried to cover that complaint already: “But in the general case where you do have to call a method, I think I’d still do something else.”

                                                                      The point of this post isn’t to promote a pattern.

                                                                      I disagree. The article makes a bunch of normative claims about how the walrus operator should be used to “streamline” or deduplicate code.

                                                                  1. 9

                                                                    I love everything about this. What a great solution. :-)

                                                                    1. 4

                                                                      Yep it’s really nice somehow to see these amazing results!

                                                                      The sad thing is that an hour of an average engineer’s time is worth more on the open market than this TV, so rationally these efforts are rarely worth the time spent. Yet we do it anyway; there’s a less tangible recreational value in it.

                                                                      I’ve got a 10-year-old flaky TV myself; it’s probably not worth anything, but it’s good enough for me. Lately it had problems powering on properly, and a friend with the same model had the same issue. We fixed both our TVs by opening them up and, sure enough, finding the same cheap capacitor bulging like there’s no tomorrow. Procuring replacements and swapping it (and the rest of them while at it), combined with the time spent unmounting the TV from the wall and so on, must have far outweighed both its value and the price of a new TV. Yet there’s a rewarding feeling in fixing it and knowing I’ll be able to use it a couple of years longer.

                                                                      1. 7

                                                                        If the time did not displace working time, then there’s no loss of income for you from doing this.

                                                                        Electronics are also likely priced too cheaply because environmental and labour costs are discounted through poor living standards for the workers and lax environmental regulation. If we lived in a world with a flatter income distribution and better environmental controls then we’d probably reuse, repair and recycle a lot better.

                                                                      2. 2

                                                                        I want to have friends doing stuff like this!

                                                                        1. 0

                                                                          For me, it’s more at the level of “not bad”. Here’s the disappointing bit:

                                                                          It would be nice to apply the corrective filter to the whole screen instead of just a video playing in an application, but I couldn’t think of a way to do it.

                                                                          If the author had managed to get the filter into the TV’s firmware or something like that, I would be truly impressed.

                                                                          1. 4

                                                                            It would indeed be sweet to have the correction running all the time. An FPGA devboard with two HDMI ports could be a realistic solution here.

                                                                            However, I can’t even begin to imagine the toll such hardware hacking would take on my free time. Sometimes an “80% there” solution is good enough.

                                                                        1. -6

                                                                          I stopped reading this early on because I rejected the premise. The example I had in mind was chess: try to get into the top 5% on Chess.com or Lichess. “Isn’t that good”? Well, it’s about a 2150 or 2200 chess rating.

                                                                          1. 24

                                                                            I think bragging about not reading the article is not a good habit to encourage on lobsters, but you’ve also misread–and the clarification was in the second paragraph.

                                                                            The relevant comparison isn’t players on chess.com, it’s people who play chess, and that’s a larger group. For instance, I’ve played games against my daughter in the past year, and against a friend or two within the past few years, but I’m not active on chess.com.

                                                                            Similarly, I’m about 50th percentile among people who play Go tournaments (maybe a little lower even, I can’t remember), but I’m well above average, at least for players in the US. (I don’t know what the distribution is like in China/Japan/Korea: they have tons of strong players, but also millions of players overall.) I don’t know if I’m at the 95th percentile, but I’m definitely not near the 50th.

                                                                            1. 2

                                                                              I skimmed the article, and it’s a rambling mess. There’s some nuggets there but they’re really hard to sift.

                                                                              I liked the author’s coinage(?) of the word “ridiculable”.

                                                                              It would generally be considered absurd to operate a complex software system without metrics or tracing, but it’s normal to operate yourself without metrics or tracing, even though you’re much more complex and harder to understand than the software you work on.

                                                                              This is a good observation, slightly marred by the existence of a plethora of products designed to track employees’ every move on screen. I guess a programmer interested in improving their own productivity could get an evaluation license for this kind of software.

                                                                              Also this link looks interesting:

                                                                              1. 5

                                                                                I wonder if it’s the game analogy that’s giving people trouble, because it immediately clicked for me. My game of choice is different – Magic: The Gathering, which first came out right as I was a teenager likely to be able to pick it up and enjoy it – but my experience with it absolutely lines up with the article.

                                                                                There’s a very large population of people who play Magic. And reaching 95th percentile within that population is something literally anyone could do by putting in the work. It seems like a high bar, but it really isn’t, because even things like reading a few introductory articles on competitive strategy and practicing what you learn from that will quickly advance you past the average kitchen-table Magic playgroup. Not that much more effort will put you up to the level of being able to win at a typical Friday-night tournament in a local game shop. And at that point you are undeniably going to be 95th percentile, if not higher!

                                                                                Even within the specifically competitive-focused subset of the Magic-playing population I think this holds up. Within the last couple years a new digital version of the game (called “Arena”) has come out and been promoted heavily, and it has competitive play with a ladder of ranks and tiers. It’s attracted a fair number of streamers who are new to the game, and again it seems that anyone willing to put in some effort and practice can start consistently reaching the higher ranks, which again put them into the 95th percentile or higher of Magic players, and even of that specific subset who play on Arena.

                                                                                Though I think some of the problem here is also perspective: people won’t compare themselves to the general population, or even to the subset who do things like go to tournaments or participate in ranked play on Arena, where it would be clear just how low the skill-level bar of 95th-percentile really is. Instead they compare themselves to the population of established elite professional players, and see a huge skill gulf between themselves and the pros and draw the wrong conclusion. Being only 1% as good as a top-level pro (assuming we could quantify that) does not mean being only in the 1st-percentile of all players, simply because the pros are such a microscopically tiny subset of a very large population, but people often think about it in those terms.

                                                                                1. 1

                                                                                  I guess it’s only applicable to fields where casual and enjoyable participation is possible.

                                                                                  On the one hand, we have fields like algebraic geometry that you cannot participate in without extensive preparation. For someone at high-school math level, it will take years even to start understanding the papers. Even then, you are only ready to begin doing any research of your own.

                                                                                  On the other hand, we have fields where, until some point of proficiency, it doesn’t matter whether you are better than N% of participants. There are many people (mostly kids) who play the violin. You can get better than most of them just by learning not to tune the strings to diminished fifths. It will still take years of dedicated practice before anyone genuinely wants to listen to your playing, though.

                                                                                2. 3

                                                                                  The existence of personal tracking software doesn’t imply that it is not “normal to operate yourself without metrics or tracing”. A fairly small proportion of the population uses such software.

                                                                                  1. 2

                                                                                    I’m thinking of software that allows an employer to track how much time their employees spend in different windows and applications, so they can take action against “incorrect” behavior.

                                                                                3. 0

                                                                                  But 2200 is top 5% of people who have ever played chess online, including those who played only 4 or 5 games and so on. I read it as top 5% on the participation metric “has played chess online before”.

                                                                                4. 12

                                                                                  This is explicitly addressed in the beginning of the article:

                                                                                  Note that when I say 95%-ile, I mean 95%-ile among people who participate, not all people (for many activities, just doing it at all makes you 99%-ile or above across all people). I’m also not referring to 95%-ile among people who practice regularly. The “one weird trick” is that, for a lot of activities, being something like 10%-ile among people who practice can make you something like 90%-ile or 99%-ile among people who participate.

                                                                                  1. -1

                                                                                    But this is for people who participate. 2200 is top 5% of people who have ever played more than, say, 5 or 10 games of chess online.

                                                                                    1. 9

                                                                                      This is triply incorrect and once misleading.

                                                                                      First, many chess players never play online. I’d even guess that most don’t, so that is not the correct population to compare to.

                                                                                      Second, chess.com’s displayed percentiles are not for every player who’s ever played, only for active players. That change was made a number of years ago, before this chart was made.

                                                                                      Third, if you look at that chart, top 5% among active players is roughly 1600 on chess.com, not 2150 or 2200.

                                                                                      Fourth, when you say it’s an X chess rating without qualification, I think this would imply to people in the U.S. that this is a FIDE or USCF rating. 1600 on chess.com from when that table was made converts to 1500 USCF and, again, that’s an overestimate because that’s only active players on chess.com which is going to be overweighted towards players who have put more time in.

                                                                                      Your stated number, 2200, is in the top 0.2% of active chess.com rapid players. 2200 must come from lichess blitz ratings. At the top of their blitz ratings graph, it notes that it’s for players active this week, so that also has the incorrectness mentioned above. Additionally, it’s well known that lichess generally has inflated ratings and blitz is particularly inflated even for lichess. It is extremely misleading to say that top 5% is “2200 chess rating” when referring to lichess blitz ratings.

                                                                                      Even if you look at people with USCF ratings, which is a tiny subset of the people who have played or play chess in the U.S. (roughly 85k USCF players, out of probably over 100M people who have played in the U.S.; that chart is old and has 65k but the distribution shouldn’t be wildly different), top 5% is still only 2000 USCF. “2200 chess rating”, as you put it, is someone roughly in the top 1000 USCF. Across all U.S. players, even accounting for strong players who don’t maintain a USCF rating, that’s probably at least the top 0.001%.

                                                                                  2. 3

                                                                                    I got to around 1900 in lichess classical in about a year, starting from scratch with no specific effort but a lot of play time.

                                                                                    https://lichess.org/@/acham/perf/classical

                                                                                    My greatest victory is against a 2314 rated player.

                                                                                    In school I was about 90th percentile, so in general I think that for a lot of tasks, with practice, you just slot into where your intelligence level is, with some deviation for the quality and amount of practice.

                                                                                    I think looking at the graph of all time rating of players is really fun.

                                                                                  1. 5

                                                                                    I wonder why the Labour (centre-left) government put the apology in The Telegraph (a pretty right-wing paper).

                                                                                    Also, I am very glad this happened. I went to the University of Manchester, where Turing worked for a while, and although we had some celebrations of him, he wasn’t mentioned all that much in the CS department and I know his work was talked down within the department after he died. I can only assume that he faced discrimination within the department while he was alive, too.

                                                                                    On the bright side, the Maths department wisely named their whole building after him in 2007, two years before Gordon Brown issued an official apology. The city also features a nice statue of Turing in a park next to the gay village and every year academics from the university leave flowers there on the memorial of his death.

                                                                                    1. 5

                                                                                      My first attempt at recording myself coding is using ffmpeg -video_size 3840x2160 -framerate 2 -f x11grab -i :0.0 output.mp4 to record two frames per second at my native screen resolution. Any improvements or other ideas?

                                                                                      1. 7

                                                                                        I’ve found OBS Studio painless for desktop recordings, but it sounds like you’re set already.

                                                                                        1. 3

                                                                                          If most of your work happens in the terminal you might use script or asciinema. They result in smaller recording sizes.

                                                                                          1. 2

                                                                                            If it works for you, it’s probably not worth changing software.

                                                                                          1. 5

                                                                                            Can anyone explain why Clear Linux is consistently winning on benchmarks over Fedora/Ubuntu?

                                                                                            https://clearlinux.org/ says “Highly tuned for Intel platforms where all optimization is turned on by default.” - is that what this boils down to (-O3 -mtune= all the way, Gentoo style), or are the developers doing other clever things in the kernel/desktop env/etc?

                                                                                            1. 8

                                                                                              Have you seen any benchmarks other than the ones done by Phoronix? They seem to be the only ones talking about the distro…

                                                                                              1. 3

                                                                                                You could try it yourself; it’s just an ISO you can download. My experience matches up pretty closely with the Phoronix benchmarks, but package support is severely lacking, so I don’t use Clear anymore.

                                                                                              2. 4

                                                                                                It’s a combination of compiler flags like the ones you mentioned and setting the CPU governor to “performance”.

                                                                                                It also sprinkles in several other minor optimizations, but those two get you 95% of the way there and can be done on any source-based distro.
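                                                                                                As a sketch of those two knobs on a source-based distro (the sysfs path is the standard Linux cpufreq interface; the flags mirror the ones mentioned above and are illustrative, not a verified copy of Clear Linux’s settings):

                                                                                                ```shell
                                                                                                # Pin every core's frequency governor to "performance" (needs root).
                                                                                                set_governor() {
                                                                                                    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
                                                                                                        echo performance > "$g"
                                                                                                    done
                                                                                                }
                                                                                                # Gentoo-style global compiler flags, e.g. in /etc/portage/make.conf:
                                                                                                #   COMMON_FLAGS="-O3 -march=native -pipe"
                                                                                                #   CFLAGS="${COMMON_FLAGS}"
                                                                                                #   CXXFLAGS="${COMMON_FLAGS}"
                                                                                                ```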

                                                                                                1. 2

                                                                                                  Aren’t they testing it using an AMD CPU?

                                                                                                  1. 2

                                                                                                    Yes, but perhaps it’s only important that they’re compiling for modern CPUs?

                                                                                                    Looks like they’re not compiling with ICC, or performance on AMD would likely be worse than Ubuntu’s GCC or Clang builds.

                                                                                                    1. 1

                                                                                                      AMD also makes CPUs for Intel platforms. In fact, that’s probably what they are most known for.

                                                                                                      1. 1

                                                                                                        Are you talking about x64 (AKA x86-64)?

                                                                                                        1. 1

                                                                                                          Yes, which in an ironic twist I call amd64, to separate it from Intel’s IA-64.

                                                                                                          1. 1

                                                                                                            So that’s not really an “Intel platform”… unless you were using the term to refer to the x86 line.

                                                                                                            1. 1

                                                                                                              Which is clearly how it was used in the context we’re discussing.

                                                                                                              1. 1

                                                                                                                Clear to you, yes.

                                                                                                    2. 2

                                                                                                      Copying part of a comment [1] from the article:

                                                                                                      “well it is worthwhile to have a look at their github repo – there is more ongoing, e.g. plenty of patches adding AVX support to certain packages.”

                                                                                                      [1] - https://www.phoronix.com/forums/forum/phoronix/latest-phoronix-articles/1157948-even-with-a-199-laptop-clear-linux-can-offer-superior-performance-to-fedora-or-ubuntu

                                                                                                      1. 1

                                                                                                        I don’t know about their Intel optimizations, but if that’s all it is, it would be interesting to see how Clear Linux compares with a mainstream distribution whose kernel has been compiled with the same optimization flags.

                                                                                                      1. 1

                                                                                                        Can I test ES6 or TypeScript modules easily with this?

                                                                                                        And how do you keep the test time low? Do you run all your tests on command+B or somehow only the tests for the file/function you’re working on?

                                                                                                        1. 1

                                                                                                          Test times are low because Baretest loads into memory quickly, and I typically use test.only() to run only the tests for the code I’m actively developing.

                                                                                                        1. 2

                                                                                                          Drew says that they remember the transitive dependency graph for most of their projects and that they personally know the maintainers of most of those dependencies.

                                                                                                          I think that is very rarely true for me and my projects. I wonder if they include dev-time-only dependencies? Because those are big for me, especially in JS.

                                                                                                          Large runtime dependencies include BLAS, LAPACK, GDAL, GEOS, Mapbox GL (and transitively WebGL engines, etc.), Mithril.js, Leaflet, R Shiny, dataframe and plotting libraries in R and Julia, and SQLite.

                                                                                                          I wonder if Drew just depends on a lot less stuff because of the kind of work they do, or if they’re just really good at writing everything they need from scratch, or if they’re just a lot better at networking ;)

                                                                                                          1. 1

                                                                                                            Recording my screen as I write a program. Then reviewing the footage and seeing how I could have written the program faster.

                                                                                                            Is there software that allows you to live-stream programming but blurs out API-key-shaped stuff?

                                                                                                            1. 3

                                                                                                              There’s a VS Code plugin called Cloak that does this. I haven’t used it though.

                                                                                                              1. 1

                                                                                                                Note that there’s no need to stream the video for this exercise.

                                                                                                                That said, you could write a little script for your editor or terminal, I suppose. Probably easier to just manually pause/unpause the stream, tho.

                                                                                                                1. 2

                                                                                                                  Right. That was more a thought inspired by the article than a comment on the article itself.

                                                                                                                  The problem is, of course, that I might accidentally reveal keys.

                                                                                                                  1. 2

                                                                                                                    You can make OBS capture only a specific window. I choose a small Xephyr (i.e. nested X session) window so I know exactly what people will see and what they won’t.

                                                                                                                    I have seen people with special plugins that block/blur every window except whitelisted ones.

                                                                                                                    Other people have hotkeys which swap the stream to a static image.
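                                                                                                                    The Xephyr approach could be sketched like this (the display number, geometry, and function name are arbitrary choices; assumes Xephyr and an X11 session):

                                                                                                                    ```shell
                                                                                                                    # Start a nested X server and launch only what viewers should see in it;
                                                                                                                    # OBS's window capture is then pointed at the Xephyr window.
                                                                                                                    start_capture_sandbox() {
                                                                                                                        Xephyr :2 -screen 1280x720 &    # nested X server shown as one window
                                                                                                                        sleep 1                         # give it a moment to come up
                                                                                                                        DISPLAY=:2 xterm &              # only clients on :2 are visible
                                                                                                                    }
                                                                                                                    ```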

                                                                                                              1. 3

                                                                                                                15/25 handouts and 7/19 lectures are spent on parsing, which seems – after a cursory glance – like a very unbalanced approach. I know parsing isn’t “solved”, but it seems like much of the current research and many of the interesting problems are in the later stages of the compiler. More advanced type systems, better code generation, etc.

                                                                                                                This is the Spring 2018 version, which doesn’t seem substantially different from the above linked 2012 version.

                                                                                                                1. 2

                                                                                                                  Can you recommend a course or text that covers these other topics better?

                                                                                                                  1. 3

                                                                                                                    Yeah, see, I don’t know. I’m still a Joe Schmoe newbie in this area. Things I’ve referenced in the past include:

                                                                                                                    Also, a bit of a shameless plug for my own (admittedly very simple) resource on bytecode compilers. With any luck I’ll have a series following the Ghuloum paper soon…

                                                                                                                    1. 2

                                                                                                                      Appel’s Tiger book is in SML (not OCaml).

                                                                                                                      I didn’t like it very much because it’s using ml-yacc.

                                                                                                                      1. 1

                                                                                                                        Ahhhh yes, you’re right. Well, you could substitute OCaml and Menhir!

                                                                                                                        1. 1

                                                                                                                          Or a recursive descent parser.

                                                                                                                    2. 3

                                                                                                                      Ooh, and how could I forget munificent’s book?

                                                                                                                      1. 2

                                                                                                                        This is another good resource, but more about making fast runtimes.

                                                                                                                        1. 2

                                                                                                                          Thanks a lot for this link! It provides a lot of useful knowledge I’d been seeking for a while!

                                                                                                                        2. 2

                                                                                                                          The slides seem to be insufficient to understand the topics – especially the parsing part. I’d hope there are lecture notes available for actual students.

                                                                                                                          In general these kinds of courses seem to spend

                                                                                                                          • 80% of their time telling people “how to do X”
                                                                                                                          • maybe 15% of the time on “why do X?”
                                                                                                                          • rarely more than 5% on “should you even do X?”

                                                                                                                          It’s kinda sad, but reflects the issues of this profession very well.

                                                                                                                          Even parsing itself, which is the least interesting and least important part of a compiler course, doesn’t ask the important question of “should we build languages that give us this much trouble to parse?”.

                                                                                                                          A popular example is “let’s assume we use <> for generics, what could possibly go wrong (ignoring the 20 years of examples of it going wrong)?”.

                                                                                                                        1. 3

                                                                                                                          This author shows that for one website their nginx config needed two files, each with 24+ lines, much of which has to be generated with other tools. The author doesn’t mention that nginx then requires the website to be enabled by symlinking it into the magic /etc/nginx/sites-enabled/ directory.

                                                                                                                          In contrast, the author shows that their Caddy config is a single file covering two websites, with fewer than 24 lines of config.

                                                                                                                          This was what prompted me to switch to Caddy from nginx four years ago. I have about forty websites at any given time running on my machine. Coming from nginx, I found having Caddyfile blocks within a single config file refreshing. My entire config file for all my websites is just 342 lines (many server blocks are just 7 lines of config). For me it was great not having to wrangle a hundred nginx config files and type ln -s dozens of times.
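                                                                                                                          For illustration, server blocks of the kind described might look like this (domains and paths invented; this is Caddy v2 syntax, which postdates the switch described above):

                                                                                                                          ```caddyfile
                                                                                                                          example.com {
                                                                                                                              root * /var/www/example.com
                                                                                                                              file_server
                                                                                                                              encode gzip
                                                                                                                          }

                                                                                                                          blog.example.com {
                                                                                                                              reverse_proxy localhost:8080
                                                                                                                          }
                                                                                                                          ```

                                                                                                                          TLS certificates are provisioned automatically for each hostname, which is part of why the blocks stay so short.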

                                                                                                                          1. 3

                                                                                                                            You can have everything in one file for nginx too.

                                                                                                                            1. 2

                                                                                                                              The compact, well documented and easy to read config is also the main reason I use caddy.

                                                                                                                              Unlike other commenters, I also found it trivial to compile my own caddy for commercial use.

                                                                                                                              1. 2

                                                                                                                                It’s also fairly easy to automate (infrastructure-as-code) the compilation of custom Caddy builds. Here’s a personal FreeBSD Port that does so. As written it only supports the add-ons that I use, but it would be easy to extend. It also predates the FreeBSD Ports tree’s support for Go modules.

                                                                                                                              2. 2

                                                                                                                                I thought sites-enabled was just a Debian thing, not Nginx itself?